WorldWideScience

Sample records for sampling estimate comparisons

  1. Comparison of chlorzoxazone one-sample methods to estimate CYP2E1 activity in humans

    DEFF Research Database (Denmark)

    Kramer, Iza; Dalhoff, Kim; Clemmesen, Jens O

    2003-01-01

    OBJECTIVE: Comparison of a one-sample with a multi-sample method (the metabolic fractional clearance) to estimate CYP2E1 activity in humans. METHODS: Healthy, male Caucasians (n=19) were included. The multi-sample fractional clearance (Cl(fe)) of chlorzoxazone was compared with one-time-point clearance estimation (Cl(est)) at 3, 4, 5 and 6 h. Furthermore, the metabolite/drug ratios (MRs) estimated from one-time-point samples at 1, 2, 3, 4, 5 and 6 h were compared with Cl(fe). RESULTS: The concordance between Cl(est) and Cl(fe) was highest at 6 h. The minimal mean prediction error (MPE) of Cl… estimates, Cl(est) at 3 h or 6 h, and MR at 3 h, can serve as reliable markers of CYP2E1 activity. The one-sample clearance method is an accurate, renal function-independent measure of the intrinsic activity; it is simple to use and easily applicable to humans.

  2. Comparison of Four Estimators under sampling without Replacement

    African Journals Online (AJOL)

    The results were obtained using a program written in the Microsoft Visual C++ programming language. It was observed that the two-stage sampling estimator under unequal probabilities without replacement is always better than the other three estimators considered. Keywords: Unequal probability sampling, two-stage sampling, ...

  3. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
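
    A minimal random-walk Metropolis-Hastings sketch on a toy 2-D Gaussian posterior, the simplest of the three samplers compared above; the target, step size, and chain length are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Random-walk Metropolis-Hastings on a toy 2-D Gaussian posterior.
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
cov_inv = np.linalg.inv(cov)

def log_post(theta):
    return -0.5 * theta @ cov_inv @ theta  # unnormalised Gaussian log-density

n_steps, step = 20_000, 0.8
chain = np.empty((n_steps, 2))
theta, lp = np.zeros(2), log_post(np.zeros(2))
accepted = 0
for i in range(n_steps):
    prop = theta + step * rng.standard_normal(2)   # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # accept/reject step
        theta, lp = prop, lp_prop
        accepted += 1
    chain[i] = theta

burn = chain[n_steps // 4:]                        # discard burn-in
print("acceptance rate:", accepted / n_steps)
print("posterior mean:", burn.mean(axis=0))        # should approach (0, 0)
print("posterior cov:\n", np.cov(burn.T))          # should approach cov
```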

  4. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    Science.gov (United States)

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
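
    A toy illustration of why proxy-based stratification helps: block-level "deforestation" and a correlated cheap proxy layer (standing in for the MODIS-derived hotspots) are simulated, and the spread of simple random versus stratified estimates of the total is compared. All values are synthetic assumptions, not PRODES data.

```python
import numpy as np

# Compare simple random sampling (SRS) with proxy-stratified sampling of
# block totals; the skewed field mimics "few hotspots, many low blocks".
rng = np.random.default_rng(1)
n_blocks = 2000
truth = rng.gamma(shape=0.3, scale=50, size=n_blocks)  # skewed target field
proxy = truth + rng.normal(0, 5, n_blocks)             # cheap correlated layer

n = 200
def srs_total():
    s = rng.choice(n_blocks, n, replace=False)
    return n_blocks * truth[s].mean()

def stratified_total():
    # Three strata from proxy terciles, proportional allocation.
    edges = np.quantile(proxy, [1 / 3, 2 / 3])
    strata = np.digitize(proxy, edges)
    est = 0.0
    for h in range(3):
        idx = np.flatnonzero(strata == h)
        nh = max(2, round(n * len(idx) / n_blocks))
        s = rng.choice(idx, nh, replace=False)
        est += len(idx) * truth[s].mean()
    return est

reps = np.array([[srs_total(), stratified_total()] for _ in range(2000)])
print("true total:", truth.sum().round())
print("SRS        mean, sd:", reps[:, 0].mean().round(), reps[:, 0].std().round())
print("stratified mean, sd:", reps[:, 1].mean().round(), reps[:, 1].std().round())
```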

  5. Comparison of distance sampling estimates to a known population ...

    African Journals Online (AJOL)

    Line-transect sampling was used to obtain abundance estimates of an Ant-eating Chat Myrmecocichla formicivora population to compare these with the true size of the population. The population size was determined by a long-term banding study, and abundance estimates were obtained by surveying line transects.

  6. Finite Sample Comparison of Parametric, Semiparametric, and Wavelet Estimators of Fractional Integration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ø.; Frederiksen, Per Houmann

    2005-01-01

    In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all … the time domain parametric methods, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.

  7. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
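
    A sketch of the non-parametric idea on synthetic data: subsample a rain-rate series at a regular interval and measure the spread of the resulting accumulations against the fully sampled total. The intermittent gamma-distributed "rain rates" are an assumption for illustration, not radar data.

```python
import numpy as np

# Sampling-related uncertainty of a 30-day accumulation when the series is
# visited only every dt hours; every phase offset gives one realization.
rng = np.random.default_rng(2)
hours = 30 * 24
rain = rng.gamma(0.08, 2.0, hours)           # intermittent, skewed "rain rates"

def sampled_total(dt, offset):
    s = rain[offset::dt]
    return s.mean() * hours                   # scale sample mean to 30 days

full = rain.sum()
for dt in (1, 3, 6, 12):
    ests = np.array([sampled_total(dt, k) for k in range(dt)])
    rel_rmse = np.sqrt(((ests - full) ** 2).mean()) / full
    print(f"dt={dt:2d} h  relative sampling error ~ {rel_rmse:.1%}")
```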

  8. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    Science.gov (United States)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has gone into monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. [Figure: comparison of proportions and variance across sample intervals, using bootstrap sampling to achieve equal n (each trial sampled at n=100, 10,000 times, and averaged to obtain an estimate for each sample interval); dashed lines represent values from the one-minute dataset.]

  9. Parameter sampling capabilities of sequential and simultaneous data assimilation: I. Analytical comparison

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess the parameter sampling capabilities of some Bayesian, ensemble-based, joint state-parameter (JS) estimation methods. The forward model is assumed to be non-chaotic and have nonlinear components, and the emphasis is on results obtained for the parameters in the state-parameter vector. A variety of approximate sampling methods exist, and a number of numerical comparisons between such methods have been performed. Often, more than one of the defining characteristics varies from one method to another, so it can be difficult to point out which characteristic of the more successful method in such a comparison was decisive. In this study, we single out one defining characteristic for comparison: whether data are assimilated sequentially or simultaneously. The current paper is concerned with analytical investigations into this issue. We carefully select one sequential and one simultaneous JS method for the comparison. We also design a corresponding pair of pure parameter estimation methods, and we show how the JS methods and the parameter estimation methods are pairwise related. It is shown that the sequential and the simultaneous parameter estimation methods are equivalent for one particular combination of observations with different degrees of nonlinearity. Strong indications are presented for why one may expect the sequential parameter estimation method to outperform the simultaneous parameter estimation method for all other combinations of observations. Finally, the conditions for when similar relations can be expected to hold between the corresponding JS methods are discussed. A companion paper, part II (Fossum and Mannseth 2014 Inverse Problems 30 114003), is concerned with statistical analysis of results from a range of numerical experiments involving sequential and simultaneous JS estimation, where the design of the numerical investigation is motivated by our findings in the current paper. (paper)

  10. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
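
    A small illustration of the underlying difficulty: when the dimension p approaches the sample size n, the plug-in log-determinant of the sample covariance is badly biased low, while a shrinkage estimator such as Ledoit-Wolf (one kind of proposal in this literature, not necessarily among the eight methods compared in the paper) remains usable.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Log-determinant of a high-dimensional covariance: plug-in vs. shrinkage.
rng = np.random.default_rng(3)
p, n = 80, 100
true_cov = np.diag(np.linspace(0.5, 2.0, p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

_, logdet_true = np.linalg.slogdet(true_cov)
_, logdet_sample = np.linalg.slogdet(np.cov(X, rowvar=False))
lw = LedoitWolf().fit(X)
_, logdet_lw = np.linalg.slogdet(lw.covariance_)

print("true        log|Sigma|:", round(logdet_true, 1))
print("sample      log|S|    :", round(logdet_sample, 1))  # biased far low
print("Ledoit-Wolf log|S_lw| :", round(logdet_lw, 1))
```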

  11. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99, collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for variance estimation by replication. Simple random sampling with sample sizes (462 to 561) gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we obtained moderate variance with sample size (467). In jackknife with systematic sampling, we obtained a variance of the regression estimator greater than that of the ratio estimator for sample sizes (467 to 631). At sample size (952) the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.
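
    For reference, the ratio and regression estimators named above have simple forms; a sketch with synthetic income (y) and household size (x) data, not the HIES survey itself. The auxiliary mean is assumed known from the sampling frame.

```python
import numpy as np

# Ratio and regression estimators of the population mean of y using an
# auxiliary variable x with known population mean X_bar.
rng = np.random.default_rng(4)
N = 10_000
x = rng.poisson(6, N) + 1.0
y = 1500 * x + rng.normal(0, 2500, N)          # income loosely tied to size
X_bar = x.mean()                                # known from the census frame

s = rng.choice(N, 400, replace=False)           # simple random sample
xs, ys = x[s], y[s]

mean_srs = ys.mean()                            # plain expansion estimator
mean_ratio = ys.mean() / xs.mean() * X_bar      # ratio estimator
b = np.cov(xs, ys)[0, 1] / xs.var(ddof=1)
mean_reg = ys.mean() + b * (X_bar - xs.mean())  # regression estimator

print("true mean :", y.mean().round(1))
print("SRS       :", mean_srs.round(1))
print("ratio     :", mean_ratio.round(1))
print("regression:", mean_reg.round(1))
```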

  12. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

    Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.

  13. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.

  14. Comparison of prevalence estimation of Mycobacterium avium subsp. paratuberculosis infection by sampling slaughtered cattle with macroscopic lesions vs. systematic sampling.

    Science.gov (United States)

    Elze, J; Liebler-Tenorio, E; Ziller, M; Köhler, H

    2013-07-01

    The objective of this study was to identify the most reliable approach for prevalence estimation of Mycobacterium avium ssp. paratuberculosis (MAP) infection in clinically healthy slaughtered cattle. Sampling of macroscopically suspect tissue was compared to systematic sampling. Specimens of ileum, jejunum, mesenteric and caecal lymph nodes were examined for MAP infection using bacterial microscopy, culture, histopathology and immunohistochemistry. MAP was found most frequently in caecal lymph nodes, but sampling more tissues optimized the detection rate. Examination by culture was most efficient while combination with histopathology increased the detection rate slightly. MAP was detected in 49/50 animals with macroscopic lesions representing 1.35% of the slaughtered cattle examined. Of 150 systematically sampled macroscopically non-suspect cows, 28.7% were infected with MAP. This indicates that the majority of MAP-positive cattle are slaughtered without evidence of macroscopic lesions and before clinical signs occur. For reliable prevalence estimation of MAP infection in slaughtered cattle, systematic random sampling is essential.

  15. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied for health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
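
    A minimal sketch of the standard sample-size formula for a proportion, with an optional finite-population correction; the inputs are illustrative assumptions.

```python
import math

# n = z^2 * p * (1 - p) / d^2, optionally corrected for a finite population.
def n_for_proportion(p_expected, margin, z=1.96, population=None):
    n = z ** 2 * p_expected * (1 - p_expected) / margin ** 2
    if population is not None:                  # finite-population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(n_for_proportion(0.30, 0.05))                    # 323
print(n_for_proportion(0.30, 0.03))                    # 897: tighter margin
print(n_for_proportion(0.30, 0.05, population=2000))   # smaller after FPC
```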

  16. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    Science.gov (United States)

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
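
    The Hodges-Lehmann step-trend estimator is the median of all pairwise differences between observations after and before the change point; below is a sketch of a seasonal variant on synthetic monthly data, an illustration of the idea rather than Hirsch's exact formulation.

```python
import numpy as np

# Hodges-Lehmann step-trend estimate, applied season by season.
rng = np.random.default_rng(5)

def hodges_lehmann(after, before):
    # Median of all pairwise differences (after_i - before_j).
    return np.median(np.subtract.outer(after, before))

months = np.tile(np.arange(12), 6)              # six years of monthly data
y = 10 + np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, 72)
y[36:] -= 1.5                                    # true step of -1.5 after year 3

per_season = [hodges_lehmann(y[36:][months[36:] == m],
                             y[:36][months[:36] == m]) for m in range(12)]
print("seasonal HL step estimate:", np.median(per_season).round(2))
```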

  17. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
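
    A sketch of one of the small-sample approaches compared above: Box-Cox transform, mean ± 2 SD on the transformed scale, limits back-transformed, shown beside the naive percentile estimate. The skewed "creatinine-like" values are simulated, not the study's data.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Parametric reference interval after Box-Cox transformation, n = 27.
rng = np.random.default_rng(6)
x = rng.lognormal(mean=4.6, sigma=0.25, size=27)    # small, skewed sample

z, lam = stats.boxcox(x)                             # transform + fitted lambda
lo = z.mean() - 2 * z.std(ddof=1)
hi = z.mean() + 2 * z.std(ddof=1)
print("Box-Cox interval     :",
      inv_boxcox(lo, lam).round(1), "-", inv_boxcox(hi, lam).round(1))
print("raw 2.5/97.5 pctiles :", np.percentile(x, [2.5, 97.5]).round(1))
```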

  1. Interval estimation methods for the mean in small-sample situations and a comparison of the results

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation for the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the sample-mean intervals are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
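
    A minimal Bayesian bootstrap sketch for the mean of a very small sample (n = 5, illustrative values): each replicate reweights the observations with flat Dirichlet weights instead of resampling them.

```python
import numpy as np

# Bayesian bootstrap interval for a mean with very few observations.
rng = np.random.default_rng(7)
sample = np.array([9.8, 10.4, 10.1, 9.6, 10.3])   # illustrative values

B = 10_000
weights = rng.dirichlet(np.ones(sample.size), size=B)  # (B, n) weight draws
means = weights @ sample                                # one mean per replicate
lo, hi = np.percentile(means, [2.5, 97.5])
print(f"Bayesian bootstrap 95% interval for the mean: ({lo:.2f}, {hi:.2f})")
```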

  2. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    …“blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research…

  3. An inter-lab comparison of the determination of radionuclides in soil samples by γ-spectrometry

    International Nuclear Information System (INIS)

    Pan Jingquan; Zhang Shurong; Xu Cuihua

    1986-01-01

    The results of an inter-lab comparison of the quantitative determination of radionuclides in two soil samples and in an imitated one used as a standard reference material by direct γ-spectrometry are presented and discussed. The methods of preparation of the three samples, their homogeneity, and the procedures used in this inter-lab comparison are also described. Fifteen laboratories in China participated in this program. The contents of the main radionuclides in the samples were estimated by statistical treatment of the reported data. More than 91% of these laboratories obtained mean values with relative standard deviation below 20%, and in 88% of them the average values were within the range of the standard reference values with deviation less than 10%. Statistical analysis showed that random error might be underestimated or systematic error might exist in a few laboratories.

  4. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

    Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies to estimate crown biomass and evaluate the effect of sample size in estimating crown biomass. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions: Use of …

  5. Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.

    Science.gov (United States)

    Auerbach, Benjamin M; Ruff, Christopher B

    2004-12-01

    In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.

  6. Comparison of surface fractal dimensions of chromizing coating and P110 steel for corrosion resistance estimation

    International Nuclear Information System (INIS)

    Lin, Naiming; Guo, Junwen; Xie, Faqin; Zou, Jiaojuan; Tian, Wei; Yao, Xiaofei; Zhang, Hongyan; Tang, Bin

    2014-01-01

    Highlights: • Continuous chromizing coating was synthesized on P110 steel by pack cementation. • The chromizing coating showed better corrosion resistance. • Comparison of surface fractal dimensions can estimate corrosion resistance. - Abstract: In the field of corrosion research, mass gain/loss, electrochemical tests, and comparison of surface elemental distributions, phase constitutions, and surface morphologies before and after corrosion are extensively applied to investigate the corrosion behavior or estimate the corrosion resistance of materials operated in various environments. Most of these methods are problem oriented, complex, and time-consuming. From an object-oriented point of view, however, the corroded surfaces of materials often show a self-similar characteristic: a fractal property which can be employed to achieve efficient analysis of damaged surfaces. The present work describes a strategy of comparing surface fractal dimensions for corrosion resistance estimation: a chromizing coating was synthesized on the P110 steel surface via pack cementation to improve its performance. Scanning electron microscopy (SEM) was used to investigate the surface morphologies of the original and corroded samples. Surface fractal dimensions of the examined samples were calculated, using a box-counting algorithm, from binary images derived from the SEM images of surface morphologies. The results showed that both the surface morphology and the surface fractal dimension of P110 steel varied greatly before and after the corrosion test, whereas those of the chromizing coating changed only slightly. The chromizing coating showed better corrosion resistance than P110 steel. Comparison of the surface fractal dimensions of original and corroded samples can rapidly and accurately estimate corrosion resistance.

  7. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
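
    A sketch of an encounter-rate variance estimator of the kind discussed, treating lines as a simple random sample (an "R2"-type form, on synthetic counts and line lengths; not necessarily the exact estimator the paper recommends).

```python
import numpy as np

# Encounter-rate variance for line-transect data, lines treated as an SRS.
rng = np.random.default_rng(8)
K = 20
L = rng.uniform(1.0, 3.0, K)                 # line lengths (km)
n = rng.poisson(4 * L)                        # detections per line

Ltot = L.sum()
r = n.sum() / Ltot                            # pooled encounter rate n/L
var_r = K / (Ltot ** 2 * (K - 1)) * np.sum(L ** 2 * (n / L - r) ** 2)
print(f"encounter rate {r:.2f} per km, SE {np.sqrt(var_r):.3f}")
```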

  8. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
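
    The point estimate and a delta-method interval follow directly from N = M/P; a sketch with illustrative numbers and an assumed design effect for the respondent-driven sampling survey.

```python
import numpy as np

# Multiplier-method population size estimate with a log-scale delta-method CI.
M = 6000                   # unique objects distributed (illustrative)
n, hits = 400, 120         # survey size and reported receipts (illustrative)
DE = 2.0                   # assumed design effect of the RDS survey

P = hits / n
N_hat = M / P
var_P = DE * P * (1 - P) / n
se_logN = np.sqrt(var_P) / P            # Var(log N) ~ Var(P) / P^2
lo = N_hat * np.exp(-1.96 * se_logN)
hi = N_hat * np.exp(+1.96 * se_logN)
print(f"N = {N_hat:.0f}, 95% CI ({lo:.0f}, {hi:.0f})")
```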

  9. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).

  10. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference test for data with the design of one factor with two levels, including sample size estimation formulas and realization based on the formulas and the POWER procedure of SAS software for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will play a leading role for researchers to implement the repetition principle during the research design phase.
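
    The same calculation the article performs with the SAS POWER procedure can be sketched in Python: per-group sample size for a one-factor, two-level comparison of means, assuming an effect size (Cohen's d) of 0.5 for illustration.

```python
from statsmodels.stats.power import TTestIndPower

# Per-group n for a two-sample t-test at alpha = 0.05 and 80% power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"n per group: {n_per_group:.1f}")   # about 64 per group for d = 0.5
```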

  11. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Science.gov (United States)

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  12. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.

  13. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  14. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
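
    The minimum-variance combination under multivariate normality has a closed form, with weights proportional to the rows of the inverse covariance; a sketch with illustrative eigenvalue estimates and covariance values standing in for the Monte Carlo sample statistics.

```python
import numpy as np

# GLS / maximum-likelihood combination of correlated estimates:
# weights = Sigma^{-1} 1 / (1' Sigma^{-1} 1), variance = 1 / (1' Sigma^{-1} 1).
y = np.array([1.002, 0.998, 1.005])               # correlated k-eff estimates
Sigma = np.array([[4.0, 2.0, 1.0],
                  [2.0, 5.0, 2.0],
                  [1.0, 2.0, 6.0]]) * 1e-6        # their covariance (assumed)

ones = np.ones_like(y)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w                                      # weights sum to one
est = w @ y
var = 1.0 / (ones @ np.linalg.solve(Sigma, ones))
print(f"combined estimate {est:.5f}, SE {np.sqrt(var):.2e}")
```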

  15. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    The Scanning Electron Microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has gained extensive attention since its emergence. However, the acquired micrographs remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching, and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D-printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state of the art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.

  16. Comparison of T-Square, Point Centered Quarter, and N-Tree Sampling Methods in Pittosporum undulatum Invaded Woodlands

    Directory of Open Access Journals (Sweden)

    Lurdes Borges Silva

    2017-01-01

    Tree density is an important parameter affecting ecosystem function and management decisions, while tree distribution patterns affect sampling design. Pittosporum undulatum stands in the Azores are being targeted with a biomass valorization program, for which efficient tree density estimators are required. We compared T-Square sampling, the Point Centered Quarter Method (PCQM), and N-tree sampling with benchmark quadrat (QD) sampling in six 900 m² plots established in P. undulatum stands on São Miguel Island. A total of 15 estimators were tested using a data resampling approach. The estimated density range (344–5056 trees/ha) was found to agree with previous studies using PCQM only. Although with a tendency to underestimate tree density (in comparison with QD), overall, T-Square sampling appeared to be the most accurate and precise method, followed by PCQM. The tree distribution pattern was found to be slightly aggregated in 4 of the 6 stands. Considering (1) the low level of bias and high precision, (2) the consistency among three estimators, (3) the possibility of use with aggregated patterns, and (4) the possibility of obtaining a larger number of independent tree parameter estimates, we recommend the use of T-Square sampling in P. undulatum stands within the framework of a biomass valorization program.
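
    A sketch of a PCQM-style density estimate in its classical Cottam-Curtis form (density = 1 / mean point-to-tree distance squared), on synthetic quarter distances; an illustration of one plotless estimator, not the paper's full set of 15.

```python
import numpy as np

# Point-centred quarter method: at each sample point, record the distance to
# the nearest tree in each of four quadrants, then invert the squared mean.
rng = np.random.default_rng(9)
n_points = 25
dists = rng.gamma(4.0, 0.5, size=(n_points, 4))  # 4 quarter distances/point, m

mean_d = dists.mean()                             # mean distance (m)
density_per_m2 = 1.0 / mean_d ** 2                # trees per square metre
print(f"estimated density: {density_per_m2 * 1e4:.0f} trees/ha")
```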

  17. Peer influence on students' estimates of performance: social comparison in clinical rotations.

    Science.gov (United States)

    Raat, A N Janet; Kuks, Jan B M; van Hell, E Ally; Cohen-Schotanus, Janke

    2013-02-01

    During clinical rotations, students move from one clinical situation to another. Questions exist about students' strategies for coping with these transitions. These strategies may include a process of social comparison because in this context it offers the student an opportunity to estimate his or her abilities to master a novel rotation. These estimates are relevant for learning and performance because they are related to self-efficacy. We investigated whether student estimates of their own future performance are influenced by the performance level and gender of the peer with whom the student compares him- or herself. We designed an experimental study in which participating students (n = 321) were divided into groups assigned to 12 different conditions. Each condition entailed a written comparison situation in which a peer student had completed the rotation the participant was required to undertake next. Differences between conditions were determined by the performance level (worse, similar or better) and gender of the comparison peer. The overall grade achieved by the comparison peer remained the same in all conditions. We asked participants to estimate their own future performance in that novel rotation. Differences between their estimates were analysed using analysis of variance (ANOVA). Students' estimates of their future performance were highest when the comparison peer was presented as performing less well and lowest when the comparison peer was presented as performing better (p < …). Social comparison thus influences students' estimates of their future performance in a novel rotation. The effect depends on the performance level and gender of the comparison peer. This indicates that comparisons against particular peers may strengthen or diminish a student's self-efficacy, which, in turn, may ease or hamper the student's learning during clinical rotations. The study is limited by its experimental design. Future research should focus on students' comparison behaviour in real transitions.

  18. Comparison of density estimators. [Estimation of probability density functions]

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)

  19. Estimation of spatial geo-stress components in rock samples by using the Kaiser effect of acoustic emission

    International Nuclear Information System (INIS)

    Kanagawa, Tadashi; Hayashi, Masao; Nakasa, Hiroyasu.

    1976-01-01

    The spatial residual stress components of rock core samples are experimentally obtained by using the Kaiser effect of acoustic emission (AE), and the estimated ground pressure is compared with the natural ground pressure measured by the conventional over-coring method, in order to assess the feasibility of the AE method. In these AE experiments, 111 specimens were cut out in all directions from rock cores (tuff) sampled at a place where the ground pressure had been measured by the over-coring method, and the AE generated under load was measured. Thereby, the stress components in three directions are determined. As a result of the comparison, the AE method proved effective enough to estimate the ground pressure of rock geodynamically. In applying the Kaiser effect to the estimation of geo-stress in rock samples, one of the most difficult problems is how to eliminate erroneous AE signals caused by the strong stress concentration at the end corners of the rock specimen. The comparison showed that values obtained by the AE method tend to be greater than those obtained by the over-coring method. It is conceived that the AE method can easily detect the maximum stress value over a geohistorically long time, and that stress concentration introduced by boring is apt to contaminate the AE measurements. (Iwakiri, K.)

  1. An empirical comparison of isolate-based and sample-based definitions of antimicrobial resistance and their effect on estimates of prevalence.

    Science.gov (United States)

    Humphry, R W; Evans, J; Webster, C; Tongue, S C; Innocent, G T; Gunn, G J

    2018-02-01

    Antimicrobial resistance is primarily a problem in human medicine but there are unquantified links of transmission in both directions between animal and human populations. Quantitative assessment of the costs and benefits of reduced antimicrobial usage in livestock requires robust quantification of transmission of resistance between animals, the environment and the human population. This in turn requires appropriate measurement of resistance. To tackle this we selected two different methods for determining whether a sample is resistant - one based on screening a sample, the other on testing individual isolates. Our overall objective was to explore the differences arising from choice of measurement. A literature search demonstrated the widespread use of testing of individual isolates. The first aim of this study was to compare, quantitatively, sample level and isolate level screening. Cattle or sheep faecal samples (n=41) submitted for routine parasitology were tested for antimicrobial resistance in two ways: (1) "streak" direct culture onto plates containing the antimicrobial of interest; (2) determination of minimum inhibitory concentration (MIC) of 8-10 isolates per sample compared to published MIC thresholds. Two antibiotics (ampicillin and nalidixic acid) were tested. With ampicillin, direct culture resulted in more than double the number of resistant samples than the MIC method based on eight individual isolates. The second aim of this study was to demonstrate the utility of the observed relationship between these two measures of antimicrobial resistance to re-estimate the prevalence of antimicrobial resistance from a previous study, in which we had used "streak" cultures. Boot-strap methods were used to estimate the proportion of samples that would have tested resistant in the historic study, had we used the isolate-based MIC method instead. Our boot-strap results indicate that our estimates of prevalence of antimicrobial resistance would have been

  2. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of a small sample, traditional probability theory is ill-suited to parameter estimation for small samples, and the Bayes Bootstrap method is commonly used in practice. The Bayes Bootstrap method, however, has limitations of its own. This article presents an improvement to the Bayes Bootstrap method that enlarges the sample by numerical simulation without altering the character of the original small sample, so that the new method can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to specific small-sample problems, demonstrating the effectiveness and practicability of the improved Bootstrap method.
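
    For concreteness, the Bayesian bootstrap itself (Rubin's 1981 version, which methods of this kind build on) fits in a few lines; the data values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small sample (hypothetical values)
x = np.array([9.8, 10.4, 9.5, 10.9, 10.1])

# Bayesian bootstrap (Rubin, 1981): draw Dirichlet(1, ..., 1) weights over the
# observed values and compute the weighted mean in each replicate.
n_rep = 20_000
weights = rng.dirichlet(np.ones(len(x)), size=n_rep)
boot_means = weights @ x

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"interval estimate for the mean: ({lo:.2f}, {hi:.2f})")
```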

  3. A geostatistical estimation of zinc grade in bore-core samples

    International Nuclear Information System (INIS)

    Starzec, A.

    1987-01-01

    Possibilities and preliminary results of geostatistical interpretation of the XRF determination of zinc in bore-core samples are considered. For the spherical model of the variogram, the estimation variance of the grade in a disk-shaped sample (estimated from the grade measured on the circumference of the sample) is calculated. Variograms of zinc grade in core samples are presented and examples of the grade estimation are discussed. 4 refs., 7 figs., 1 tab. (author)

  4. Comparison and assessment of aerial and ground estimates of waterbird colonies

    Science.gov (United States)

    Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.

    2008-01-01

    Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for overall estimated number of breeding pairs and for both dark and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, estimates of nesting pairs made from ground and helicopter surveys were very similar for all species we observed. Ground counts by a single observer underestimated the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft. We recommend managers employ ground counts using double observers for surveying waterbird colonies
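
    The marked-subsample correction at the heart of these ground counts is simple: a known number of nests is marked before the count, and the fraction re-found estimates detectability. A minimal sketch with hypothetical numbers (the counts and the normal-approximation interval are illustrative, not the study's):

```python
import math

marked_total = 50    # nests marked before the count (hypothetical)
marked_found = 40    # marked nests re-found during the count
raw_count = 620      # total nests counted by the observer

detection_rate = marked_found / marked_total    # estimated visibility, here 0.80
corrected_count = raw_count / detection_rate    # bias-corrected colony count

# Binomial standard error on the detection rate, propagated by the delta method
se_p = math.sqrt(detection_rate * (1 - detection_rate) / marked_total)
se_count = raw_count * se_p / detection_rate**2
print(f"corrected count: {corrected_count:.0f} +/- {1.96 * se_count:.0f} (95% CI)")
```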

  5. Poisson sampling - The adjusted and unadjusted estimator revisited

    Science.gov (United States)

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u", is shown to be incorrect. Some well-known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...

  6. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In recent decades, an increasing number of studies have analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100%. Given that most previous
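
    The method-of-moments (Matheron) estimator referred to above is straightforward to compute; a minimal sketch on a simulated skewed field, where the plot size, sample size, and lag bins are illustrative choices rather than the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical throughfall measurements at random locations on a 50 m plot
coords = rng.uniform(0, 50, size=(150, 2))
values = rng.lognormal(mean=0.0, sigma=0.5, size=150)  # skewed, outlier-prone

# Matheron estimator: gamma(h) = (1 / 2N(h)) * sum over pairs in lag bin h of (zi - zj)^2
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sq = (values[:, None] - values[None, :]) ** 2
iu = np.triu_indices(len(values), k=1)                 # count each pair once
dists, sqdiffs = d[iu], sq[iu]

bins = np.linspace(0, 25, 11)                          # lags up to half the extent
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (dists >= lo) & (dists < hi)
    if mask.any():
        gamma = 0.5 * sqdiffs[mask].mean()
        print(f"lag {lo:4.1f}-{hi:4.1f} m: gamma = {gamma:.3f} (pairs = {mask.sum()})")
```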

  7. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  8. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
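
    The kind of computer-simulated experiment used here is easy to reproduce in outline: draw repeated small samples with and without a true effect, and count false positives (Type I) and missed effects (Type II). A minimal sketch; the effect size, distributions, and simulation counts are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def error_rates(n, effect, n_sim=10_000, alpha=0.05):
    """Approximate Type I and Type II error of a two-sample t-test at sample size n."""
    type1 = type2 = 0
    for _ in range(n_sim):
        control = rng.normal(0, 1, n)
        same = rng.normal(0, 1, n)          # no true effect: rejection is a Type I error
        shifted = rng.normal(effect, 1, n)  # true effect: non-rejection is a Type II error
        type1 += stats.ttest_ind(control, same).pvalue < alpha
        type2 += stats.ttest_ind(control, shifted).pvalue >= alpha
    return type1 / n_sim, type2 / n_sim

for n in (3, 6, 9):
    t1, t2 = error_rates(n, effect=1.5)
    print(f"n = {n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```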

  9. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, C. M.; Buchhave, P.; K. George, W.

    algorithms; sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude that consists of well-defined frequency content, which makes bias easy to spot. The idea...

  10. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High-frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  11. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduces methods for estimating sample size and testing power of difference tests for quantitative and qualitative data under the single-group, paired and crossover designs. Specifically, it presents the corresponding formulas, their realization both directly and through the POWER procedure of SAS software, and elaborates on them with examples, which will help researchers implement the repetition principle.
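
    As an illustration of the kind of formula such methods implement, the classical normal-approximation sample size for a two-sided, two-sample difference test of quantitative data is n per group = 2 * sigma^2 * (z_(1-alpha/2) + z_(1-beta))^2 / delta^2. A sketch of this textbook formula (not necessarily the exact procedure of the article):

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample difference test (normal approx.)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (sigma * (z_a + z_b) / delta) ** 2

def achieved_power(n, delta, sigma, alpha=0.05):
    """Testing power of the same test for a given per-group sample size n."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / (sigma * (2 / n) ** 0.5) - z_a)

print(f"n per group: {n_per_group(delta=5, sigma=10):.1f}")    # ~62.8, round up to 63
print(f"power at n = 63: {achieved_power(63, delta=5, sigma=10):.3f}")
```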

  12. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  13. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights.

    Directory of Open Access Journals (Sweden)

    Patrick Habecker

    Full Text Available Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or members of other stigmatized populations) via traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
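
    The classic scale-up estimator that this work improves upon combines two ratios: each respondent's personal network size c_i is scaled from counts of acquaintances in populations of known size, and the hidden-population size is then N * (sum of m_i) / (sum of c_i). A minimal sketch with simulated survey responses, where all population sizes and rates are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1_900_000                                  # total population (hypothetical)

# Sizes of subpopulations whose totals are known (hypothetical)
known_sizes = np.array([25_000, 60_000, 12_000, 40_000])

n_resp = 500
# y[i, k]: people respondent i knows in known group k (simulated; network size ~300)
y = rng.poisson(lam=known_sizes / N * 300, size=(n_resp, len(known_sizes)))
# m[i]: people respondent i knows in the hidden population (simulated)
m = rng.poisson(lam=0.15, size=n_resp)

# Killworth-style network-size estimate: c_i = N * sum_k y_ik / sum_k N_k
c = N * y.sum(axis=1) / known_sizes.sum()

# Classic network scale-up estimate of the hidden population size
hidden_size = N * m.sum() / c.sum()
print(f"estimated hidden population size: {hidden_size:,.0f}")
```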

  14. Estimation of creatinine in Urine sample by Jaffe's method

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Arunkumar, Suja; Sawant, Pramilla D.; Rao, B.B.

    2012-01-01

    In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample is one having all the voids collected during a 24-h period; however, as this is technically difficult, overnight urine samples collected by the workers are analyzed. These overnight urine samples are collected over 10-16 h, but in the absence of any specific information a 12-h duration is assumed and the observed results are corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz., measurement of specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is a final metabolic product of creatine phosphate in the body and is excreted at a steady rate by people with normally functioning kidneys. It is, therefore, often used as a normalization factor for estimating the duration of sample collection. The present study reports the chemical procedure standardized and its application to the estimation of creatinine in urine samples collected from occupational workers. The chemical procedure for estimation of creatinine in bioassay samples was standardized and applied successfully to bioassay samples collected from the workers. The creatinine excretion rate observed for these workers is lower than reported in the literature. Further work is in progress to generate a data bank of creatinine excretion rates for most of the workers and to study the variability in the creatinine coefficient for the same individual based on the analysis of samples collected for different durations.

  15. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random-systematic sample, an unbiased estimator o...

  16. Simultaneous small-sample comparisons in longitudinal or multi-endpoint trials using multiple marginal models

    DEFF Research Database (Denmark)

    Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A

    2018-01-01

    Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels...

  17. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
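
    For reference, approximations of the kind discussed in this line of work can be sketched as below. The constants follow Wan et al.'s formulas as best recalled here and should be verified against the paper before use in an actual meta-analysis:

```python
from scipy.stats import norm

def mean_sd_from_min_med_max(a, m, b, n):
    """Scenario where min (a), median (m), max (b) and sample size n are reported."""
    mean = (a + 2 * m + b) / 4
    xi = 2 * norm.ppf((n - 0.375) / (n + 0.25))   # expected range in SD units
    return mean, (b - a) / xi

def mean_sd_from_quartiles(q1, m, q3, n):
    """Scenario where first/third quartiles and the median are reported."""
    mean = (q1 + m + q3) / 3
    eta = 2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25))
    return mean, (q3 - q1) / eta

print(mean_sd_from_min_med_max(a=10, m=25, b=70, n=50))
print(mean_sd_from_quartiles(q1=18, m=25, q3=35, n=50))
```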

  18. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    Science.gov (United States)

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.

  19. Accuracy of the estimation of dental age in comparison with chronological age in a Spanish sample of 2641 living subjects using the Demirjian and Nolla methods.

    Science.gov (United States)

    Melo, María; Ata-Ali, Javier

    2017-01-01

    Age estimation is an important procedure in forensic medicine and is carried out for a number of reasons. For living persons, age estimation is performed in order to assess whether a child has attained the age of criminal responsibility, in scenarios involving rape, kidnapping or marriage, in premature births, adoption procedures, illegal immigration, pediatric endocrine diseases and orthodontic malocclusion, as well as in circumstances in which the birth certificate is not available or the records are suspect. According to data from the UNHCR (United Nations High Commissioner for Refugees), the number of people seeking refugee status has continued to increase in recent years, driven by the wars in Syria and Iraq, as well as by conflict and instability in Afghanistan, Eritrea and elsewhere. The objective of this study is to compare the accuracy of estimating dental age versus chronological age using the Nolla and Demirjian methods in a Spanish population. A final sample of 2641 panoramic X-rays corresponding to Spanish patients (1322 males and 1319 females) between 7 and 21 years of age was analyzed. Dental age was assessed using the Nolla and Demirjian methods, establishing comparisons with mean chronological age based on the Student t-test for paired samples, followed by the generation of a linear regression model. Both methods showed slight discrepancy between dental and chronological age. On examining the reproducibility of the Nolla and Demirjian methods, technical errors of 0.84% and 0.62%, respectively, were observed. On average, the Nolla method was found to estimate an age 0.213 years younger than the chronological age, while the Demirjian method estimated an age 0.853 years older than the chronological age. Linear combination of the mean Nolla and Demirjian estimates increased the predictive capacity to 99.2%. In conclusion, the Nolla and Demirjian methods were found to be accurate in estimating chronological age from dental age in a Spanish population. The error

  20. Prevalence of HIV among MSM in Europe: comparison of self-reported diagnoses from a large scale internet survey and existing national estimates

    Directory of Open Access Journals (Sweden)

    Marcus Ulrich

    2012-11-01

    Full Text Available Abstract Background Country-level comparison of HIV prevalence among men having sex with men (MSM) is challenging for a variety of reasons, including differences in the definition and measurement of the denominator group, recruitment strategies and the HIV detection methods. To assess their comparability, self-reported data on HIV diagnoses in a 2010 pan-European MSM internet survey (EMIS) were compared with pre-existing estimates of HIV prevalence in MSM from a variety of European countries. Methods The first pan-European survey of MSM recruited more than 180,000 men from 38 countries across Europe and included questions on the year and result of the last HIV test. HIV prevalence as measured in EMIS was compared with national estimates of HIV prevalence based on studies using biological measurements or modelling approaches to explore the degree of agreement between different methods. Existing estimates were taken from Dublin Declaration Monitoring Reports or UNAIDS country fact sheets, and were verified by contacting the nominated contact points for HIV surveillance in EU/EEA countries. Results The EMIS self-reported measurements of HIV prevalence were strongly correlated with existing estimates based on biological measurement and modelling studies using surveillance data (R2=0.70 resp. 0.72). In most countries HIV-positive MSM appeared disproportionately likely to participate in EMIS, and prevalences as measured in EMIS are approximately twice the existing estimates. Conclusions Comparison of diagnosed HIV prevalence as measured in EMIS with pre-existing estimates based on biological measurements using varied sampling frames (e.g. Respondent Driven Sampling, Time and Location Sampling) demonstrates a high correlation and suggests similar selection biases from both types of studies. For comparison with modelled estimates, the self-selection bias of the Internet survey with increased participation of men diagnosed with HIV has to be

  1. An online method for lithium-ion battery remaining useful life estimation using importance sampling and neural networks

    International Nuclear Information System (INIS)

    Wu, Ji; Zhang, Chenbin; Chen, Zonghai

    2016-01-01

    Highlights: • An online RUL estimation method for lithium-ion battery is proposed. • RUL is described by the difference among battery terminal voltage curves. • A feed forward neural network is employed for RUL estimation. • Importance sampling is utilized to select feed forward neural network inputs. - Abstract: An accurate battery remaining useful life (RUL) estimation can facilitate the design of a reliable battery system as well as the safety and reliability of actual operation. A reasonable definition and an effective prediction algorithm are indispensable for the achievement of an accurate RUL estimation result. In this paper, the analysis of battery terminal voltage curves at different cycle numbers during the charge process is utilized for the RUL definition. Moreover, the relationship between RUL and the charge curve is modelled by a feed forward neural network (FFNN), chosen for its simplicity and effectiveness. Considering the nonlinearity of the lithium-ion charge curve, importance sampling (IS) is employed for FFNN input selection. Based on these results, an online approach using FFNN and IS is presented to estimate lithium-ion battery RUL in this paper. Experiments and numerical comparisons are conducted to validate the proposed method. The results show that the FFNN with IS is an accurate estimation method for actual operation.

  2. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
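
    In outline, the sample-based confidence intervals reported above come from the usual estimator-variance calculation over the sampled squares. A simplified sketch with simulated sample squares, treating the systematic sample as if it were a simple random one (the study's actual calculation also accounts for covariance between sample units):

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical deforestation fractions in n systematically placed 10 km squares
squares = rng.beta(0.5, 20, size=60)

n = len(squares)
mean = squares.mean()
se = squares.std(ddof=1) / np.sqrt(n)   # standard error of the mean
half_ci = 1.96 * se                     # 95% confidence interval half-width

print(f"estimated deforestation fraction: {mean:.4f} +/- {half_ci:.4f}")
print(f"CI as a share of the estimate: {100 * half_ci / mean:.0f}%")
```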

  3. Multi-sample nonparametric treatments comparison in medical ...

    African Journals Online (AJOL)

    Multi-sample nonparametric treatments comparison in medical follow-up study with unequal observation processes through simulation and bladder tumour case study. P. L. Tan, N.A. Ibrahim, M.B. Adam, J. Arasan ...

  4. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvest plots spaced every 50 m, and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses

  5. Abundance, distribution and diversity of gelatinous predators along the northern Mid-Atlantic Ridge: A comparison of different sampling methodologies.

    Directory of Open Access Journals (Sweden)

    Aino Hosia

    Full Text Available The diversity and distribution of gelatinous zooplankton were investigated along the northern Mid-Atlantic Ridge (MAR) from June to August 2004. Here, we present results from macrozooplankton trawl sampling, as well as comparisons made between five different methodologies that were employed during the MAR-ECO survey. In total, 16 species of hydromedusae, 31 species of siphonophores and four species of scyphozoans were identified to species level from macrozooplankton trawl samples. Additional taxa were identified to higher taxonomic levels and a single ctenophore genus was observed. Samples were collected at 17 stations along the MAR between the Azores and Iceland. A divergence in the species assemblages was observed at the southern limit of the Subpolar Frontal Zone. The catch composition of gelatinous zooplankton is compared between different sampling methodologies including: a macrozooplankton trawl; a Multinet; a ringnet attached to bottom trawl; and optical platforms (Underwater Video Profiler (UVP) & Remotely Operated Vehicle (ROV)). Different sampling methodologies are shown to exhibit selectivity towards different groups of gelatinous zooplankton. Only ~21% of taxa caught during the survey were caught by both the macrozooplankton trawl and the Multinet when deployed at the same station. The estimates of gelatinous zooplankton abundance calculated using these two gear types also varied widely (1.4 ± 0.9 individuals 1000 m-3 estimated by the macrozooplankton trawl vs. 468.3 ± 315.4 individuals 1000 m-3 estimated by the Multinet (mean ± s.d.)) when used at the same stations (n = 6). While it appears that traditional net sampling can generate useful data on pelagic cnidarians, comparisons with results from the optical platforms suggest that ctenophore diversity and abundance are consistently underestimated, particularly when net sampling is conducted in combination with formalin fixation. The results emphasise the importance of considering

  6. Abundance, distribution and diversity of gelatinous predators along the northern Mid-Atlantic Ridge: A comparison of different sampling methodologies

    Science.gov (United States)

    Falkenhaug, Tone; Baxter, Emily J.

    2017-01-01

    The diversity and distribution of gelatinous zooplankton were investigated along the northern Mid-Atlantic Ridge (MAR) from June to August 2004. Here, we present results from macrozooplankton trawl sampling, as well as comparisons made between five different methodologies that were employed during the MAR-ECO survey. In total, 16 species of hydromedusae, 31 species of siphonophores and four species of scyphozoans were identified to species level from macrozooplankton trawl samples. Additional taxa were identified to higher taxonomic levels and a single ctenophore genus was observed. Samples were collected at 17 stations along the MAR between the Azores and Iceland. A divergence in the species assemblages was observed at the southern limit of the Subpolar Frontal Zone. The catch composition of gelatinous zooplankton is compared between different sampling methodologies including: a macrozooplankton trawl; a Multinet; a ringnet attached to bottom trawl; and optical platforms (Underwater Video Profiler (UVP) & Remotely Operated Vehicle (ROV)). Different sampling methodologies are shown to exhibit selectivity towards different groups of gelatinous zooplankton. Only ~21% of taxa caught during the survey were caught by both the macrozooplankton trawl and the Multinet when deployed at the same station. The estimates of gelatinous zooplankton abundance calculated using these two gear types also varied widely (1.4 ± 0.9 individuals 1000 m-3 estimated by the macrozooplankton trawl vs. 468.3 ± 315.4 individuals 1000 m-3 estimated by the Multinet (mean ± s.d.)) when used at the same stations (n = 6). While it appears that traditional net sampling can generate useful data on pelagic cnidarians, comparisons with results from the optical platforms suggest that ctenophore diversity and abundance are consistently underestimated, particularly when net sampling is conducted in combination with formalin fixation. The results emphasise the importance of considering sampling methodology

  7. Abundance, distribution and diversity of gelatinous predators along the northern Mid-Atlantic Ridge: A comparison of different sampling methodologies.

    Science.gov (United States)

    Hosia, Aino; Falkenhaug, Tone; Baxter, Emily J; Pagès, Francesc

    2017-01-01

    The diversity and distribution of gelatinous zooplankton were investigated along the northern Mid-Atlantic Ridge (MAR) from June to August 2004. Here, we present results from macrozooplankton trawl sampling, as well as comparisons made between five different methodologies that were employed during the MAR-ECO survey. In total, 16 species of hydromedusae, 31 species of siphonophores and four species of scyphozoans were identified to species level from macrozooplankton trawl samples. Additional taxa were identified to higher taxonomic levels and a single ctenophore genus was observed. Samples were collected at 17 stations along the MAR between the Azores and Iceland. A divergence in the species assemblages was observed at the southern limit of the Subpolar Frontal Zone. The catch composition of gelatinous zooplankton is compared between different sampling methodologies including: a macrozooplankton trawl; a Multinet; a ringnet attached to bottom trawl; and optical platforms (Underwater Video Profiler (UVP) & Remotely Operated Vehicle (ROV)). Different sampling methodologies are shown to exhibit selectivity towards different groups of gelatinous zooplankton. Only ~21% of taxa caught during the survey were caught by both the macrozooplankton trawl and the Multinet when deployed at the same station. The estimates of gelatinous zooplankton abundance calculated using these two gear types also varied widely (1.4 ± 0.9 individuals 1000 m-3 estimated by the macrozooplankton trawl vs. 468.3 ± 315.4 individuals 1000 m-3 estimated by the Multinet (mean ± s.d.)) when used at the same stations (n = 6). While it appears that traditional net sampling can generate useful data on pelagic cnidarians, comparisons with results from the optical platforms suggest that ctenophore diversity and abundance are consistently underestimated, particularly when net sampling is conducted in combination with formalin fixation. The results emphasise the importance of considering sampling methodology

  8. Estimating abundance of mountain lions from unstructured spatial sampling

    Science.gov (United States)

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and

  9. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. Proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
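
    The bias the authors correct for can be seen directly by simulation: stop sampling once the classification can no longer change, then compute the naive proportion. A minimal sketch, assuming the lot is classified acceptable once successes exceed the decision rule (a simplified reading of the semicurtailed design):

```python
import numpy as np

rng = np.random.default_rng(17)
n, d = 60, 33   # one of the field-tested designs: sample up to 60, decision rule 33

def semicurtailed_trial(p):
    """Sample one subject at a time; stop once successes exceed d (decision is fixed)."""
    successes = trials = 0
    while trials < n and successes <= d:
        successes += rng.random() < p
        trials += 1
    return successes, trials

true_coverage = 0.70
naive = [s / t for s, t in (semicurtailed_trial(true_coverage) for _ in range(50_000))]
print(f"true coverage: {true_coverage:.2f}")
print(f"mean naive estimate under curtailment: {np.mean(naive):.3f}")
```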

  10. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    Full Text Available This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. Firstly we have suggested a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. Later we have proposed a ratio type estimator and its properties are studied in simple random sampling. Numerically we have shown that the proposed class of estimators is more efficient than different known estimators including the Gupta and Shabbir (2008) estimator.
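
    The classical ratio estimator that this family of estimators builds on replaces the sample mean ybar with ybar * (Xbar / xbar), exploiting a known population mean Xbar of the auxiliary variable. A minimal sketch on a simulated finite population (the proposed class in the paper adds further terms; only the classical estimator is shown):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated finite population with auxiliary variable x correlated with y
N = 5_000
x_pop = rng.gamma(shape=4.0, scale=10.0, size=N)
y_pop = 2.5 * x_pop + rng.normal(0, 15, N)
X_bar = x_pop.mean()                        # population mean of x, assumed known

# Simple random sample without replacement
n = 100
idx = rng.choice(N, size=n, replace=False)
x, y = x_pop[idx], y_pop[idx]

y_bar_srs = y.mean()                        # usual sample-mean estimator
y_bar_ratio = y.mean() / x.mean() * X_bar   # classical ratio estimator

print(f"true mean      : {y_pop.mean():.2f}")
print(f"sample mean    : {y_bar_srs:.2f}")
print(f"ratio estimator: {y_bar_ratio:.2f}")
```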

  11. Multisensor sampling of pelagic ecosystem variables in a coastal environment to estimate zooplankton grazing impact

    Science.gov (United States)

    Sutton, Tracey; Hopkins, Thomas; Remsen, Andrew; Burghart, Scott

    2001-01-01

    Sampling was conducted on the west Florida continental shelf ecosystem modeling site to estimate zooplankton grazing impact on primary production. Samples were collected with the high-resolution sampler, a towed array bearing electronic and optical sensors operating in tandem with a paired net/bottle verification system. A close biological-physical coupling was observed, with three main plankton communities: 1. a high-density inshore community dominated by larvaceans coincident with a salinity gradient; 2. a low-density offshore community dominated by small calanoid copepods coincident with the warm mixed layer; and 3. a high-density offshore community dominated by small poecilostomatoid and cyclopoid copepods and ostracods coincident with cooler, sub-pycnocline oceanic water. Both high-density communities were associated with relatively turbid water. Applying available grazing rates from the literature to our abundance data, grazing pressure mirrored the above bio-physical pattern, with the offshore sub-pycnocline community contributing ~65% of grazing pressure despite representing only 19% of the total volume of the transect. This suggests that grazing pressure is highly localized, emphasizing the importance of high-resolution sampling to better understand plankton dynamics. A comparison of our grazing rate estimates with primary production estimates suggests that mesozooplankton do not control the fate of phytoplankton over much of the area studied (<5% grazing of daily primary production), but "hot spots" (~25-50% grazing) do occur which may have an effect on floral composition.

  12. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many bio-statistical applications and, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Bayesian Simultaneous Estimation for Means in k Sample Problems

    OpenAIRE

    Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay

    2017-01-01

    This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistic for testing the hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...

  14. Turbidity-controlled sampling for suspended sediment load estimation

    Science.gov (United States)

    Jack Lewis

    2003-01-01

    Abstract - Automated data collection is essential to effectively measure suspended sediment loads in storm events, particularly in small basins. Continuous turbidity measurements can be used, along with discharge, in an automated system that makes real-time sampling decisions to facilitate sediment load estimation. The Turbidity Threshold Sampling method distributes...

  15. Estimating mean change in population salt intake using spot urine samples.

    Science.gov (United States)

    Petersen, Kristina S; Wu, Jason H Y; Webster, Jacqui; Grimes, Carley; Woodward, Mark; Nowson, Caryl A; Neal, Bruce

    2017-10-01

    Spot urine samples are easier to collect than 24-h urine samples and have been used with estimating equations to derive the mean daily salt intake of a population. Whether equations using data from spot urine samples can also be used to estimate change in mean daily population salt intake over time is unknown. We compared estimates of change in mean daily population salt intake based upon 24-h urine collections with estimates derived using equations based on spot urine samples. Paired and unpaired 24-h urine samples and spot urine samples were collected from individuals in two Australian populations, in 2011 and 2014. Estimates of change in daily mean population salt intake between 2011 and 2014 were obtained directly from the 24-h urine samples and by applying established estimating equations (Kawasaki, Tanaka, Mage, Toft, INTERSALT) to the data from spot urine samples. Differences between 2011 and 2014 were calculated using mixed models. A total of 1000 participants provided a 24-h urine sample and a spot urine sample in 2011, and 1012 did so in 2014 (paired samples n = 870; unpaired samples n = 1142). The participants were community-dwelling individuals living in the State of Victoria or the town of Lithgow in the State of New South Wales, Australia, with a mean age of 55 years in 2011. The mean (95% confidence interval) difference in population salt intake between 2011 and 2014 determined from the 24-h urine samples was -0.48 g/day (-0.74 to -0.21; P < 0.001). The corresponding difference estimated from the spot urine samples was -0.24 g/day (-0.42 to -0.06; P = 0.01) using the Tanaka equation, -0.42 g/day (-0.70 to -0.13; P = 0.004) using the Kawasaki equation, -0.51 g/day (-1.00 to -0.01; P = 0.046) using the Mage equation, -0.26 g/day (-0.42 to -0.10; P = 0.001) using the Toft equation, -0.20 g/day (-0.32 to -0.09; P = 0.001) using the INTERSALT equation and -0.27 g/day (-0.39 to -0.15; P  0.058). Separate analysis of the unpaired and paired data showed that detection of

  16. Estimating waste disposal quantities from raw waste samples

    International Nuclear Information System (INIS)

    Negin, C.A.; Urland, C.S.; Hitz, C.G.; GPU Nuclear Corp., Middletown, PA)

    1985-01-01

    Estimating the disposal quantity of waste resulting from stabilization of radioactive sludge is complex because of the many factors relating to sample analysis results, radioactive decay, allowable disposal concentrations, and options for disposal containers. To facilitate this estimation, a microcomputer spreadsheet template was created. The spreadsheet has saved considerable engineering hours. 1 fig., 3 tabs

  17. Iterative importance sampling algorithms for parameter estimation

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
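
    Self-normalised importance sampling for a posterior expectation takes only a few lines; a toy sketch in which the model, proposal, and data are illustrative, not from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Toy inverse problem: infer theta from noisy observations y = theta + noise
y_obs = np.array([1.8, 2.2, 2.0, 2.4])
sigma = 0.5

def log_posterior(theta):
    log_prior = stats.norm.logpdf(theta, loc=0.0, scale=5.0)
    log_like = stats.norm.logpdf(y_obs[:, None], loc=theta, scale=sigma).sum(axis=0)
    return log_prior + log_like

# Importance sampling with a Gaussian proposal; draws are independent,
# so the method parallelises trivially (unlike MCMC).
proposal = stats.norm(loc=y_obs.mean(), scale=1.0)
theta = proposal.rvs(size=100_000, random_state=rng)
log_w = log_posterior(theta) - proposal.logpdf(theta)
w = np.exp(log_w - log_w.max())
w /= w.sum()                                # self-normalised weights

post_mean = np.sum(w * theta)
ess = 1.0 / np.sum(w ** 2)                  # effective sample size diagnostic
print(f"posterior mean ~ {post_mean:.3f}, effective sample size ~ {ess:.0f}")
```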

  18. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  19. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    Science.gov (United States)

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.

  20. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  1. Comparison of bias-corrected covariance estimators for MMRM analysis in longitudinal data with dropouts.

    Science.gov (United States)

    Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori

    2017-10-01

    In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis using the UN covariance. However, using the MMRM analysis with the UN covariance could lead to convergence problems for numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with a small sample size. We investigated the performance of the sandwich covariance estimator and covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134) fitting simpler covariance structures through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure without adjustment of the degrees of freedom, which is frequently used in applications ...

  2. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    Science.gov (United States)

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
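
    The nonparametric AUC at the heart of this record is the Mann-Whitney statistic: the probability that a randomly chosen diseased subject's marker exceeds a healthy subject's. A minimal sketch under simple random sampling (the paper's TDS weighting is not reproduced here); the marker distributions are invented.

```python
import numpy as np

def empirical_auc(cases, controls):
    """Mann-Whitney estimate of P(marker_case > marker_control), ties count 1/2."""
    x = np.asarray(cases)[:, None]
    y = np.asarray(controls)[None, :]
    return (x > y).mean() + 0.5 * (x == y).mean()

rng = np.random.default_rng(2)
cases = rng.normal(1.0, 1.0, 200)       # biomarker shifted upward in disease
controls = rng.normal(0.0, 1.0, 300)
print(f"AUC ~ {empirical_auc(cases, controls):.3f}")   # theory: ~0.76
```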

  3. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.

  4. Estimating population salt intake in India using spot urine samples.

    Science.gov (United States)

    Petersen, Kristina S; Johnson, Claire; Mohan, Sailesh; Rogers, Kris; Shivashankar, Roopa; Thout, Sudhir Raj; Gupta, Priti; He, Feng J; MacGregor, Graham A; Webster, Jacqui; Santos, Joseph Alvin; Krishnan, Anand; Maulik, Pallab K; Reddy, K Srinath; Gupta, Ruby; Prabhakaran, Dorairaj; Neal, Bruce

    2017-11-01

    To compare estimates of mean population salt intake in North and South India derived from spot urine samples versus 24-h urine collections. In a cross-sectional survey, participants were sampled from slum, urban and rural communities in North and in South India. Participants provided 24-h urine collections, and random morning spot urine samples. Salt intake was estimated from the spot urine samples using a series of established estimating equations. Salt intake data from the 24-h urine collections and spot urine equations were weighted to provide estimates of salt intake for Delhi and Haryana, and Andhra Pradesh. A total of 957 individuals provided a complete 24-h urine collection and a spot urine sample. Weighted mean salt intake based on the 24-h urine collection was 8.59 (95% confidence interval 7.73-9.45) and 9.46 g/day (8.95-9.96) in Delhi and Haryana, and Andhra Pradesh, respectively. Corresponding estimates based on the Tanaka equation [9.04 (8.63-9.45) and 9.79 g/day (9.62-9.96) for Delhi and Haryana, and Andhra Pradesh, respectively], the Mage equation [8.80 (7.67-9.94) and 10.19 g/day (95% CI 9.59-10.79)], the INTERSALT equation [7.99 (7.61-8.37) and 8.64 g/day (8.04-9.23)] and the INTERSALT equation with potassium [8.13 (7.74-8.52) and 8.81 g/day (8.16-9.46)] were all within 1 g/day of the estimate based upon 24-h collections. For the Toft equation, estimates were 1-2 g/day higher [9.94 (9.24-10.64) and 10.69 g/day (9.44-11.93)] and for the Kawasaki equation they were 3-4 g/day higher [12.14 (11.30-12.97) and 13.64 g/day (13.15-14.12)]. In urban and rural areas in North and South India, most spot urine-based equations provided reasonable estimates of mean population salt intake. Equations that did not provide good estimates may have failed because specimen collection was not aligned with the original method.
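
    For orientation, the sketch below implements the spot-urine workflow for one of the equations named above. The Tanaka coefficients are those commonly quoted in the literature and should be checked against the original publication before any real use; the input values are invented.

```python
def tanaka_salt_g_per_day(spot_na_meq_l, spot_cr_mg_dl, age_y, weight_kg, height_cm):
    """Estimated 24-h salt intake (g/day) from a spot urine sample, using
    coefficients commonly quoted for the Tanaka equation (verify before use)."""
    # Predicted 24-h urinary creatinine excretion (mg/day).
    pr_cr = -2.04 * age_y + 14.89 * weight_kg + 16.14 * height_cm - 2244.45
    # Estimated 24-h sodium excretion (mEq/day); spot Cr converted mg/dL -> mg/L.
    na_meq = 21.98 * (spot_na_meq_l / (spot_cr_mg_dl * 10.0) * pr_cr) ** 0.392
    return na_meq * 0.0585   # 1 mEq of sodium corresponds to ~58.5 mg of NaCl

print(f"{tanaka_salt_g_per_day(140, 120, 45, 60, 165):.1f} g/day")   # ~9 g/day
```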

  5. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than time-triggered sampling in some situations, especially in terms of network bandwidth. However, it cannot detect packet dropout situations because data transmission and reception do not use a periodic time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme called modified SOD in which sensor data are sent when either the change in sensor output exceeds a given threshold or the elapsed time exceeds a given interval. Through simulation results, we show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
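
    The modified send-on-delta rule is simple enough to state directly in code. A sketch of the transmission logic described above, with invented threshold and interval values:

```python
import numpy as np

def modified_sod_events(times, values, delta, max_interval):
    """Indices transmitted under modified SOD: send when the value moves by
    more than `delta` since the last send, or when `max_interval` elapses."""
    sent = [0]                                  # always send the first sample
    last_v, last_t = values[0], times[0]
    for i in range(1, len(times)):
        if abs(values[i] - last_v) > delta or times[i] - last_t >= max_interval:
            sent.append(i)
            last_v, last_t = values[i], times[i]
    return sent

rng = np.random.default_rng(3)
t = np.arange(0.0, 10.0, 0.01)                  # sensor sampled at 100 Hz
x = np.sin(t) + 0.02 * rng.standard_normal(t.size)
idx = modified_sod_events(t, x, delta=0.1, max_interval=1.0)
print(f"{len(idx)} of {t.size} samples transmitted")
```

    The time-out branch is what makes dropouts detectable: silence longer than max_interval at the estimator signals a lost packet rather than an unchanged sensor value.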

  6. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    2003-01-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be

  7. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    Science.gov (United States)

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of its complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine were done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information and calculated GFR for further processing and comparison. This is user-friendly software to calculate the GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis.
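
    Of the serum-creatinine methods named above, Cockcroft-Gault is the easiest to show. A sketch using the standard formula, with illustrative inputs (not medical advice):

```python
def cockcroft_gault_crcl(age_y, weight_kg, serum_cr_mg_dl, female):
    """Creatinine clearance (mL/min) by the standard Cockcroft-Gault formula."""
    crcl = (140 - age_y) * weight_kg / (72.0 * serum_cr_mg_dl)
    return 0.85 * crcl if female else crcl

print(f"{cockcroft_gault_crcl(60, 72, 1.1, female=False):.0f} mL/min")   # ~73
```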

  8. Spatially explicit population estimates for black bears based on cluster sampling

    Science.gov (United States)

    Humm, J.; McCown, J. Walter; Scheick, B.K.; Clark, Joseph D.

    2017-01-01

    We estimated abundance and density of the 5 major black bear (Ursus americanus) subpopulations (i.e., Eglin, Apalachicola, Osceola, Ocala-St. Johns, Big Cypress) in Florida, USA with spatially explicit capture-mark-recapture (SCR) by extracting DNA from hair samples collected at barbed-wire hair sampling sites. We employed a clustered sampling configuration with sampling sites arranged in 3 × 3 clusters spaced 2 km apart within each cluster and cluster centers spaced 16 km apart (center to center). We surveyed all 5 subpopulations encompassing 38,960 km2 during 2014 and 2015. Several landscape variables, most associated with forest cover, helped refine density estimates for the 5 subpopulations we sampled. Detection probabilities were affected by site-specific behavioral responses coupled with individual capture heterogeneity associated with sex. Model-averaged bear population estimates ranged from 120 (95% CI = 59–276) bears or a mean 0.025 bears/km2 (95% CI = 0.011–0.44) for the Eglin subpopulation to 1,198 bears (95% CI = 949–1,537) or 0.127 bears/km2 (95% CI = 0.101–0.163) for the Ocala-St. Johns subpopulation. The total population estimate for our 5 study areas was 3,916 bears (95% CI = 2,914–5,451). The clustered sampling method coupled with information on land cover was efficient and allowed us to estimate abundance across extensive areas that would not have been possible otherwise. Clustered sampling combined with spatially explicit capture-recapture methods has the potential to provide rigorous population estimates for a wide array of species that are extensive and heterogeneous in their distribution.

  9. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.

  10. The use of Thompson sampling to increase estimation precision

    NARCIS (Netherlands)

    Kaptein, M.C.

    2015-01-01

    In this article, we consider a sequential sampling scheme for efficient estimation of the difference between the means of two independent treatments when the population variances are unequal across groups. The sampling scheme proposed is based on a solution to bandit problems called Thompson sampling.
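
    The article's exact allocation rule is not given in this truncated record. As a hedged illustration of the idea only, the sketch below draws each group's variance from its posterior and assigns the next observation to the group where one more measurement most reduces the variance of the mean-difference estimate; in the long run this approximates Neyman allocation, with group sizes proportional to the standard deviations. All distributions and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sd = (0.0, 1.0), (1.0, 3.0)      # group 1 is three times noisier

obs = [list(rng.normal(mu[k], sd[k], size=3)) for k in (0, 1)]   # seed data

for _ in range(500):
    gains = []
    for k in (0, 1):
        x = np.asarray(obs[k])
        n, s2 = len(x), x.var(ddof=1)
        sigma2_draw = (n - 1) * s2 / rng.chisquare(n - 1)   # posterior draw
        # expected drop in Var(mean difference) from one more observation
        gains.append(sigma2_draw / (n * (n + 1)))
    k = int(np.argmax(gains))                               # Thompson step
    obs[k].append(rng.normal(mu[k], sd[k]))

n0, n1 = len(obs[0]), len(obs[1])
est_diff = np.mean(obs[1]) - np.mean(obs[0])
print(f"allocation ({n0}, {n1}), estimated difference {est_diff:.3f}")
```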

  11. Estimation of river and stream temperature trends under haphazard sampling

    Science.gov (United States)

    Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao

    2015-01-01

    Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.

  12. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
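
    The cumulative-detection model behind this debate is 1 - (1 - p)^n: the chance that n users uncover a problem each user hits with probability p. A quick sketch shows why 5 users suffice only for commonly encountered problems (the p values are illustrative):

```python
import math

def users_needed(p_detect, target):
    """Smallest n with 1 - (1 - p)^n >= target, the model behind the
    '5 users find 80% of problems' rule of thumb."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_detect))

# With the commonly cited p = 0.31 per user, 5 users reach ~84% detection...
print(users_needed(0.31, 0.80))   # -> 5
# ...but rarer problems (p = 0.10) need far larger samples, in line with
# the article's finding that the 5-user rule can underestimate badly.
print(users_needed(0.10, 0.80))   # -> 16
```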

  13. The use of EURACHEM guide for comparison of two 210Pb determination methods in solid environmental samples

    International Nuclear Information System (INIS)

    Al-Masri, M. S.; Hassan, M.; Amin, Y.

    2008-07-01

    Two techniques for the determination of 210Pb in solid environmental samples have been validated and compared according to the Eurachem Guide on method validation. The first technique depends on the determination of 210Po, which is in equilibrium with 210Pb, by plating it onto a rotating silver disc; alpha counting of 210Po is then performed with an alpha spectrometer. Alternatively, according to its decay scheme, 210Pb can be measured directly by gamma spectrometry through its 46.5 keV line. Detection limits, reproducibility and recovery coefficient were the main validation parameters. In addition, uncertainties of measurement were estimated and compared for the two techniques. The comparison results showed that the activity level of 210Pb in the environmental samples determines which technique is appropriate. It was found that the Eurachem Guide and comparison of statistical validation parameters can be a good tool for selecting the appropriate method for the application. (Authors)

  14. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  15. Estimating plume dispersion: a comparison of several sigma schemes

    International Nuclear Information System (INIS)

    Irwin, J.S.

    1983-01-01

    The lateral and vertical Gaussian plume dispersion parameters are estimated and compared with field tracer data collected at 11 sites. The dispersion parameter schemes used in this analysis include Cramer's scheme, suggested for tall stack dispersion estimates, Draxler's scheme, suggested for elevated and surface releases, Pasquill's scheme, suggested for interim use in dispersion estimates, and the Pasquill-Gifford scheme using Turner's technique for assigning stability categories. The schemes suggested by Cramer, Draxler and Pasquill estimate the dispersion parameters using onsite measurements of the vertical and lateral wind-velocity variances at the effective release height. The performances of these schemes in estimating the dispersion parameters are compared with that of the Pasquill-Gifford scheme, using the Prairie Grass and Karlsruhe data. For these two experiments, the estimates of the dispersion parameters using Draxler's scheme correlate better with the measurements than did estimates using the Pasquill-Gifford scheme. Comparison of the dispersion parameter estimates with the measurement suggests that Draxler's scheme for characterizing the dispersion results in the smallest mean fractional error in the estimated dispersion parameters and the smallest variance of the fractional errors
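
    All of the compared schemes feed the same place: the sigma-y and sigma-z parameters of the standard Gaussian plume formula. A sketch with illustrative inputs; in practice the sigma values come from a scheme such as Pasquill-Gifford or Draxler's, not the assumed constants used here.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Standard Gaussian plume concentration (g/m^3) at crosswind offset y and
    height z, for emission rate q (g/s), wind speed u (m/s), effective release
    height h (m), and dispersion parameters sigma_y, sigma_z (m)."""
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + h) / sigma_z) ** 2))   # ground reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration for an elevated release.
c = gaussian_plume(q=10.0, u=5.0, y=0.0, z=0.0, h=50.0, sigma_y=60.0, sigma_z=30.0)
print(f"{c:.2e} g/m^3")
```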

  16. Density meter algorithm and system for estimating sampling/mixing uncertainty

    International Nuclear Information System (INIS)

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses

  17. Comparison of the performances of the CS model coil and the Good Joint SULTAN sample

    International Nuclear Information System (INIS)

    Wesche, Rainer; Herzog, Robert; Bruzzone, Pierluigi

    2008-01-01

    The relevance of short sample measurements in SULTAN for the prediction of the performance of the coils of the International Thermonuclear Experimental Reactor (ITER) is assessed using the case of the Nb3Sn high-field central solenoid model coil (CSMC) conductor, for which both coil performance and short sample SULTAN results (Good Joint (GJ) sample) are available. A least-squares fit procedure, based on a uniform current distribution among the strands and the Durham scaling relations for the field, temperature and strain dependences of the strand Jc, provides a thermal strain of -0.294% and a degradation factor of approximately 60% for the GJ sample. In the calculation of the voltage along Layer 1A of the CSMC, the hoop stress and the variation of the magnetic field in the conductor cross-section were taken into account. The temperature profile used in the calculations is based on published temperature profiles and empirical relations between helium inlet and outlet temperatures. A comparison with the GJ results indicates that short sample measurements in SULTAN provide a conservative estimate of the coil performance.

  18. Comparison of Statistically Modeled Contaminated Soil Volume Estimates and Actual Excavation Volumes at the Maywood FUSRAP Site - 13555

    Energy Technology Data Exchange (ETDEWEB)

    Moore, James [U.S. Army Corps of Engineers - New York District 26 Federal Plaza, New York, New York 10278 (United States); Hays, David [U.S. Army Corps of Engineers - Kansas City District 601 E. 12th Street, Kansas City, Missouri 64106 (United States); Quinn, John; Johnson, Robert; Durham, Lisa [Argonne National Laboratory, Environmental Science Division 9700 S. Cass Ave., Argonne, Illinois 60439 (United States)

    2013-07-01

    As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)

  19. Comparison of estimation methods for fitting the Weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
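
    A minimal sketch of two estimation approaches such comparisons typically cover, maximum likelihood and a moment-matching alternative, run on synthetic diameter data; the true parameters below are invented.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as G

rng = np.random.default_rng(5)
dbh = stats.weibull_min.rvs(c=2.2, scale=18.0, size=400, random_state=rng)

# Maximum likelihood fit, location fixed at zero as is usual for diameters.
c_mle, _, scale_mle = stats.weibull_min.fit(dbh, floc=0)
print(f"MLE:     shape = {c_mle:.2f}, scale = {scale_mle:.2f}")

# Moment matching for comparison: the CV depends only on the shape, so solve
# for the shape on a grid, then back out the scale from the mean.
grid = np.linspace(0.5, 6.0, 2000)
cv = np.sqrt(G(1 + 2 / grid) - G(1 + 1 / grid) ** 2) / G(1 + 1 / grid)
c_mom = grid[np.argmin(np.abs(cv - dbh.std() / dbh.mean()))]
scale_mom = dbh.mean() / G(1 + 1 / c_mom)
print(f"Moments: shape = {c_mom:.2f}, scale = {scale_mom:.2f}")
```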

  1. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method for reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments is needed to characterise SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated with the 90% CI ratio of the estimated/measured SMCs (R2 = 0.49), and can therefore be used to gauge the sampling frequency needed to accurately estimate SMCs of pollutants.
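
    The two averages the study builds on are straightforward to compute; a sketch with invented event data:

```python
import numpy as np

def event_mean_concentration(conc, flow, dt):
    """Flow-weighted EMC for one event: total mass over total volume."""
    conc, flow = np.asarray(conc, float), np.asarray(flow, float)
    return np.sum(conc * flow * dt) / np.sum(flow * dt)

def site_mean_concentration(emcs, volumes):
    """SMC as the volume-weighted average of event EMCs, as in the study."""
    emcs, volumes = np.asarray(emcs, float), np.asarray(volumes, float)
    return np.sum(emcs * volumes) / np.sum(volumes)

# Hypothetical 1-min resolution event data (mg/L and L/s).
conc = [110, 240, 180, 90, 60]
flow = [5, 20, 35, 15, 5]
print(f"EMC = {event_mean_concentration(conc, flow, dt=60.0):.1f} mg/L")
print(f"SMC = {site_mean_concentration([150, 90, 210], [1e5, 4e5, 2e5]):.1f} mg/L")
```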

  2. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and a parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across the Azov-Black Seas localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and an understanding of the expected precision level of the mean. Given the superior performance of the BLB relative to formulae, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5 (with 1.6x and 1x the mean of the CI width), and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed towards low values; a sample size of 10 host individuals yielded unreliable estimates.
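
    For the analytic branch, a commonly used approximation (stated here as an assumption, not necessarily the paper's exact formula) sizes the host sample so that the relative standard error of a negative binomial mean meets a target precision D: n = (1/mean + 1/k) / D^2. Aggregation (small k) drives the required sample up sharply.

```python
import math

def nb_sample_size(mean_abund, k, precision):
    """Hosts needed so that SE/mean <= precision for negative binomial counts
    with mean `mean_abund` and aggregation parameter k (standard approximation
    from ecological sampling texts; an assumption here, not the paper's)."""
    return math.ceil((1.0 / mean_abund + 1.0 / k) / precision**2)

# Highly aggregated parasites (small k) demand much larger host samples.
print(nb_sample_size(mean_abund=12.0, k=0.5, precision=0.2))   # -> 53
print(nb_sample_size(mean_abund=12.0, k=5.0, precision=0.2))   # -> 8
```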

  3. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerance method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the single flight testing of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.

  4. Influence of Sampling Effort on the Estimated Richness of Road-Killed Vertebrate Wildlife

    Science.gov (United States)

    Bager, Alex; da Rosa, Clarissa A.

    2011-05-01

    Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling for different periods. In each period, the list of road-killed species was evaluated based on estimates of the community structure derived from weekly samplings, and by the presence of the ten species most subject to road mortality, and also of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.

  5. Model comparisons for estimating carbon emissions from North American wildland fire

    Science.gov (United States)

    Nancy H.F. French; William J. de Groot; Liza K. Jenkins; Brendan M. Rogers; Ernesto Alvarado; Brian Amiro; Bernardus De Jong; Scott Goetz; Elizabeth Hoy; Edward Hyer; Robert Keane; B.E. Law; Donald McKenzie; Steven G. McNulty; Roger Ottmar; Diego R. Perez-Salicrup; James Randerson; Kevin M. Robertson; Merritt. Turetsky

    2011-01-01

    Research activities focused on estimating the direct emissions of carbon from wildland fires across North America are reviewed as part of the North American Carbon Program disturbance synthesis. A comparison of methods to estimate the loss of carbon from the terrestrial biosphere to the atmosphere from wildland fires is presented. Published studies on emissions from...

  6. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    Science.gov (United States)

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Two-compartment, two-sample technique for accurate estimation of effective renal plasma flow: Theoretical development and comparison with other methods

    International Nuclear Information System (INIS)

    Lear, J.L.; Feyerabend, A.; Gregory, C.

    1989-01-01

    Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques

  8. Estimating rare events in biochemical systems using conditional sampling

    Science.gov (United States)

    Sundar, V. S.

    2017-01-01

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
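
    A compact sketch of standard subset simulation (in the spirit of Au and Beck) on a toy Gaussian response standing in for the biochemical model; the response function, threshold, and tuning values are illustrative assumptions, not the paper's setting. The rare-event probability emerges as a product of conditional factors of size p0.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)

def g(u):
    """Toy response; the rare event is g(u) > thresh."""
    return u.sum(axis=-1)

def subset_simulation(g, dim, thresh, n=2000, p0=0.1, step=1.0):
    u = rng.standard_normal((n, dim))
    y = g(u)
    prob = 1.0
    while True:
        order = np.argsort(y)[::-1]
        n_seed = int(p0 * n)
        level = y[order[n_seed - 1]]            # intermediate threshold
        if level >= thresh:                     # final level reached
            return prob * np.mean(y >= thresh)
        prob *= p0                              # one more conditional factor
        new_u, new_y = [], []
        for i in order[:n_seed]:                # seeds lie above the level
            cu, cy = u[i], y[i]
            for _ in range(int(1 / p0)):        # grow each Markov chain
                cand = cu + step * rng.standard_normal(dim)
                # modified Metropolis: component-wise accept w.r.t. N(0, 1)
                ratio = np.exp(-0.5 * (cand**2 - cu**2))
                trial = np.where(rng.random(dim) < ratio, cand, cu)
                ty = g(trial)
                if ty > level:                  # stay in the conditional set
                    cu, cy = trial, ty
                new_u.append(cu.copy())
                new_y.append(cy)
        u, y = np.asarray(new_u), np.asarray(new_y)

p_hat = subset_simulation(g, dim=10, thresh=12.0)
exact = 0.5 * (1.0 - erf(12.0 / sqrt(10.0) / sqrt(2.0)))   # ~7e-5
print(f"subset simulation {p_hat:.2e} vs exact {exact:.2e}")
```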

  9. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  10. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  11. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss crucial information about a control chart’s performance. Thus it is important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X-bar chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes the early false alarm, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X-bar charts with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X-bar chart with estimated process parameters, based on their specific purpose.
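
    For a chart with known parameters and a constant per-sample alarm probability, the run length is geometric and its percentiles are closed-form, which is enough to illustrate why MRL and percentiles add information beyond the ARL; the DS X-bar chart with estimated parameters requires numerical computation instead. The alarm rate below is the textbook 3-sigma value.

```python
import math

def rl_percentile(p, q):
    """Smallest n with P(RL <= n) = 1 - (1 - p)^n >= q for a geometric
    run length with per-sample alarm probability p."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p))

p = 0.0027                                     # in-control 3-sigma alarm rate
print(f"ARL = {1 / p:.0f}")                    # ~370
print(f"MRL = {rl_percentile(p, 0.5)}")        # ~257, well below the ARL
print(f"5th / 95th percentiles: {rl_percentile(p, 0.05)}, {rl_percentile(p, 0.95)}")
```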

  12. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  13. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
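
    The mechanics are easy to reproduce in miniature: lambda is the dominant eigenvalue of the projection matrix, and estimating a survival rate from few individuals biases lambda through Jensen's Inequality, exactly as the abstract argues. The two-stage matrix below is invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def lam(survival, fecundity):
    """Dominant eigenvalue (lambda) of a toy 2-stage projection matrix."""
    a = np.array([[0.0, fecundity],
                  [survival, 0.9]])            # adult survival fixed at 0.9
    return np.max(np.real(np.linalg.eigvals(a)))

true_s, fec = 0.5, 1.2
print(f"true lambda = {lam(true_s, fec):.3f}")

# Jensen's Inequality bias: estimate juvenile survival from n individuals,
# recompute lambda, and average over replicates. The bias fades as n grows.
for n in (10, 50, 500):
    s_hat = rng.binomial(n, true_s, size=10_000) / n
    mean_lam = np.mean([lam(s, fec) for s in s_hat])
    print(f"n = {n:4d}: mean lambda = {mean_lam:.3f}")
```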

  14. A method for estimating radioactive cesium concentrations in cattle blood using urine samples.

    Science.gov (United States)

    Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji

    2017-12-01

    In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urine 137Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured and various estimation methods for blood 137Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood 137Cs] = [urinary 137Cs]/([specific gravity] - 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher 137Cs concentration than blood. These advantages of urine and the estimation precision demonstrated in our study, indicate that estimation of blood 137Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
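
    The best-performing correction reduces to a one-line function; a sketch implementing the formula quoted above, with invented input values:

```python
def blood_cs137_from_urine(urine_cs137_bq_l, specific_gravity):
    """Blood 137Cs (Bq/L) from urine 137Cs, using the specific-gravity
    correction reported in the abstract:
    [blood] = [urine] / (specific gravity - 1) / 329."""
    return urine_cs137_bq_l / (specific_gravity - 1.0) / 329.0

# Hypothetical sample: urine at 250 Bq/L with specific gravity 1.030.
print(f"{blood_cs137_from_urine(250.0, 1.030):.1f} Bq/L")
```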

  15. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    The use of standardized and optimized protocols has recently been advocated for different arthropod taxa, instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimation of true richness; and (3) meaningful comparisons between undersampled areas.

  16. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol’ sequences. All results are compared with the exact failure probability value.

  17. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  18. WTA estimates using the method of paired comparison: tests of robustness

    Science.gov (United States)

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  19. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
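
    A sketch of the stratum-weighted estimator underlying such a three-stratum design; the stratum sizes and prevalences below are invented for illustration, not NCSP values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical frame: stratum size and true proportion with dense breasts.
strata = {"metropolitan": (700_000, 0.45),
          "urban": (450_000, 0.40),
          "rural": (190_000, 0.30)}
N = sum(size for size, _ in strata.values())
n_total = 4000

est, truth = 0.0, 0.0
for name, (size, p) in strata.items():
    n_h = round(n_total * size / N)             # proportional allocation
    sample = rng.random(n_h) < p                # draw the stratum sample
    est += (size / N) * sample.mean()           # stratum-weighted estimator
    truth += (size / N) * p
print(f"stratified estimate {est:.4f} vs population value {truth:.4f}")
```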

  20. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival.

    Directory of Open Access Journals (Sweden)

    Ziya Kordjazi

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery.

  1. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival

    Science.gov (United States)

    Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas

    2016-01-01

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561

  2. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean have been proposed in two-phase stratified sampling for different situations of non-response at the first phase and the second phase. The expressions for the variances of these estimators have been derived. Furthermore, replication-based jackknife estimators of these variances have also been derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
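
    In its simplest single-phase form, the replication idea is the delete-one jackknife; a sketch follows (for the sample mean it reproduces the classical s^2/n exactly, which makes a handy check):

```python
import numpy as np

def jackknife_variance(x, estimator=np.mean):
    """Delete-one jackknife variance of `estimator`, the replication idea
    behind the variance estimators discussed above, in its simplest form."""
    x = np.asarray(x)
    n = len(x)
    reps = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)

rng = np.random.default_rng(9)
x = rng.normal(10.0, 2.0, 50)
print(f"jackknife variance {jackknife_variance(x):.4f}, "
      f"classical s^2/n {x.var(ddof=1) / len(x):.4f}")
```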

  3. Inverse sampled Bernoulli (ISB) procedure for estimating a population proportion, with nuclear material applications

    International Nuclear Information System (INIS)

    Wright, T.

    1982-01-01

    A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling
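
    As background on the inverse binomial ingredient: sampling until k "successes" are observed admits the classical unbiased estimator (k - 1)/(N - 1), where N is the total number of trials, whereas the naive k/N is biased upward. A simulation sketch of this textbook estimator (an assumption here; it is not the paper's ISB estimator):

```python
import numpy as np

rng = np.random.default_rng(10)
p, k = 0.05, 5                 # defect rate; inspect until k defectives found

naive, unbiased = [], []
for _ in range(20_000):
    n_trials = k + rng.negative_binomial(k, p)   # total items inspected
    naive.append(k / n_trials)                   # biased upward
    unbiased.append((k - 1) / (n_trials - 1))    # classically unbiased
print(f"naive mean {np.mean(naive):.4f}, "
      f"unbiased mean {np.mean(unbiased):.4f} (true p = {p})")
```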

  4. Comparison between two sampling methods by results obtained using petrographic techniques, specially developed for minerals of the Itataia uranium phosphate deposit, Ceara, Brazil

    International Nuclear Information System (INIS)

    Salas, H.T.; Murta, R.L.L.

    1985-01-01

    The results of a comparison of two sampling methods applied to a gallery of the uranium-phosphate ore body of Itataia, Ceara State, Brazil, along 235 metres of mineralized zone, are presented. The results were obtained through petrographic techniques especially developed for and applied to both samplings. In the first sampling, hand samples taken systematically at intervals of 2 metres were studied and their mineralogical compositions estimated; some petrogenetic observations were verified for the first time. The second sampling was made at intervals of 20 metres: 570 tons of ore were extracted and distributed in sections, and a sample representing each section was studied after crushing to -65 mesh. Their mineralogy was quantified and the degree of liberation of apatite calculated. Based on the mineralogical data obtained, it was possible to represent both samplings and to compare the main mineralogical groups (phosphates, carbonates and silicates). In spite of the different methods and methodologies used, and the quite irregular stockwork mineralization, the results were satisfactory. (Author)

  5. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293). takes advantage of additional sample...

  6. Turbidity-controlled suspended sediment sampling for runoff-event load estimation

    Science.gov (United States)

    Jack Lewis

    1996-01-01

    Abstract - For estimating suspended sediment concentration (SSC) in rivers, turbidity is generally a much better predictor than water discharge. Although it is now possible to collect continuous turbidity data even at remote sites, sediment sampling and load estimation are still conventionally based on discharge. With frequent calibration the relation of turbidity to...

  7. Performance of sampling methods to estimate log characteristics for wildlife.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  8. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2015-01-01

    Full Text Available This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
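
    For a single occasion, the classical Warner randomized response estimator conveys the core idea; this sketch is not the successive-sampling estimator of the paper, and the design probability and counts are illustrative assumptions:

```python
def warner_estimate(yes_responses, n, p_design):
    """Warner (1965) randomized response estimator.

    Each respondent answers the sensitive statement with probability
    p_design and its negation otherwise, so
    Pr(yes) = p_design * pi + (1 - p_design) * (1 - pi).
    """
    lam = yes_responses / n                       # observed 'yes' proportion
    pi_hat = (lam - (1 - p_design)) / (2 * p_design - 1)
    var_hat = lam * (1 - lam) / (n * (2 * p_design - 1) ** 2)
    return pi_hat, var_hat

pi_hat, var_hat = warner_estimate(yes_responses=400, n=1000, p_design=0.7)
print(f"pi = {pi_hat:.3f} (SE = {var_hat ** 0.5:.3f})")
```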

  9. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
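
    For a scalar parameter trace, the standard autocorrelation-based ESS can be sketched as below; the paper's contribution is extending such ideas to tree topologies (e.g., via tree-to-tree distances), which this scalar sketch does not attempt:

```python
import numpy as np

def effective_sample_size(trace):
    """ESS = N / (1 + 2 * sum_k rho_k), truncating the autocorrelation
    sum at the first non-positive term."""
    x = np.asarray(trace, dtype=float)
    n = x.size
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    rho_sum = 0.0
    for k in range(1, n):
        if acf[k] <= 0.0:
            break
        rho_sum += acf[k]
    return n / (1.0 + 2.0 * rho_sum)

# An AR(1) chain with strong autocorrelation has ESS far below N
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(1, x.size):
    x[t] = 0.95 * x[t - 1] + rng.normal()
print(round(effective_sample_size(x)))  # a few hundred, not 10,000
```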

  10. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes.
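
    A minimal sketch of the stratified random sampling estimator of a total, with finite population correction, illustrates why stratification pays off when impingement rates differ between strata; the day/night strata and counts are hypothetical:

```python
import numpy as np

def stratified_total(samples, N_h):
    """Point estimate and variance of a population total under
    stratified random sampling (finite population correction included).

    samples: list of 1-D arrays of per-interval counts, one per stratum.
    N_h:     total number of sampling intervals in each stratum.
    """
    total, var = 0.0, 0.0
    for y, N in zip(samples, N_h):
        n = len(y)
        total += N * y.mean()
        var += N * (N - n) * y.var(ddof=1) / n
    return total, var

# Hypothetical day/night strata of hourly impingement counts
rng = np.random.default_rng(7)
day, night = rng.poisson(3, 6), rng.poisson(12, 6)   # 6 sampled hours each
est, var = stratified_total([day, night], N_h=[12, 12])
print(f"estimated total = {est:.0f} +/- {1.96 * var ** 0.5:.0f}")
```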

  11. A 172 μW Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

    A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end to perform feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed through nonuniformly subsampling the PPG signal, while feature extraction is performed using least-squares spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of relevant information for accurate HR estimation.
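
    The HR extraction step can be imitated in software with SciPy's Lomb-Scargle periodogram applied to nonuniformly subsampled data; this is a behavioral sketch of the signal processing, not the ASIC implementation, and all signal parameters are made up:

```python
import numpy as np
from scipy.signal import lombscargle

# Simulate a 4 s PPG-like pulse at 1.2 Hz (72 bpm), nonuniformly
# subsampled to mimic ~10x compressive sampling of a 100 Hz stream.
rng = np.random.default_rng(3)
t = np.sort(rng.choice(np.arange(0, 4, 0.01), size=40, replace=False))
y = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size)

# Lomb-Scargle periodogram over a plausible HR band (40-200 bpm);
# lombscargle expects angular frequencies (rad/s)
freqs_hz = np.linspace(40 / 60, 200 / 60, 500)
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs_hz)
hr_bpm = 60 * freqs_hz[np.argmax(power)]
print(f"estimated heart rate: {hr_bpm:.0f} bpm")
```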

  12. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, starting from an initial sample size of 10 plants and adding five plants at each step. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-sampling with replacement. The required sample size for each correlation coefficient was taken as the size at which the amplitude of the 95% confidence interval was less than or equal to 0.4. Obtaining precise estimates of the Pearson correlation coefficient is difficult for pairs of variables with a weak linear relation, and a larger sample size is necessary to estimate them; linear relations involving fruit size and the number of fruits per plant are estimated with less precision. To estimate the correlation coefficients between productivity variables of cherry tomato with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse and 200 plants in a 200 m² greenhouse.
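
    The bootstrap procedure for judging whether a given sample size yields a 95% confidence interval amplitude of at most 0.4 can be sketched as follows (synthetic data; the study used 3000 bootstrap replicates per planned size):

```python
import numpy as np

def bootstrap_ci_amplitude(x, y, n_boot=3000, rng=None):
    """Amplitude (width) of the 95% percentile bootstrap CI of Pearson r."""
    rng = rng or np.random.default_rng()
    n = len(x)
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(r_boot, [2.5, 97.5])
    return hi - lo

# Weakly correlated variables need many more plants to reach width <= 0.4
rng = np.random.default_rng(5)
for n in (50, 200, 275):
    x = rng.normal(size=n)
    y = 0.3 * x + rng.normal(size=n)              # weak linear relation
    print(n, round(bootstrap_ci_amplitude(x, y, rng=rng), 2))
```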

  13. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither the magnitude nor the direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimation are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs

  14. Comparison of Sun-Induced Chlorophyll Fluorescence Estimates Obtained from Four Portable Field Spectroradiometers

    Science.gov (United States)

    Julitta, Tommaso; Corp, Lawrence A.; Rossini, Micol; Burkart, Andreas; Cogliati, Sergio; Davies, Neville; Hom, Milton; Mac Arthur, Alasdair; Middleton, Elizabeth M.; Rascher, Uwe; et al.

    2016-01-01

    Remote Sensing of Sun-Induced Chlorophyll Fluorescence (SIF) is a research field of growing interest because it offers the potential to quantify actual photosynthesis and to monitor plant status. New satellite missions from the European Space Agency, such as the Earth Explorer 8 FLuorescence EXplorer (FLEX) mission (scheduled to launch in 2022 and aiming at SIF mapping), and from the National Aeronautics and Space Administration (NASA), such as the Orbiting Carbon Observatory-2 (OCO-2) sampling mission launched in July 2014, provide the capability to estimate SIF from space. The detection of the SIF signal from airborne and satellite platforms is difficult, and reliable ground-level data are needed for calibration/validation. Several commercially available spectroradiometers are currently used to retrieve SIF in the field. This study presents a comparison exercise for evaluating the capability of four spectroradiometers to retrieve SIF. The results show that an accurate far-red SIF estimation can be achieved using spectroradiometers with an ultrafine resolution (less than 1 nm), while red SIF estimation requires even higher spectral resolution (less than 0.5 nm). Moreover, it is shown that the Signal to Noise Ratio (SNR) plays a significant role in the precision of the far-red SIF measurements.

  15. Environmental DNA method for estimating salamander distribution in headwater streams, and a comparison of water sampling methods.

    Science.gov (United States)

    Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi

    2017-01-01

    Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

  16. Estimation of potential evapotranspiration of a coastal savannah environment; comparison of methods

    International Nuclear Information System (INIS)

    Asare, D.K.; Ayeh, E.O.; Amenorpe, G.; Banini, G.K.

    2011-01-01

    Six potential evapotranspiration (PET) models, namely Penman-Monteith, Hargreaves-Samani, Priestley-Taylor, IRMAK1, IRMAK2 and TURC, were used to estimate daily PET values at Atomic-Kwabenya in the coastal savannah environment of Ghana for the year 2005. The study compared the PET values generated by the six models and identified which ones compared favourably with the Penman-Monteith model, the recommended standard method for estimating PET. Cross-comparison analysis showed that only the daily PET estimates of the Hargreaves-Samani model correlated reasonably (r = 0.82) with estimates by the Penman-Monteith model. Additionally, PET values from the Priestley-Taylor and TURC models were highly correlated (r = 0.99), as were those generated by the IRMAK2 and TURC models (r = 0.96). Statistical analysis, based on paired comparison of means, showed that daily PET estimates of the Penman-Monteith model were not different from those of the Priestley-Taylor model for the Atomic-Kwabenya area in the coastal savannah environment of Ghana. The Priestley-Taylor model can therefore be used, in place of the Penman-Monteith model, to estimate daily PET for the Atomic-Kwabenya area. The Hargreaves-Samani model can also be used to estimate PET for the study area because its PET estimates correlated reasonably with those of the Penman-Monteith model (r = 0.82) and it requires only air temperature measurements as inputs. (au)
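
    The Hargreaves-Samani model singled out above is attractive because it needs only temperature data plus extraterrestrial radiation; a minimal sketch of its standard form, with illustrative input values, is:

```python
def hargreaves_samani_pet(t_mean, t_max, t_min, ra_mm_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with Ra the extraterrestrial radiation expressed in mm/day of
    evaporation equivalent and temperatures in deg C.
    """
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# Illustrative coastal-savannah day (hypothetical values)
print(round(hargreaves_samani_pet(t_mean=27.5, t_max=32.0,
                                  t_min=23.0, ra_mm_day=14.6), 2))
```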

  17. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least-squares (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated by different methods with respect to the unbiased Cramer-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
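
    A sketch of the core loop, assuming the search interval comes from the FFT peak and its neighbours: a golden-section search over frequency, where each trial frequency is scored by the residual of a linear least-squares fit of the cosine, sine and dc terms (the three-parameter fit). The paper's exact stop conditions and interval selection may differ:

```python
import numpy as np

def sine_fit_residual(freq, t, x):
    """Least-squares residual of x ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C
    at a fixed trial frequency f (the three-parameter sine fit)."""
    M = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t),
                         np.ones_like(t)])
    return np.linalg.lstsq(M, x, rcond=None)[1][0]

def golden_section_freq(t, x, f_lo, f_hi, tol=1e-6):
    """Golden-section search for the residual-minimising frequency."""
    g = (np.sqrt(5) - 1) / 2
    a, b = f_lo, f_hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if sine_fit_residual(c, t, x) < sine_fit_residual(d, t, x):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

fs, n, f_true = 1000.0, 512, 123.4
t = np.arange(n) / fs
rng = np.random.default_rng(9)
x = np.sin(2 * np.pi * f_true * t + 0.3) + 0.5 + 0.05 * rng.normal(size=n)
spec = np.abs(np.fft.rfft((x - x.mean()) * np.hanning(n)))
k = np.argmax(spec[1:]) + 1                  # coarse FFT peak bin
df = fs / n                                  # bin width; search peak +/- 1 bin
print(golden_section_freq(t, x, (k - 1) * df, (k + 1) * df))  # ~123.4
```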

  18. Estimates and sampling schemes for the instrumentation of accountability systems

    International Nuclear Information System (INIS)

    Jewell, W.S.; Kwiatkowski, J.W.

    1976-10-01

    The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-groups error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered

  19. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Full Text Available Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast-based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we sinc-interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
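
    Pairwise Granger causality reduces to comparing nested autoregressions with an F-test; a self-contained sketch on synthetic data (not the InI pipeline of the study) is:

```python
import numpy as np
from scipy import stats

def granger_f_test(x, y, lags):
    """F-test of whether x Granger-causes y: compare an AR model of y on
    its own lags (restricted) against one that adds lags of x (full)."""
    n = len(y)
    Y = y[lags:]
    own = np.column_stack([y[lags - k: n - k] for k in range(1, lags + 1)])
    cross = np.column_stack([x[lags - k: n - k] for k in range(1, lags + 1)])
    X_r = np.hstack([np.ones((n - lags, 1)), own])
    X_f = np.hstack([X_r, cross])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df2 = (n - lags) - X_f.shape[1]
    F = (rss(X_r) - rss(X_f)) / lags / (rss(X_f) / df2)
    return F, 1 - stats.f.cdf(F, lags, df2)

# y is driven by x with a one-step delay; the reverse link is absent
rng = np.random.default_rng(11)
x = rng.normal(size=2000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()
print("x -> y:", granger_f_test(x, y, lags=2))  # large F, tiny p
print("y -> x:", granger_f_test(y, x, lags=2))  # small F, large p
```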

  20. A metric for cross-sample comparisons using logit and probit

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    relative to an arbitrary scale, which makes the coefficients difficult both to interpret and to compare across groups or samples. Do differences in coefficients reflect true differences or differences in scales? This cross-sample comparison problem raises concerns for comparative research. However, we ... across groups or samples, making it suitable for situations met in real applications in comparative research. Our derivations also extend to the probit and to ordered and multinomial models. The new metric is implemented in the Stata command nlcorr.

  1. Comparison of POCIS passive samplers vs. composite water sampling: A case study.

    Science.gov (United States)

    Criquet, Justine; Dumoulin, David; Howsam, Michael; Mondamert, Leslie; Goossens, Jean-François; Prygiel, Jean; Billon, Gabriel

    2017-12-31

    The relevance of Polar Organic Chemical Integrative Samplers (POCIS) was evaluated for the assessment of concentrations of 46 pesticides and 19 pharmaceuticals in a small, peri-urban river with multi-origin inputs. Throughout the period of POCIS deployment, 24 h average water samples were collected automatically; these showed the rapid temporal evolution of concentrations of several micropollutants and permitted the calculation of average concentrations in the water phase for comparison with those estimated from the POCIS passive samplers. In the daily water samples, cyproconazol, epoxyconazol and imidacloprid showed high temporal variations, with concentrations ranging from under the limit of detection up to several hundred ng L⁻¹. Erythromycin, ciprofloxacin and iopromide also increased rapidly up to tens of ng L⁻¹ within a few days. Conversely, atrazine, caffeine, diclofenac, and to a lesser extent carbamazepine and sucralose, were systematically present in the water samples and showed limited variation in concentrations. For most of the substances studied here, the passive samplers gave reliable average concentrations between the minimal and maximal daily concentrations during the time of deployment. For pesticides, a relatively good correlation (R² = 0.89) was established between the concentrations obtained by POCIS and those obtained from the averaged water samples. A slight underestimation of the concentration by POCIS can be attributed to sampling rates taken from the literature that are inappropriate for our system, and new values are proposed. Considering the whole data set, 75% of the results indicate a relatively good agreement between the POCIS and the averaged water sample concentrations (values of the ratio ranging between 0.33 and 3). Note further that this agreement remains valid when different sampling rates from the literature are considered.
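
    Converting the mass accumulated on a POCIS into a time-weighted average water concentration uses the linear-uptake relation Cw = Ns/(Rs·t), which is why the choice of sampling rate Rs matters so much above; the numbers below are hypothetical:

```python
def pocis_twa_concentration(mass_ng, rs_l_per_day, days):
    """Time-weighted average water concentration (ng/L) from a POCIS,
    assuming the linear (integrative) uptake regime: Cw = Ns / (Rs * t)."""
    return mass_ng / (rs_l_per_day * days)

# Hypothetical example: 42 ng accumulated over 14 days at Rs = 0.2 L/day
print(round(pocis_twa_concentration(42.0, 0.2, 14), 1), "ng/L")
```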

  2. Zinc estimates in ore and slag samples and analysis of ash in coal samples

    International Nuclear Information System (INIS)

    Umamaheswara Rao, K.; Narayana, D.G.S.; Subrahmanyam, Y.

    1984-01-01

    Zinc estimates in ore and slag samples were made using the radioisotope X-ray fluorescence method. A 10 mCi ²³⁸Pu source was employed as the primary source of radiation, and a thin-crystal NaI(Tl) spectrometer was used to detect the 8.64 keV zinc K-characteristic X-ray line. The results are reported. The ash content of about 100 coal samples from the Ravindra Khani VI and VII mines in Andhra Pradesh was measured using the X-ray backscattering method, with compensation for varying concentrations of iron in different coal samples through iron X-ray fluorescence intensity measurements. The ash percentage is found to range from 10 to 40. (author)

  3. Oceanic uptake of CO2 re-estimated through δ13C in WOCE samples

    International Nuclear Information System (INIS)

    Lerperger, Michael; McNichol, A.P.; Peden, J.; Gagnon, A.R.; Elder, K.L.; Kutschera, W.; Rom, W.; Steier, P.

    2000-01-01

    In addition to ¹⁴C, a large set of δ¹³C data was produced at NOSAMS as part of the World Ocean Circulation Experiment (WOCE). In this paper, a subset of 973 δ¹³C results from 63 stations in the Pacific Ocean was compared to a total of 219 corresponding results from 12 stations sampled during oceanographic programs in the early 1970s. The data were analyzed in light of recent work to estimate the uptake of CO₂ derived from fossil fuel and biomass burning in the oceans by quantifying the δ¹³C Suess effect in the oceans. In principle, the δ¹³C value of dissolved inorganic carbon (DIC) allows a quantitative estimate of how much of the anthropogenic CO₂ released into the atmosphere is taken up by the oceans, because the δ¹³C of CO₂ derived from organic matter (∼2.7 percent) is significantly different from that of the atmosphere (∼0.8 percent). Our new analysis indicates an apparent discrepancy between the old and the new data sets, possibly caused by a constant offset in δ¹³C values in a subset of the data. A similar offset was reported in an earlier work by Paul Quay et al. for one station that was not included in their final analysis. We present an estimate for this assumed offset based on data from water depths below which little or no change in δ¹³C over time would be expected. Such a correction leads to a significantly reduced estimate of the CO₂ uptake, possibly as low as one half of the amount of 2.1 GtC yr⁻¹ (gigatons of carbon per year) estimated previously. The present conclusion is based on a comparison with a relatively small data set from the 1970s in the Pacific Ocean. The larger data set collected during the GEOSECS program was not used because of problems reported with the data. This work suggests there may also be problems in comparing non-GEOSECS data from the 1970s to the current data. The calculation of significantly lower uptake estimates based on an offset-related problem appears valid, but the exact figures are

  4. Inter comparison of 90Sr and 137Cs contents in biologic samples and natural U in soil samples

    International Nuclear Information System (INIS)

    Liu Jianfen; Zeng Guangjian; Lu Xuequan

    2001-01-01

    The results for the ⁹⁰Sr and ¹³⁷Cs contents in biologic samples and the natural U content in soil samples, obtained in a joint effort by fourteen environmental radiation laboratories in the Chinese environmental protection system, were analyzed and compared. Two kinds of biologic samples and one kind of soil sample were used for the intercomparison. Of these, one kind of biologic sample (biologic powder samples) and the soil samples were environmental samples from the IAEA with known reference values. The other biologic samples were environmental tea-leaf taken from a tea garden near Hangzhou; for these, the mean value obtained by all the participating laboratories was used as the reference. The intercomparison results were expressed in terms of the deviation from the reference value. It was found that the deviations of the ⁹⁰Sr and ¹³⁷Cs contents of the biologic powder samples ranged from -15.4% to 26.5% and -15.0% to 0.4%, respectively. The deviation of the natural U content ranged from -25.5% to 7.3% for the soil samples. For the tea-leaf, the ⁹⁰Sr deviation was -22.7% to 19.1%, and the ¹³⁷Cs data had a relatively large scatter, with the ratio of the maximum to the minimum value being about 7. It was pointed out that the analysis results offered by different laboratories might have involved systematic errors.

  5. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
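
    For the non-inferiority case, a commonly used normal-approximation formula can be sketched as follows; this is a generic textbook form with illustrative inputs, not necessarily the exact formulas or SAS code presented in the article:

```python
import math
from scipy.stats import norm

def n_noninferiority(p_exp, p_ctrl, margin, alpha=0.025, power=0.80):
    """Per-group sample size for a non-inferiority test of two proportions
    (one-sided alpha), normal approximation:
    n = (z_{1-alpha} + z_{power})^2 * [pE(1-pE) + pC(1-pC)]
        / (pE - pC + margin)^2
    """
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var = p_exp * (1 - p_exp) + p_ctrl * (1 - p_ctrl)
    return math.ceil(z ** 2 * var / (p_exp - p_ctrl + margin) ** 2)

# Equal true response rates of 80% and a 10-point non-inferiority margin
print(n_noninferiority(0.80, 0.80, 0.10))  # roughly 252 per group
```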

  6. Estimation of the specific activity of radioiodinated gonadotrophins: comparison of three methods

    Energy Technology Data Exchange (ETDEWEB)

    Englebienne, P [Centre for Research and Diagnosis in Endocrinology, Kain (Belgium); Slegers, G [Akademisch Ziekenhuis, Ghent (Belgium). Lab. voor Analytische Chemie

    1983-01-14

    The authors compared 3 methods for estimating the specific activity of radioiodinated gonadotrophins. Two of the methods (column recovery and isotopic dilution) gave similar results, while the third (autodisplacement) gave significantly higher estimations. In the autodisplacement method, B/T ratios, obtained when either labelled hormone alone, or labelled and unlabelled hormone, are added to the antibody, were compared as estimates of the mass of hormone iodinated. It is likely that immunologically unreactive impurities present in the labelled hormone solution invalidate such comparison.

  7. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.

  8. Infusion and sampling site effects on two-pool model estimates of leucine metabolism

    International Nuclear Information System (INIS)

    Helland, S.J.; Grisdale-Helland, B.; Nissen, S.

    1988-01-01

    To assess the effect of the site of isotope infusion on estimates of leucine metabolism, infusions of α-[4,5-³H]ketoisocaproate (KIC) and [U-¹⁴C]leucine were made into the left or right ventricles of sheep and pigs. Blood was sampled from the opposite ventricle. In both species, left ventricular infusions resulted in significantly lower specific radioactivities (SA) of [¹⁴C]leucine and [³H]KIC. [¹⁴C]KIC SA was found to be insensitive to infusion and sampling sites. In addition, [¹⁴C]KIC SA was found to be equal to the SA of [¹⁴C]leucine only during the left heart infusions. Therefore, [¹⁴C]KIC SA was used as the only estimate for ¹⁴C SA in the equations for the two-pool model. This model eliminated the influence of the site of infusion and blood sampling on the estimates of leucine entry and reduced the impact on the estimates of proteolysis and oxidation. This two-pool model could not compensate for the underestimation of transamination reactions occurring during the traditional venous isotope infusion and arterial blood sampling.

  9. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup

    2017-11-15

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)

  10. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    Science.gov (United States)

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania (“original sample”) to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called “strudel effect” is discussed in the paper. PMID:24360650

  11. The impact of fecal sample processing on prevalence estimates for antibiotic-resistant Escherichia coli.

    Science.gov (United States)

    Omulo, Sylvia; Lofgren, Eric T; Mugoh, Maina; Alando, Moshe; Obiya, Joshua; Kipyegon, Korir; Kikwai, Gilbert; Gumbi, Wilson; Kariuki, Samuel; Call, Douglas R

    2017-05-01

    Investigators often rely on studies of Escherichia coli to characterize the burden of antibiotic resistance in a clinical or community setting. To determine if prevalence estimates for antibiotic resistance are sensitive to sample handling and interpretive criteria, we collected presumptive E. coli isolates (24 or 95 per stool sample) from a community in an urban informal settlement in Kenya. Isolates were tested for susceptibility to nine antibiotics using agar breakpoint assays and results were analyzed using generalized linear mixed models. Prevalence estimates were not significantly affected by the number of isolates tested per sample (P>0.1). Prevalence estimates did not differ for five distinct E. coli colony morphologies on MacConkey agar plates (P>0.2). Successive re-plating of samples for up to five consecutive days had little to no impact on prevalence estimates. Finally, culturing E. coli under different conditions (with 5% CO₂ or micro-aerobic) did not affect estimates of prevalence. For the conditions tested in these experiments, minor modifications in sample processing protocols are unlikely to bias estimates of the prevalence of antibiotic resistance for fecal E. coli.

  12. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members

  13. Estimates of laboratory accuracy and precision on Hanford waste tank samples

    International Nuclear Information System (INIS)

    Dodd, D.A.

    1995-01-01

    A review was performed on three sets of analyses generated by Battelle, Pacific Northwest Laboratories and three sets generated by the Westinghouse Hanford Company 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The set of sources used to generate these estimates is of limited size, but it does include the physical forms, liquid and solid, which are representative of samples from tanks to be characterized. The estimates were published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates of the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel when establishing bounding conditions. Programmatic requirements different from those listed may be reliably met on specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.

  14. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    Science.gov (United States)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
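
    The blending step amounts to inverse-variance weighting of seven Gaussian estimates; a minimal sketch with hypothetical channel calibrations:

```python
import numpy as np

def fuse_channel_estimates(means, variances):
    """Combine per-channel Gaussian mass estimates into a single Gaussian
    by inverse-variance weighting (product of Gaussian pdfs)."""
    means = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * means) / np.sum(w)
    var = 1.0 / np.sum(w)
    return mean, var

# Seven hypothetical channel estimates (grams); noisier channels count less
means = [102.0, 98.5, 101.2, 97.8, 110.0, 100.4, 99.1]
variances = [4.0, 3.0, 5.0, 6.0, 40.0, 4.5, 3.5]
m, v = fuse_channel_estimates(means, variances)
print(f"mass = {m:.1f} g (sigma = {v ** 0.5:.1f} g)")
```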

  15. Estimation of tritium activity in bioassay samples having chemiluminescence

    International Nuclear Information System (INIS)

    Dwivedi, R.K.; Manu, Kumar; Kumar, Vinay; Soni, Ashish; Kaushik, A.K.; Tiwari, S.K.; Gupta, Ashok

    2008-01-01

    Tritium is recognized as the major internal dose contributor in PHWR-type reactors. Estimation of the internal dose due to tritium is carried out by analyzing urine samples in a liquid scintillation analyzer (LSA). The presence of residual biochemical species in the urine samples of some individuals under medical administration shows a significant amount of chemiluminescence. If appropriate care is not taken, the results obtained by the liquid scintillation counter may be mistaken for a genuine uptake of tritium. The distillation method described in this paper is used at RAPS-3 and 4 to assess the correct tritium uptake. (author)

  16. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also

  17. Estimation of Disability Weights in the General Population of South Korea Using a Paired Comparison.

    Directory of Open Access Journals (Sweden)

    Minsu Ock

    Full Text Available We estimated the disability weights in the South Korean population by using a paired comparison-only model wherein 'full health' and 'being dead' were included as anchor points, without resorting to a cardinal method such as person trade-off. The study was conducted via 2 types of survey: a household survey involving computer-assisted face-to-face interviews and a web-based survey (similar to that of the GBD 2010 disability weight study). With regard to the valuation methods, paired comparison, visual analogue scale (VAS) and standard gamble (SG) were used in the household survey, whereas paired comparison and population health equivalence (PHE) were used in the web-based survey. Accordingly, we described a total of 258 health states, with 'full health' and 'being dead' designated as anchor points. In the analysis, 4 models were considered: a paired comparison-only model; a hybrid model between paired comparison and PHE; a VAS model; and an SG model. A total of 2,728 and 3,188 individuals participated in the household and web-based surveys, respectively. The Pearson correlation coefficients of the disability weights of health states between the GBD 2010 study and the current models were 0.802 for Model 2, 0.796 for Model 1, 0.681 for Model 3, and 0.574 for Model 4 (all P-values < 0.001). The discrimination of values according to health state severity was most suitable in Model 1. Based on these results, the paired comparison-only model was selected as the best model for estimating disability weights in South Korea, and for maintaining simplicity in the analysis. Thus, disability weights can be more easily estimated by using paired comparison alone, with 'full health' and 'being dead' included among the health states. As noted in our study, we believe that additional evidence regarding the universality of disability weights can be observed by using a simplified methodology of estimating disability weights.

  18. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  19. Comparison of methods used for estimating pharmacist counseling behaviors.

    Science.gov (United States)

    Schommer, J C; Sullivan, D L; Wiederholt, J B

    1994-01-01

    To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.

  20. Reliability estimation system: its application to the nuclear geophysical sampling of ore deposits

    International Nuclear Information System (INIS)

    Khaykovich, I.M.; Savosin, S.I.

    1992-01-01

    The reliability estimation system accepted in the Soviet Union for sampling data in nuclear geophysics is based on unique requirements in metrology and methodology. It involves estimating characteristic errors in calibration, as well as errors in measurement and interpretation. This paper describes the methods of estimating the levels of systematic and random errors at each stage of the problem. The data of nuclear geophysics sampling are considered to be reliable if there are no statistically significant, systematic differences between ore intervals determined by this method and by geological control, or by other methods of sampling; the reliability of the latter having been verified. The difference between the random errors is statistically insignificant. The system allows one to obtain information on the parameters of ore intervals with a guaranteed random error and without systematic errors. (Author)

  1. Bias in estimating the cross-sectional smoking, alcohol, obesity and diabetes associations with moderate-severe periodontitis in the Atherosclerosis Risk in Communities study: comparison of full versus partial-mouth estimates.

    Science.gov (United States)

    Akinkugbe, Aderonke A; Saraiya, Veeral M; Preisser, John S; Offenbacher, Steven; Beck, James D

    2015-07-01

    To assess whether partial-mouth protocols (PRPs) result in biased estimates of the associations between smoking, alcohol, obesity and diabetes with periodontitis. Using a sample (n = 6129) of the 1996-1998 Atherosclerosis Risk in Communities study, we used measures of probing pocket depth and clinical attachment level to identify moderate-severe periodontitis. Adjusting for confounders, unconditional binary logistic regression estimated prevalence odds ratios (POR) and 95% confidence limits. Specifically, we compared PORs for smoking, alcohol, obesity and diabetes with periodontitis derived from full-mouth examination to those derived from 4 PRPs (Ramfjörd, National Health and Nutrition Examination Survey-III, modified NHANES-IV, and the 42-site random-site-selection method). Finally, we conducted a simple sensitivity analysis of periodontitis misclassification by changing the case definition threshold for each PRP. In comparison to full-mouth PORs, PRP PORs were biased in terms of magnitude and direction. Holding the full-mouth case definition at moderate-severe periodontitis and setting it at mild-moderate-severe for the PRPs did not consistently produce POR estimates that were either biased towards or away from the null in comparison to full-mouth estimates. Partial-mouth protocols result in misclassification of periodontitis and may bias epidemiologic measures of association. The magnitude and direction of this bias depends on the choice of PRP and the case definition threshold used.

  2. Estimating time to pregnancy from current durations in a cross-sectional sample

    DEFF Research Database (Denmark)

    Keiding, Niels; Kvist, Kajsa; Hartvig, Helle

    2002-01-01

    A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....

  3. Is a 'convenience' sample useful for estimating immunization coverage in a small population?

    Science.gov (United States)

    Weir, Jean E; Jones, Carrie

    2008-01-01

    Rapid survey methodologies are widely used for assessing immunization coverage in developing countries, approximating true stratified random sampling. Non-random ('convenience') sampling is not considered appropriate for estimating immunization coverage rates but has the advantages of low cost and expediency. We assessed the validity of a convenience sample of children presenting to a travelling clinic by comparing the coverage rate in the convenience sample to the true coverage established by surveying each child in three villages in rural Papua New Guinea. The rate of DTP immunization coverage as estimated by the convenience sample was within 10% of the true coverage when the proportion of children in the sample was two-thirds or when only children over the age of one year were counted, but differed by 11% when the sample included only 53% of the children and when all eligible children were included. The convenience sample may be sufficiently accurate for reporting purposes and is useful for identifying areas of low coverage.

  4. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of sample grain size on the performance of species richness estimators. Several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874) and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar...

  5. Bayesian Estimation of Fish Disease Prevalence from Pooled Samples Incorporating Sensitivity and Specificity

    Science.gov (United States)

    Williams, Christopher J.; Moffitt, Christine M.

    2003-03-01

    An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
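
    The pool-level positive probability combines prevalence, pool size, sensitivity and specificity; a simple grid approximation of the posterior (standing in for the Gibbs sampler used in the paper, with a uniform prior and made-up data) looks like this:

```python
import numpy as np

def pooled_prevalence_posterior(n_pos, n_pools, pool_size, se, sp, grid=2000):
    """Posterior of individual-level prevalence p from pooled assays with
    imperfect sensitivity/specificity, on a uniform prior and a grid.

    A pool of k fish tests positive with probability
      pi(p) = se * (1 - (1-p)^k) + (1 - sp) * (1-p)^k.
    """
    p = np.linspace(0.0, 1.0, grid)
    pi = se * (1 - (1 - p) ** pool_size) + (1 - sp) * (1 - p) ** pool_size
    log_lik = n_pos * np.log(pi) + (n_pools - n_pos) * np.log(1 - pi)
    post = np.exp(log_lik - log_lik.max())      # stabilize before normalizing
    post /= post.sum()
    return p, post

p, post = pooled_prevalence_posterior(n_pos=11, n_pools=60, pool_size=5,
                                      se=0.95, sp=0.98)
print("posterior mean prevalence:", round(np.sum(p * post), 3))
```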

  6. Porosity estimation by semi-supervised learning with sparsely available labeled samples

    Science.gov (United States)

    Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi

    2017-09-01

    This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method, Transductive Conditional Random Field Regression (TCRFR), was previously proposed and shown to perform well (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR to extremely weakly supervised scenarios. Our new method outperforms previous automatic estimation methods on synthetic data and provides a result comparable to the labor-intensive, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.
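
    To convey the flavor of graph-based semi-supervised regression with very few labels, here is a generic label-propagation baseline, not TCRFR itself: porosity values at labeled wells are clamped and spread over a k-nearest-neighbour graph built in the feature (impedance) space. All data and parameters are invented for illustration.

```python
import numpy as np

def propagate_labels(features, labels, labeled_idx, k=10, n_iter=200):
    """Generic graph-based label propagation for regression: each node
    repeatedly takes the mean value of its k nearest neighbours,
    with labeled nodes held fixed."""
    n = len(features)
    # k-nearest-neighbour graph on the feature (e.g. impedance) space.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]

    values = np.full(n, np.mean(labels))
    values[labeled_idx] = labels
    for _ in range(n_iter):
        values = values[neighbours].mean(axis=1)
        values[labeled_idx] = labels      # clamp the sparse well data
    return values

# Hypothetical toy data: 1-D "impedance" feature, porosity known at 5 wells.
rng = np.random.default_rng(0)
impedance = rng.uniform(0, 1, size=(300, 1))
true_porosity = 0.05 + 0.2 * impedance[:, 0]
wells = rng.choice(300, size=5, replace=False)
est = propagate_labels(impedance, true_porosity[wells], wells)
print("mean abs error:", np.abs(est - true_porosity).mean())
```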

  7. A comparison of estimated and calculated effective porosity

    Science.gov (United States)

    Stephens, Daniel B.; Hsu, Kuo-Chin; Prieksat, Mark A.; Ankeny, Mark D.; Blandford, Neil; Roth, Tracy L.; Kelsey, James A.; Whitworth, Julia R.

    Effective porosity in solute-transport analyses is usually estimated rather than calculated from tracer tests in the field or laboratory. Calculated values of effective porosity in the laboratory on three samples of different texture were compared to estimates derived from particle-size distributions and soil-water characteristic curves. The agreement was poor, and no clear relationship emerged between effective porosity calculated from laboratory tracer tests and effective porosity estimated from particle-size distributions and soil-water characteristic curves. A field tracer test in a sand-and-gravel aquifer produced a calculated effective porosity of approximately 0.17. By comparison, estimates of effective porosity from textural data, moisture retention, and published values were approximately 50-90% greater than the field-calibrated value. Thus, estimation of effective porosity for chemical transport is highly dependent on the chosen transport model and is best obtained by laboratory or field tracer tests.

  8. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    FNC Technology Co., Ltd. has developed test facilities for aerosol generation, mixing, sampling and measurement under high-pressure and high-temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO{sub 2}/ethanol mixture. In the sampling system, a glass fiber membrane filter is used to measure the average mass concentration. The purpose of the tests is to develop a commercial test module for aerosol generation, mixing and sampling applicable to the environmental industry and to safety-related systems in nuclear power plants. The sampled aerosol concentration is not measured directly but must be calculated from other quantities, so its uncertainty was estimated by applying the Gaussian error propagation law to the experimental results obtained with a steam-air carrier gas mixture. The uncertainty of the sampled aerosol concentration is a function of the flow rates of air and steam, the sampled mass, the sampling time, the condensed steam mass, and their absolute errors; these errors propagate through the combination of variables in the function. Using the operating parameters and their individual errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation will be utilized as performance data for the aerosol sampling system.
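
    As an illustration of Gaussian (first-order) error propagation, the sketch below computes the uncertainty of a sampled concentration defined, for simplicity, as C = m / ((Q_air + Q_steam) * t), where m is the sampled mass, Q are flow rates, and t is the sampling time. The measurement equation and all numbers are hypothetical stand-ins, not the facility's actual formula.

```python
import numpy as np

def concentration_uncertainty(m, dm, q_air, dq_air, q_steam, dq_steam, t, dt):
    """First-order (Gaussian) error propagation for C = m / ((q_air + q_steam) * t).
    Returns the concentration and its absolute standard uncertainty."""
    q = q_air + q_steam
    c = m / (q * t)
    # Partial derivatives of C with respect to each measured quantity.
    dc_dm = 1.0 / (q * t)
    dc_dq = -m / (q**2 * t)        # same sensitivity for both flow rates
    dc_dt = -m / (q * t**2)
    dc = np.sqrt((dc_dm * dm) ** 2
                 + (dc_dq * dq_air) ** 2
                 + (dc_dq * dq_steam) ** 2
                 + (dc_dt * dt) ** 2)
    return c, dc

# Hypothetical inputs: mass in mg, flows in L/min, time in min.
c, dc = concentration_uncertainty(m=12.0, dm=0.05,
                                  q_air=20.0, dq_air=0.1,
                                  q_steam=5.0, dq_steam=0.05,
                                  t=30.0, dt=0.1)
print(f"C = {c:.4f} +/- {dc:.4f} mg/L ({100 * dc / c:.2f}%)")
```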

  9. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available This paper presents a review of the author's results in the area of estimation of the relations of equivalence, tolerance and preference within a finite set, based on multiple, stochastically independent pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained from solutions to discrete optimization problems. They allow application of both types of comparisons - binary and multivalent (this fact relates to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems in forecasting, financial engineering and bio-cybernetics. (original abstract)

  10. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  11. Limited sampling hampers "big data" estimation of species richness in a tropical biodiversity hotspot.

    Science.gov (United States)

    Engemann, Kristine; Enquist, Brian J; Sandel, Brody; Boyle, Brad; Jørgensen, Peter M; Morueta-Holme, Naia; Peet, Robert K; Violle, Cyrille; Svenning, Jens-Christian

    2015-02-01

    Macro-scale species richness studies often use museum specimens as their main source of information. However, such datasets are often strongly biased due to variation in sampling effort in space and time. These biases may strongly affect diversity estimates and may, thereby, obstruct solid inference on the underlying diversity drivers, as well as mislead conservation prioritization. In recent years, this has resulted in an increased focus on developing methods to correct for sampling bias. In this study, we use sample-size-correcting methods to examine patterns of tropical plant diversity in Ecuador, one of the most species-rich and climatically heterogeneous biodiversity hotspots. Species richness estimates were calculated based on 205,735 georeferenced specimens of 15,788 species using the Margalef diversity index, the Chao estimator, the second-order Jackknife and Bootstrapping resampling methods, and Hill numbers and rarefaction. Species richness was heavily correlated with sampling effort, and only rarefaction was able to remove this effect, and we recommend this method for estimation of species richness with "big data" collections.
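
    Rarefaction estimates the expected number of species in a random subsample of m individuals drawn without replacement from the full collection: for species i with N_i individuals out of N in total, E[S_m] = S - sum_i C(N - N_i, m) / C(N, m). Below is a minimal implementation of that analytic formula using log-binomials for numerical stability; the abundance vector is invented for illustration.

```python
from math import lgamma, exp

def log_comb(n, k):
    """Log of the binomial coefficient C(n, k)."""
    if k < 0 or k > n:
        return float("-inf")
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def rarefied_richness(abundances, m):
    """Expected species richness in a subsample of m individuals
    drawn without replacement (analytic rarefaction)."""
    n = sum(abundances)
    if m > n:
        raise ValueError("subsample larger than the collection")
    missing = sum(exp(log_comb(n - ni, m) - log_comb(n, m)) for ni in abundances)
    return len(abundances) - missing

# Hypothetical abundance vector: 6 species with very uneven sampling effort.
counts = [120, 45, 10, 3, 1, 1]
for m in (10, 50, 100):
    print(m, round(rarefied_richness(counts, m), 2))
```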

  12. European Measurement Comparisons of Environmental Radioactivity

    International Nuclear Information System (INIS)

    Waetjen, Uwe

    2008-01-01

    The scheme of European measurement comparisons to verify radioactivity monitoring in the European Union is briefly explained. After a review of comparisons conducted during the 1990s, the approach taken by IRMM in organising these comparisons since 2003 is presented. IRMM provides comparison samples with a reference value that is traceable to the SI units and fully documented to all participants and national authorities after completion of the comparison. The sample preparation and determination of traceable reference values at IRMM, the sample treatment and measurement in the participating laboratories, as well as the evaluation of comparison results are described in some detail using the example of an air filter comparison. The results of a comparison to determine metabolised 40 K, 90 Sr and 137 Cs in milk powder are presented as well. The necessary improvements in the estimation of measurement uncertainty by the participating laboratories are discussed. The performance of individual laboratories that have participated in at least four comparison exercises over the years is studied in terms of observable trends

  13. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
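
    The Toeplitz constraint exploits the fact that, for far-field plane waves on a uniform line array in isotropic noise, the true covariance depends only on sensor separation, so every subdiagonal of the sample covariance estimates a single value. A minimal sketch of the diagonal-averaging step follows; the snapshot data are synthetic, and the paper's maximum-entropy extrapolation step is omitted.

```python
import numpy as np

def toeplitzify(scm):
    """Project a sample covariance matrix onto Toeplitz structure by
    averaging along each subdiagonal (valid for uniform line arrays)."""
    n = scm.shape[0]
    t = np.zeros_like(scm)
    for lag in range(n):
        mean = np.mean(np.diagonal(scm, offset=lag))
        t += np.diag(np.full(n - lag, mean), k=lag)
        if lag > 0:
            t += np.diag(np.full(n - lag, np.conj(mean)), k=-lag)
    return t

# Synthetic example: 16-element array, one plane-wave signal, 5 snapshots.
rng = np.random.default_rng(1)
n, snaps = 16, 5
steering = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))
data = (steering[:, None] * rng.standard_normal(snaps)
        + 0.5 * (rng.standard_normal((n, snaps))
                 + 1j * rng.standard_normal((n, snaps))))
scm = data @ data.conj().T / snaps
r_toep = toeplitzify(scm)
print("Hermitian Toeplitz?", np.allclose(r_toep, r_toep.conj().T))
```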

  14. The effects of parameter estimation on minimizing the in-control average sample size for the double sampling X bar chart

    Directory of Open Access Journals (Sweden)

    Michael B.C. Khoo

    2013-11-01

    Full Text Available The double sampling (DS X bar chart, one of the most widely-used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right skewed run length distribution, the median run length (MRL provides a more credible representation of the central tendency than the average run length (ARL, as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.

  15. Peer influence on students' estimates of performance : social comparison in clinical rotations

    NARCIS (Netherlands)

    Raat, A. N. (Janet); Kuks, Jan B. M.; van Hell, E. Ally; Cohen-Schotanus, Janke

    Context During clinical rotations, students move from one clinical situation to another. Questions exist about students' strategies for coping with these transitions. These strategies may include a process of social comparison, because in this context it offers the student an opportunity to estimate

  16. Effects of sampling conditions on DNA-based estimates of American black bear abundance

    Science.gov (United States)

    Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. The capture probability

  17. Comparison of sampling methodologies and estimation of population parameters for a temporary fish ectoparasite

    Directory of Open Access Journals (Sweden)

    J.M. Artim

    2016-08-01

    Full Text Available Characterizing spatio-temporal variation in the density of organisms in a community is a crucial part of ecological study. However, doing so for small, motile, cryptic species presents multiple challenges, especially where multiple life history stages are involved. Gnathiid isopods are ecologically important marine ectoparasites, micropredators that live in substrate for most of their lives, emerging only once during each juvenile stage to feed on fish blood. Many gnathiid species are nocturnal and most have distinct substrate preferences. Studies of gnathiid use of habitat, exploitation of hosts, and population dynamics have used various trap designs to estimate rates of gnathiid emergence, study sensory ecology, and identify host susceptibility. In the studies reported here, we compare and contrast the performance of emergence, fish-baited and light trap designs, outline the key features of these traps, and determine some life cycle parameters derived from trap counts for the Eastern Caribbean coral-reef gnathiid, Gnathia marleyi. We also used counts from large emergence traps and light traps to estimate additional life cycle parameters, emergence rates, and total gnathiid density on substrate, and to calibrate the light trap design to provide estimates of rate of emergence and total gnathiid density in habitat not amenable to emergence trap deployment.

  18. COMPARISON OF AGE ESTIMATES FROM VARIOUS HARD PARTS FOR REDFIN PERCH, Perca fluviatilis, IN TASMANIA

    Directory of Open Access Journals (Sweden)

    Irwan Jatmiko

    2013-06-01

    Full Text Available Whole otoliths, sectioned otoliths, scales and vertebrae were used to select the most suitable structure for age determination of redfin perch, Perca fluviatilis. Redfin perch were sampled from Trevallyn Lake and Brushy Lagoon using fyke nets, gillnets, electrofishing and rod-and-line angling. Age estimates were assessed for agreement between readings and among structures. One-way ANOVA of readability scores highlighted that sectioned otoliths were the most readable of the hard parts examined. Sectioned otoliths also showed the highest agreement between readings (93.9%), followed by vertebrae (68.7%), scales (38.8%) and whole otoliths (29.9%). Furthermore, first and second readings were not significantly different (p > 0.05) for sectioned otoliths and vertebrae, but were significantly different (p < 0.05) for scales and whole otoliths. When ages from sectioned otoliths were compared with those from other structures, vertebrae showed the highest agreement (47.6%), followed by scales (25.2%) and whole otoliths (20.4%). Age estimates from sectioned otoliths were significantly different (p < 0.05) from the values obtained from vertebrae, scales and whole otoliths. These findings demonstrate that sectioned otoliths are the best hard part for age determination of redfin perch in Tasmania.

  19. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  20. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
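
    A bare-bones version of the profile-resampling idea: with one plasma and one tissue concentration per subject, resample one subject at each nominal time point to assemble a pseudoprofile, compute the tissue and plasma AUCs by the trapezoidal rule, and take their ratio; repeating this yields a distribution for the ratio. This is an illustrative sketch, not the authors' 2-phase algorithm, and all data are invented.

```python
import numpy as np

def auc_trapz(times, conc):
    return np.trapz(conc, times)

def bootstrap_ratio(times, plasma, tissue, n_boot=2000, rng=None):
    """Pseudoprofile-style bootstrap for the tissue-to-plasma AUC ratio.
    plasma[t] and tissue[t] hold the per-subject concentrations
    observed at nominal time point t (one sample per subject)."""
    rng = rng or np.random.default_rng(0)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        p_prof = [rng.choice(plasma[t]) for t in range(len(times))]
        t_prof = [rng.choice(tissue[t]) for t in range(len(times))]
        ratios[b] = auc_trapz(times, t_prof) / auc_trapz(times, p_prof)
    return ratios

# Invented sparse data: 4 time points, 3 subjects sampled at each.
times = np.array([0.5, 1.0, 2.0, 4.0])
plasma = [np.array(x) for x in ([8, 9, 7], [6, 5, 7], [3, 4, 3], [1, 2, 1])]
tissue = [np.array(x) for x in ([4, 5, 4], [5, 4, 6], [4, 3, 4], [2, 2, 3])]
r = bootstrap_ratio(times, plasma, tissue)
print(f"ratio = {r.mean():.2f} (90% CI {np.percentile(r, 5):.2f}-{np.percentile(r, 95):.2f})")
```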

  1. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration.

  2. Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.

    Science.gov (United States)

    Dołęgowska, Sabina

    2016-11-01

    In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample composed of 8 to 10 increments (subsamples) was taken over an area of 10 m², whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple-rinsed with deionized water, dried, milled, and digested (8 mL HNO3 (1:1) + 1 mL 30% H2O2) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality. For the normally distributed elements (Cu from Piaski, Zn from Posłowice, and Fe and Zn from Wierna Rzeka), the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10%) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10%). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with the different statistical methods ranged from 4.1 to 22%.
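
    The classical ANOVA estimate in such duplicate designs splits the observed variance into sampling and analytical components: with duplicate samples each analyzed in duplicate, the within-sample mean square estimates the analytical variance and the between-duplicate mean square contains the sampling variance. A minimal sketch under those assumptions (the concentration values are invented):

```python
import numpy as np

def sampling_uncertainty(data):
    """Classical balanced nested ANOVA (the 'duplicate method').
    data has shape (targets, duplicate samples, repeat analyses).
    Returns analytical and sampling standard deviations."""
    i, j, k = data.shape
    sample_means = data.mean(axis=2)              # (targets, samples)
    target_means = sample_means.mean(axis=1)      # (targets,)
    ms_anal = ((data - sample_means[..., None]) ** 2).sum() / (i * j * (k - 1))
    ms_samp = k * ((sample_means - target_means[:, None]) ** 2).sum() / (i * (j - 1))
    # E[ms_samp] = var_anal + k * var_samp, so solve for var_samp.
    var_samp = max((ms_samp - ms_anal) / k, 0.0)  # truncate negative estimates
    return np.sqrt(ms_anal), np.sqrt(var_samp)

# Invented Zn concentrations (mg/kg): 8 sites x 2 duplicate samples x 2 analyses.
rng = np.random.default_rng(3)
true = rng.uniform(30, 60, size=(8, 1, 1))
data = true + rng.normal(0, 3.0, size=(8, 2, 1)) + rng.normal(0, 1.0, size=(8, 2, 2))
s_anal, s_samp = sampling_uncertainty(data)
print(f"analytical sd ~ {s_anal:.2f}, sampling sd ~ {s_samp:.2f}")
```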

  3. Computer simulation comparison of tripolar, bipolar, and spline Laplacian electrocardiogram estimators.

    Science.gov (United States)

    Chen, T; Besio, W; Dai, W

    2009-01-01

    A comparison of the performance of the tripolar and bipolar concentric as well as spline Laplacian electrocardiograms (LECGs) and body surface Laplacian mappings (BSLMs) for localizing and imaging the cardiac electrical activation has been investigated based on computer simulation. In the simulation, a simplified eccentric heart-torso sphere-cylinder homogeneous volume conductor model was developed. Multiple dipoles with different orientations were used to simulate the underlying cardiac electrical activities. Results show that the tripolar concentric ring electrodes produce the most accurate LECG and BSLM estimation among the three estimators, with the best performance in spatial resolution.

  4. A comparison of small-area estimation techniques to estimate selected stand attributes using LiDAR-derived auxiliary variables

    Science.gov (United States)

    Michael E. Goerndt; Vicente J. Monleon; Hailemariam. Temesgen

    2011-01-01

    One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...

  5. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  6. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  7. Critical length sampling: a method to estimate the volume of downed coarse woody debris

    Science.gov (United States)

    Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey

    2010-01-01

    In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...

  8. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    Science.gov (United States)

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

    Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
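
    Solute load is the time integral of concentration times discharge, so sampling strategies differ mainly in which concentration points enter the integration. A toy comparison on synthetic storm data of a "true" load from dense sampling against loads estimated from sparser fixed-interval samples (all numbers invented):

```python
import numpy as np

def load(times_h, conc_mg_l, flow_m3_s):
    """Solute load (kg) as the trapezoidal integral of C(t) * Q(t)."""
    flux_kg_s = conc_mg_l * flow_m3_s / 1000.0   # mg/L * m3/s = g/s
    return np.trapz(flux_kg_s, times_h * 3600.0)

# Synthetic 48-h storm: a discharge peak preceded by a concentration flush.
t = np.linspace(0, 48, 481)                      # 0.1-h resolution
q = 1.0 + 9.0 * np.exp(-((t - 12) / 6.0) ** 2)   # discharge, m3/s
c = 2.0 + 8.0 * np.exp(-((t - 10) / 5.0) ** 2)   # nitrate, mg/L

true_load = load(t, c, q)
for step in (10, 40, 120):                       # sample every 1 h, 4 h, 12 h
    idx = np.arange(0, len(t), step)
    est = load(t[idx], c[idx], q[idx])
    print(f"sampling every {t[step]:.0f} h: error {100*(est-true_load)/true_load:+.1f}%")
```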

  9. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations

  10. The use of importance sampling in a trial assessment to obtain converged estimates of radiological risk

    International Nuclear Information System (INIS)

    Johnson, K.; Lucas, R.

    1986-12-01

    In developing a methodology for assessing potential sites for the disposal of radioactive wastes, the Department of the Environment has conducted a series of trial assessment exercises. In order to produce converged estimates of radiological risk using the SYVAC A/C simulation system an efficient sampling procedure is required. Previous work has demonstrated that importance sampling can substantially increase sampling efficiency. This study used importance sampling to produce converged estimates of risk for the first DoE trial assessment. Four major nuclide chains were analysed. In each case importance sampling produced converged risk estimates with between 10 and 170 times fewer runs of the SYVAC A/C model. This increase in sampling efficiency can reduce the total elapsed time required to obtain a converged estimate of risk from one nuclide chain by a factor of 20. The results of this study suggests that the use of importance sampling could reduce the elapsed time required to perform a risk assessment of a potential site by a factor of ten. (author)
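
    The efficiency gain from importance sampling comes from drawing inputs from a density g concentrated in the region that drives risk and reweighting each run by the likelihood ratio f/g. A minimal sketch estimating a small tail probability, standing in for a converged risk estimate; the distributions are illustrative, not the SYVAC A/C parameter set.

```python
import numpy as np

rng = np.random.default_rng(42)
THRESHOLD = 4.0                      # "risk" event: standard normal exceeds 4

# Crude Monte Carlo: almost no samples land in the tail.
n = 100_000
x = rng.standard_normal(n)
crude = np.mean(x > THRESHOLD)

# Importance sampling: shift the sampling density into the tail
# and weight each sample by the density ratio f(y)/g(y).
shift = THRESHOLD
y = rng.standard_normal(n) + shift
weights = np.exp(-shift * y + 0.5 * shift**2)    # N(0,1) pdf / N(shift,1) pdf
is_est = np.mean((y > THRESHOLD) * weights)

print(f"crude MC: {crude:.2e}, importance sampling: {is_est:.2e}")
print(f"exact   : {3.167e-05:.2e}")              # 1 - Phi(4), for reference
```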

  11. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable to compute the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
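
    A common way to approximate the first-order index S_i = Var(E[Y|X_i]) / Var(Y) from given input-output samples, without re-running the model, is to bin the samples on X_i and take the variance of the bin means. The sketch below uses that generic binning approach, not the paper's specific estimator; the test model is chosen so the true indices (0.8 and 0.2) are known.

```python
import numpy as np

def first_order_sobol(x, y, n_bins=30):
    """Estimate first-order Sobol' indices from given input-output
    samples by binning each input and computing Var of the bin means."""
    var_y = np.var(y)
    indices = []
    for i in range(x.shape[1]):
        edges = np.quantile(x[:, i], np.linspace(0, 1, n_bins + 1))
        bins = np.clip(np.searchsorted(edges, x[:, i], side="right") - 1,
                       0, n_bins - 1)
        means = np.array([y[bins == b].mean() for b in range(n_bins)])
        counts = np.array([(bins == b).sum() for b in range(n_bins)])
        cond_var = np.sum(counts * (means - y.mean()) ** 2) / len(y)
        indices.append(cond_var / var_y)
    return indices

# Test model Y = X1 + 0.5*X2 with independent uniform inputs:
# analytically S1 = 1/(1 + 0.25) = 0.8 and S2 = 0.2.
rng = np.random.default_rng(7)
x = rng.uniform(0, 1, size=(50_000, 2))
y = x[:, 0] + 0.5 * x[:, 1]
print([round(s, 3) for s in first_order_sobol(x, y)])
```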

  12. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)

  13. Estimating species – area relationships by modeling abundance and frequency subject to incomplete sampling

    Science.gov (United States)

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  14. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters.

    Science.gov (United States)

    Xu, Huijun; Gordon, J James; Siebers, Jeffrey V

    2011-02-01

    A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. The

  15. Bridging the gaps between non-invasive genetic sampling and population parameter estimation

    Science.gov (United States)

    Francesca Marucco; Luigi Boitani; Daniel H. Pletscher; Michael K. Schwartz

    2011-01-01

    Reliable estimates of population parameters are necessary for effective management and conservation actions. The use of genetic data for capture-recapture (CR) analyses has become an important tool to estimate population parameters for elusive species. Strong emphasis has been placed on the genetic analysis of non-invasive samples, or on the CR analysis; however,...

  16. Estimating an appropriate sampling frequency for monitoring ground water well contamination

    International Nuclear Information System (INIS)

    Tuckfield, R.C.

    1994-01-01

    Nearly 1,500 ground water wells at the Savannah River Site (SRS) are sampled quarterly to monitor contamination by radionuclides and other hazardous constituents from nearby waste sites. Some 10,000 water samples were collected in 1993 at a laboratory analysis cost of $10,000,000. No widely accepted statistical method has been developed, to date, for estimating a technically defensible ground water sampling frequency consistent and compliant with federal regulations. Such a method is presented here based on the concept of statistical independence among successively measured contaminant concentrations in time
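
    One way to operationalize "statistical independence among successively measured contaminant concentrations" is to estimate the lag-1 autocorrelation of the monitoring series and lengthen the sampling interval until successive observations look approximately independent. The sketch below is only an illustration of that idea, not the SRS procedure; the series, the doubling rule, and the 0.2 cutoff are invented.

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def suggest_interval(series, base_interval_days=90, cutoff=0.2):
    """Double the sampling interval until successive observations
    (taken by striding the series) look approximately independent."""
    stride = 1
    while lag1_autocorr(series[::stride]) > cutoff and stride < len(series) // 4:
        stride *= 2
    return stride * base_interval_days

# Invented quarterly contaminant concentrations with strong persistence.
rng = np.random.default_rng(5)
conc = np.empty(80)
conc[0] = 100.0
for t in range(1, 80):
    conc[t] = 0.9 * conc[t - 1] + rng.normal(0, 5)   # AR(1), rho = 0.9
print("suggested interval:", suggest_interval(conc), "days")
```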

  17. Dried blood spot measurement: application in tacrolimus monitoring using limited sampling strategy and abbreviated AUC estimation.

    Science.gov (United States)

    Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo

    2008-02-01

    Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem-mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method in tacrolimus level monitoring using the limited sampling strategy and abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12) estimated from venous blood samples and from fingerprick blood samples (r(2) = 0.96, P < 0.001), supporting the use of the DBS method with the limited sampling and abbreviated AUC(0-12) strategy as drug monitoring.
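
    Abbreviated AUC formulas of this kind are typically multiple linear regressions of the full AUC on a few timed concentrations, e.g. AUC(0-12) ≈ a + b·C0 + c·C2 + d·C4. The sketch below fits such a formula by least squares on invented training data; the coefficients are illustrative, not the center's validated equation.

```python
import numpy as np

def fit_abbreviated_auc(c0, c2, c4, full_auc):
    """Least-squares fit of AUC(0-12) ~ a + b*C0 + c*C2 + d*C4."""
    X = np.column_stack([np.ones_like(c0), c0, c2, c4])
    coef, *_ = np.linalg.lstsq(X, full_auc, rcond=None)
    return coef

# Invented training data: trough (C0), 2-h and 4-h tacrolimus levels (ng/mL)
# with "true" full AUCs from intensive sampling.
rng = np.random.default_rng(11)
c0 = rng.uniform(4, 12, 50)
c2 = c0 + rng.uniform(5, 15, 50)
c4 = c0 + rng.uniform(2, 8, 50)
auc = 10 + 3.2 * c0 + 2.1 * c2 + 1.5 * c4 + rng.normal(0, 4, 50)

a, b, c, d = fit_abbreviated_auc(c0, c2, c4, auc)
print(f"AUC(0-12) ~ {a:.1f} + {b:.2f}*C0 + {c:.2f}*C2 + {d:.2f}*C4")
```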

  18. Estimating and comparing microbial diversity in the presence of sequencing errors

    Science.gov (United States)

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This
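
    Hill numbers unify the common diversity measures: for relative abundances p_i, qD = (Σ p_i^q)^{1/(1-q)}, with the q → 1 limit equal to exp(-Σ p_i ln p_i). The sketch below computes the diversity profile from raw counts; the counts are invented, and no singleton correction of the kind proposed in the paper is applied.

```python
import numpy as np

def hill_number(counts, q):
    """Hill number (effective number of taxa) of order q."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))   # exponential of Shannon entropy
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

# Invented OTU counts, including possibly spurious singletons.
counts = [150, 80, 40, 12, 5, 1, 1, 1]
for q in (0, 1, 2):
    print(f"q={q}: {hill_number(counts, q):.2f}")
```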

  19. Bayesian estimation of P(X > x) from a small sample of Gaussian data

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2017-01-01

    The classical statistical uncertainty problem of estimation of upper tail probabilities on the basis of a small sample of observations of a Gaussian random variable is considered. Predictive posterior estimation is discussed, adopting the standard statistical model with diffuse priors of the two...

  20. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  1. Estimation of Disability Weights in the General Population of South Korea Using a Paired Comparison.

    Science.gov (United States)

    Ock, Minsu; Ahn, Jeonghoon; Yoon, Seok-Jun; Jo, Min-Woo

    2016-01-01

    We estimated the disability weights in the South Korean population by using a paired comparison-only model wherein 'full health' and 'being dead' were included as anchor points, without resorting to a cardinal method, such as person trade-off. The study was conducted via 2 types of survey: a household survey involving computer-assisted face-to-face interviews and a web-based survey (similar to that of the GBD 2010 disability weight study). With regard to the valuation methods, paired comparison, visual analogue scale (VAS), and standard gamble (SG) were used in the household survey, whereas paired comparison and population health equivalence (PHE) were used in the web-based survey. Accordingly, we described a total of 258 health states, with 'full health' and 'being dead' designated as anchor points. In the analysis, 4 models were considered: a paired comparison-only model; a hybrid model between paired comparison and PHE; a VAS model; and an SG model. A total of 2,728 and 3,188 individuals participated in the household and web-based survey, respectively. The Pearson correlation coefficients of the disability weights of health states between the GBD 2010 study and the current models were 0.802 for Model 2, 0.796 for Model 1, 0.681 for Model 3, and 0.574 for Model 4 (all P-values < 0.001). The paired comparison-only model was preferred for estimating disability weights in South Korea and for maintaining simplicity in the analysis. Thus, disability weights can be more easily estimated by using paired comparison alone, with 'full health' and 'being dead' as one of the health states. As noted in our study, we believe that additional evidence regarding the universality of disability weight can be observed by using a simplified methodology of estimating disability weights.

  3. A Convenient Method for Estimation of the Isotopic Abundance in Uranium Bearing Samples

    International Nuclear Information System (INIS)

    Al-Saleh, F.S.; Al-Mukren, A.H.; Farouk, M.A.

    2008-01-01

    A convenient and simple gamma-ray spectrometric method for estimating the isotopic abundance in uranium-bearing samples is developed, using a hyper-pure germanium (HPGe) spectrometer and a standard uranium sample of known isotopic abundance

  4. A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.

    Science.gov (United States)

    Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R

    2017-07-01

    The conventional radio-analytical technique used for estimation of Pu-isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis takes nearly 3-4 days to complete. Excreta analysis results are often required urgently, particularly under repeat and incidental/emergency situations; there is therefore a need to reduce the analysis time for the estimation of Pu-isotopes in bioassay samples. This paper details the standardization of a rapid method for estimation of Pu-isotopes in urine samples using a multi-purpose centrifuge and TEVA resin followed by alpha spectrometry. The rapid method involves oxidation of urine samples, co-precipitation of plutonium along with calcium phosphate, sample preparation using a high volume centrifuge, and separation of Pu using TEVA resin. The Pu-fraction was electrodeposited and activity estimated using 236 Pu tracer recovery by alpha spectrometry. Ten routine urine samples of radiation workers were analyzed, and consistent radiochemical tracer recovery was obtained in the range 47-88% with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life), therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment have been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0
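
    The parasite clearance half-life in these studies comes from the slope of the log-linear decay of parasite density over time: if log densities fall linearly with slope k (per hour), the half-life is ln 2 / |k|. A minimal sketch of that fit on invented six-hourly counts; the full WWARN Parasite Clearance Estimator additionally handles lag and tail phases, which this ignores.

```python
import numpy as np

def clearance_half_life(hours, densities):
    """Half-life (h) from a log-linear fit of parasite density vs time.
    Only positive counts are used; lag/tail handling is omitted."""
    t = np.asarray(hours, dtype=float)
    d = np.asarray(densities, dtype=float)
    keep = d > 0
    slope, _ = np.polyfit(t[keep], np.log(d[keep]), 1)
    return np.log(2) / abs(slope)

# Invented six-hourly densities (parasites/uL) after artemisinin treatment.
hours = [0, 6, 12, 18, 24, 30, 36]
dens = [120_000, 35_000, 9_500, 2_800, 800, 230, 60]
print(f"half-life ~ {clearance_half_life(hours, dens):.1f} h")
```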

  6. Empirical insights and considerations for the OBT inter-laboratory comparison of environmental samples

    International Nuclear Information System (INIS)

    Kim, Sang-Bog; Roche, Jennifer

    2013-01-01

    Organically bound tritium (OBT) is an important tritium species that can be measured in most environmental samples, but has only recently been recognized as a species of tritium in these samples. Currently, OBT is not routinely measured by environmental monitoring laboratories around the world. There are no certified reference materials (CRMs) for environmental samples. Thus, quality assurance (QA), or verification of the accuracy of the OBT measurement, is not possible. Alternatively, quality control (QC), or verification of the precision of the OBT measurement, can be achieved. In the past, there have been differences in OBT analysis results between environmental laboratories. A possible reason for the discrepancies may be differences in analytical methods. Therefore, inter-laboratory OBT comparisons among the environmental laboratories are important and would provide a good opportunity for adopting a reference OBT analytical procedure. Due to the analytical issues, only limited information is available on OBT measurement. Previously conducted OBT inter-laboratory practices are reviewed and the findings are described. Based on our experiences, a few considerations were suggested for the international OBT inter-laboratory comparison exercise to be completed in the near future. - Highlights: • Inter-laboratory OBT comparisons would provide a good opportunity for developing reference OBT analytical procedures. • The measurement of environmental OBT concentrations has a higher associated uncertainty. • Certified reference materials for OBT in environmental samples are required

  7. Comparisons between in vivo estimates of systemic Pu deposition and autopsy data

    International Nuclear Information System (INIS)

    Schofield, G.B.

    1982-01-01

    In the UK the radiochemical analyses of autopsy specimens have been undertaken following the death of 30 employees during the period 1964 - 1980 whose work has at some time brought them into contact with plutonium. These workers were routinely monitored during their lifetime and estimates made of their total body content of plutonium. Past experience has shown that the urinary plutonium content bears a marked relationship to the bone and liver deposition levels of plutonium but not to the quantities found in the lungs (1). In this paper therefore the comparisons are made between in vivo estimates of bone and liver plutonium deposition and estimates derived from both the wet and ash weights of autopsy specimens. (author)

  8. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    Science.gov (United States)

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually process each RNA-seq sample individually, ignoring the fact that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse structure linking a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.

  9. Estimation of uranium isotope in urine samples using extraction chromatography resin

    International Nuclear Information System (INIS)

    Thakur, Smita S.; Yadav, J.R.; Rao, D.D.

    2012-01-01

    Internal exposure monitoring for alpha-emitting radionuclides is carried out by analysis of bioassay samples. For occupational radiation workers handling uranium in reprocessing or fuel fabrication facilities, there exists a possibility of internal exposure, and urine assay is the preferred method for monitoring such exposure. Estimation of low uranium concentrations at the mBq level by alpha spectrometry requires preconcentration and separation from a large volume of urine. For this purpose, urine samples collected from non-radiation workers were spiked with ²³²U tracer at the mBq level to estimate the chemical yield. Uranium in the urine samples was pre-concentrated by calcium phosphate co-precipitation and separated on the extraction chromatography resin U/TEVA, in which the extractant DAAP (diamylamylphosphonate) is supported on inert Amberlite XAD-7 material. After co-precipitation, the precipitate was centrifuged and dissolved in 10 ml of 1 M Al(NO₃)₃ prepared in 3 M HNO₃. The sample thus prepared was loaded on the extraction chromatography resin, pre-conditioned with 10 ml of 3 M HNO₃. The column was washed with 10 ml of 3 M HNO₃, then rinsed with 5 ml of 9 M HCl followed by 20 ml of 0.05 M oxalic acid prepared in 5 M HCl to remove interference from Th and Np, if present in the sample. Uranium was eluted from the U/TEVA column with 15 ml of 0.01 M HCl. The eluted uranium fraction was electrodeposited on a stainless steel planchet and counted by alpha spectrometry for 360,000 s. The approximate analysis time from sample loading to stripping is 2 hours, compared with 3.5 hours for the conventional ion-exchange method. Seven urine samples from non-radiation workers were radiochemically analyzed by this technique, and the radiochemical yield was found to be in the range of 69-91%. The efficacy of this method relative to the conventional anion-exchange technique standardized earlier at this laboratory is also highlighted. Minimum detectable activity

  10. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of trials, however, report results as the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
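As an illustration of the kind of estimators this record describes, the sketch below implements the approximations commonly attributed to Wan et al. (2014) for two reporting scenarios (min/median/max, and quartiles); the exact constants should be verified against the paper before use.

```python
import numpy as np
from scipy.stats import norm

def mean_sd_from_range(a, m, b, n):
    """From minimum a, median m, maximum b and sample size n (scenario C1)."""
    mean = (a + 2 * m + b) / 4.0
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_iqr(q1, m, q3, n):
    """From first quartile, median and third quartile (scenario C3)."""
    mean = (q1 + m + q3) / 3.0
    sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd

print(mean_sd_from_range(10, 25, 52, n=40))
print(mean_sd_from_iqr(18, 25, 34, n=40))
```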

  11. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Science.gov (United States)

    Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry

    2011-10-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and are enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, and with a worst case of 3.66 Å, were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer-native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, the successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  12. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Directory of Open Access Journals (Sweden)

    Hyun Joo

    2011-10-01

    Full Text Available Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and are enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, and with a worst case of 3.66 Å, were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer-native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, the successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  13. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity

    Science.gov (United States)

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and are enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, and with a worst case of 3.66 Å, were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer-native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, the successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638

  14. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if 1-m² plots contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
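A minimal sketch of the two computations this record relies on, using hypothetical cyst counts: a method-of-moments negative binomial aggregation parameter, and the coefficient of variation of the mean for different numbers of repeated bulk samples per plot.

```python
import numpy as np

counts = np.array([88, 102, 75, 130, 95, 61, 118, 84, 99, 72])  # cysts/100 g (hypothetical)

mean, var = counts.mean(), counts.var(ddof=1)
# Method-of-moments negative binomial: var = mean + mean^2 / k
k = mean**2 / (var - mean) if var > mean else float("inf")
print(f"mean = {mean:.1f}, variance = {var:.1f}, aggregation k = {k:.2f}")

# Sampling precision (cv of the mean) for r repeated bulk samples per plot
for r in (5, 7, 10):
    cv = 100 * np.sqrt(var / r) / mean
    print(f"{r} bulk samples: cv = {cv:.1f}%")
```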

  15. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
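The two-step idea (pre-sample to locate the failure region, then build an importance density there) can be sketched as follows. The limit-state function g() below is a toy stand-in, not the AP1000 passive-system model, and a simple Gaussian importance density is assumed.

```python
import numpy as np

rng = np.random.default_rng(7)

def g(x):                                  # toy limit state: failure when g < 0
    return 4.0 - x.sum(axis=1)

dim, n_pre, n_is = 2, 20_000, 5_000

# 1) Pre-sampling: crude Monte Carlo points that land in the failure region
x_pre = rng.standard_normal((n_pre, dim))
fail = x_pre[g(x_pre) < 0]

# 2) Importance density: Gaussian fitted to the pre-sampled failure points
mu, cov = fail.mean(axis=0), np.cov(fail.T)
inv_cov = np.linalg.inv(cov)

# 3) Importance-sampling estimate of the failure probability
x_is = rng.multivariate_normal(mu, cov, n_is)
log_f = -0.5 * (x_is**2).sum(axis=1) - 0.5 * dim * np.log(2 * np.pi)
d = x_is - mu
log_h = (-0.5 * np.einsum("ij,jk,ik->i", d, inv_cov, d)
         - 0.5 * np.log((2 * np.pi) ** dim * np.linalg.det(cov)))
w = np.exp(log_f - log_h) * (g(x_is) < 0)
print("estimated failure probability:", w.mean())
```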

  16. Comparison of Pattern Recognition, Artificial Neural Network and Pedotransfer Functions for Estimation of Soil Water Parameters

    Directory of Open Access Journals (Sweden)

    Amir LAKZIAN

    2010-09-01

    Full Text Available This paper presents a comparison of three different approaches to estimating soil water content at defined values of soil water potential, based on selected parameters of the soil solid phase. Forty sampling locations in northeast Iran were selected, and undisturbed samples were taken to measure the water content at field capacity (FC, -33 kPa) and permanent wilting point (PWP, -1500 kPa). At each location the solid particles of each sample, including the percentages of sand, silt and clay, were measured. Organic carbon percentage and soil texture were also determined for each soil sample at each location. Three different techniques, namely a pattern recognition approach (k nearest neighbour, k-NN), Artificial Neural Network (ANN) and pedotransfer functions (PTF), were used to predict the soil water at each sampling location. Mean square deviation (MSD) and its components, index of agreement (d), root mean square difference (RMSD) and normalized RMSD (RMSDr) were used to evaluate the performance of all three approaches. Our results showed that k-NN and PTF performed better than ANN in predicting water content at both FC and PWP matric potentials. Various statistical criteria for simulation performance also indicated that, between k-NN and PTF, the former predicted water content at PWP more accurately, while both approaches showed similar accuracy in predicting water content at FC.

  17. Comparison of Endotoxin Exposure Assessment by Bioaerosol Impinger and Filter-Sampling Methods

    OpenAIRE

    Duchaine, Caroline; Thorne, Peter S.; Mériaux, Anne; Grimard, Yan; Whitten, Paul; Cormier, Yvon

    2001-01-01

    Environmental assessment data collected in two prior occupational hygiene studies of swine barns and sawmills allowed the comparison of concurrent, triplicate, side-by-side endotoxin measurements using air sampling filters and bioaerosol impingers. Endotoxin concentrations in impinger solutions and filter eluates were assayed using the Limulus amebocyte lysate assay. In sawmills, impinger sampling yielded significantly higher endotoxin concentration measurements and lower variances than filte...

  18. Evaluation of design flood estimates with respect to sample size

    Science.gov (United States)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimation give recommendations on which data, probability distribution and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended, in which the local observations are used for estimating the index flood and regional data for estimating the growth curve. For 30-50 years of data, a two-parameter distribution is recommended, and for more than 50 years of data, a three-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Do the answers to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample. The reliability indices describe the degree to which design flood predictions can be trusted.

  19. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is an approximation useful for the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect the physical meaning associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
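A minimal sketch contrasting the moment-based estimator with a likelihood-based fit on simulated envelope data; scipy's generic nakagami.fit is used here as a stand-in for the specific MLE approximations (MLE1, MLE2, MLEgw) studied in the paper.

```python
import numpy as np
from scipy.stats import nakagami

rng = np.random.default_rng(0)
true_m, omega = 0.8, 1.0
x = nakagami.rvs(true_m, scale=np.sqrt(omega), size=200, random_state=rng)

# Moment-based estimator (MBE): m = E[x^2]^2 / Var(x^2)
x2 = x**2
m_mbe = x2.mean() ** 2 / x2.var()

# Likelihood-based estimate, with the location parameter fixed at zero
m_mle, loc, scale = nakagami.fit(x, floc=0)

print(f"true m = {true_m}, MBE = {m_mbe:.3f}, MLE (scipy fit) = {m_mle:.3f}")
```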

  20. A comparison of point counts with a new acoustic sampling method ...

    African Journals Online (AJOL)

    We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc laboratory listening to acoustic samples are very similar, especially for a distance limited up to 50 m. Species that were frequently missed during both point counts and listening to acoustic samples ...

  1. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).

  2. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold band (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then scaled up to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement by Fleiss' kappa was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large linkage projects, where the number of record-pairs produced may be very large, often running into millions.
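The scaling step at the core of this sampling method can be sketched as follows, with hypothetical band totals and review outcomes: reviewed samples within each similarity-score band are extrapolated to the band totals to estimate false positives and false negatives.

```python
# Each band: (total pairs in band, pairs sampled, true matches found, accepted?)
bands = [
    (100_000, 200, 10,  False),  # below cut-off: extrapolated matches -> false negatives
    (20_000,  200, 150, True),   # above cut-off: extrapolated non-matches -> false positives
    (5_000,   200, 198, True),
]

tp = fp = fn = 0.0
for total, sampled, matches, accepted in bands:
    match_rate = matches / sampled
    if accepted:
        tp += total * match_rate
        fp += total * (1 - match_rate)
    else:
        fn += total * match_rate

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision ≈ {precision:.3f}, recall ≈ {recall:.3f}")
```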

  3. Evaporation estimation of rift valley lakes: comparison of models.

    Science.gov (United States)

    Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe

    2009-01-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming and difficult. For areas like the Rift Valley region of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy Balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the models' outputs with those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large-scale applications to understand the spatial variation of the latent heat flux.
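For reference, the Simple (Abtew) Method is commonly stated as E = K·Rs/λ, with Rs the incident solar radiation. The sketch below assumes K ≈ 0.53 and λ ≈ 2.45 MJ/kg, as usually cited; both constants should be verified against Abtew's original work before use.

```python
LAMBDA = 2.45  # latent heat of vaporization, MJ/kg (assumed)
K = 0.53       # dimensionless calibration coefficient (assumed)

def abtew_evaporation(rs_mj_m2_day: float) -> float:
    """Open-water evaporation in mm/day estimated from solar radiation alone."""
    return K * rs_mj_m2_day / LAMBDA

print(abtew_evaporation(22.0))  # ~4.8 mm/day for a sunny tropical day
```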

  4. Evaporation Estimation of Rift Valley Lakes: Comparison of Models

    Directory of Open Access Journals (Sweden)

    Tibebe Dessalegne

    2009-12-01

    Full Text Available Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming and difficult. For areas like the Rift Valley region of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy Balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the models' outputs with those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large-scale applications to understand the spatial variation of the latent heat flux.

  5. Comparison of the COMRADEX-IV and AIRDOS-EPA methodologies for estimating the radiation dose to man from radionuclide releases to the atmosphere

    International Nuclear Information System (INIS)

    Miller, C.W.; Hoffman, F.O.; Dunning, D.E. Jr.

    1981-01-01

    This report presents a comparison between two computerized methodologies for estimating the radiation dose to man from radionuclide releases to the atmosphere. The COMRADEX-IV code was designed to provide a means of assessing potential radiological consequences from postulated power reactor accidents. The AIRDOS-EPA code was developed primarily to assess routine radionuclide releases from nuclear facilities. Although a number of different calculations are performed by these codes, three calculations are common to both: atmospheric dispersion, estimation of internal dose from inhalation, and estimation of external dose from immersion in air containing gamma-emitting radionuclides. The models used in these calculations were examined and found, in general, to be the same. Most differences in the doses calculated by the two codes are due to differences in the values chosen for input parameters, not to model differences. A sample problem is presented for illustration

  6. Estimation of uranium in different types of water and sand samples by adsorptive stripping voltammetry

    International Nuclear Information System (INIS)

    Bhalke, Sunil; Raghunath, Radha; Mishra, Suchismita; Suseela, B.; Tripathi, R.M.; Pandit, G.G.; Shukla, V.K.; Puranik, V.D.

    2005-01-01

    A method was standardized for the estimation of uranium by adsorptive stripping voltammetry using chloranilic acid (CAA) as the complexing agent. The optimum parameters for the best sensitivity and good reproducibility for uranium were a 60 s adsorption time, pH 1.8, chloranilic acid (2×10⁻⁴ M) and 0.002 M EDTA. The peak potential under these conditions was found to be -0.03 V, and a sensitivity of 1.19 nA/nM uranium was observed. The detection limit under these conditions was found to be 0.55 nM; this can be further improved by increasing the adsorption time. Using this method, uranium was estimated in different types of water samples such as seawater, synthetic seawater, stream water, tap water, well water, bore-well water and process water. The method has also been used for the estimation of uranium in sand, in the organic solvent used for extraction of uranium from phosphoric acid, and in its raffinate. Sample digestion procedures used for the estimation of uranium in various matrices are discussed. It was observed from the analysis that the uranium peak potential changes with the matrix of the sample; hence, the standard addition method is the best way to obtain reliable and accurate results. Quality assurance of the standardized method was verified by analyzing a certified reference water sample from USDOE, by participating in intercomparison exercises, and by estimating the uranium content of water samples by both differential pulse adsorptive stripping voltammetry and laser fluorimetry. (author)
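The standard addition procedure this record recommends can be sketched as a linear extrapolation: spike aliquots of the sample with known uranium additions, fit peak current against added concentration, and read the sample concentration from the x-intercept. The peak currents below are hypothetical.

```python
import numpy as np

added_nM = np.array([0.0, 2.0, 4.0, 6.0])   # spiked uranium additions (nM)
peak_nA = np.array([4.8, 7.1, 9.6, 11.9])   # hypothetical voltammetric peak currents

slope, intercept = np.polyfit(added_nM, peak_nA, 1)
c_sample = intercept / slope                 # magnitude of the x-intercept
print(f"sensitivity ≈ {slope:.2f} nA/nM, sample ≈ {c_sample:.2f} nM uranium")
```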

  7. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.

  8. Per tree estimates with n-tree distance sampling: an application to increment core data

    Science.gov (United States)

    Thomas B. Lynch; Robert F. Wittwer

    2002-01-01

    Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...

  9. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    Directory of Open Access Journals (Sweden)

    Femke Broekhuis

    Full Text Available Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100 km² across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  10. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    Science.gov (United States)

    Broekhuis, Femke; Gopalaswamy, Arjun M

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100 km² across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  11. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m² contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
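Two of the estimators compared in this record are simple enough to sketch directly. The sketch below implements Chao1 (abundance-based) and the first-order jackknife Jack1 (incidence-based) in their standard textbook forms, on hypothetical data.

```python
import numpy as np

def chao1(abundances):
    """Chao1 richness estimate from per-species abundance counts."""
    f1 = np.sum(abundances == 1)              # singletons
    f2 = np.sum(abundances == 2)              # doubletons
    s_obs = np.sum(abundances > 0)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0    # bias-corrected form
    return s_obs + f1**2 / (2.0 * f2)

def jack1(incidence):
    """First-order jackknife from a species-by-plot presence matrix."""
    m = incidence.shape[1]                    # number of plots
    q1 = np.sum(incidence.sum(axis=1) == 1)   # species found in exactly one plot
    s_obs = np.sum(incidence.sum(axis=1) > 0)
    return s_obs + q1 * (m - 1) / m

abund = np.array([1, 1, 2, 5, 12, 1, 3, 40, 2, 7])
inc = (np.random.default_rng(3).random((10, 20)) < 0.15).astype(int)
print("Chao1:", chao1(abund), " Jack1:", jack1(inc))
```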

  12. Comparison of particulate matter exposure estimates in young children from personal sampling equipment and a robotic sampler.

    Science.gov (United States)

    Sagona, Jessica A; Shalat, Stuart L; Wang, Zuocheng; Ramagopal, Maya; Black, Kathleen; Hernandez, Marta; Mainelis, Gediminas

    2017-05-01

    Accurate characterization of particulate matter (PM) exposure in young children is difficult, because personal samplers are often too heavy, bulky or impractical to be used. The Pretoddler Inhalable Particulate Environmental Robotic (PIPER) sampler was developed to help address this problem. In this study, we measured inhalable PM exposures in 2-year-olds via a lightweight personal sampler worn in a small backpack and evaluated the use of a robotic sampler with an identical sampling train for estimating PM exposure in this age group. PM mass concentrations measured by the personal sampler ranged from 100 to almost 1,200 μg/m³, with a median value of 331 μg/m³. PM concentrations measured by PIPER were considerably lower, ranging from 14 to 513 μg/m³ with a median value of 56 μg/m³. Floor cleaning habits and activity patterns of the 2-year-olds varied widely by home; vigorous play and recent floor cleaning were most associated with higher personal exposure. Our findings highlight the need for additional characterization of children's activity patterns and their effect on personal exposures.

  13. Comparison of vapor sampling system (VSS) and in situ vapor sampling (ISVS) methods on Tanks C-107, BY-108, and S-102

    International Nuclear Information System (INIS)

    Huckaby, J.L.; Edwards, J.A.; Evans, J.C.

    1996-05-01

    The objective of this report is to evaluate the equivalency of two methods used to sample nonradioactive gases and vapors in the Hanford Site high-level waste tank headspaces. In addition to the comparison of the two sampling methods, the effects of an in-line fine particle filter on sampling results are also examined to determine whether results are adversely affected by its presence. This report discusses data from a January 1996 sampling

  14. Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method

    Science.gov (United States)

    Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.

    1982-01-01

    A two-phase sampling method and the optimal sampling segment dimensions for the estimation of sugar cane cultivated area were developed. The technique employs visual interpretation of LANDSAT images, with panchromatic aerial photographs taken as the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two-phase design was 157% when compared with a one-phase aerial photograph sample.
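A minimal sketch of the two-phase (double sampling) regression estimator underlying this kind of design, with hypothetical segment data: a large phase-1 sample is interpreted on imagery, a small phase-2 subsample is also photo-interpreted, and the regression adjusts the phase-2 mean using the phase-1 mean.

```python
import numpy as np

rng = np.random.default_rng(5)

n1 = 400                                        # phase 1: image interpretation only
x1 = rng.beta(2, 5, n1)                         # interpreted cane proportion per segment

idx = rng.choice(n1, 60, replace=False)         # phase 2: photo-interpreted subsample
x2 = x1[idx]
y2 = np.clip(x2 + rng.normal(0, 0.03, 60), 0, 1)  # photo "ground truth"

b = np.polyfit(x2, y2, 1)[0]                    # regression slope on the subsample
y_reg = y2.mean() + b * (x1.mean() - x2.mean()) # double-sampling regression estimator
print(f"two-phase regression estimate of mean cane proportion: {y_reg:.4f}")
```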

  15. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.
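The advantage of permanent plots reported here follows from a standard variance identity: revisiting the same plots induces a positive correlation ρ between the two occasions, which reduces the variance of the estimated change. A minimal sketch with assumed standard deviations and correlation:

```python
def var_change(sd1, sd2, n, rho):
    """Variance of (mean2 - mean1) across n plots; rho = 0 for temporary plots."""
    return (sd1**2 + sd2**2 - 2 * rho * sd1 * sd2) / n

sd1, sd2, n = 4.0, 5.0, 50
print("temporary plots:", var_change(sd1, sd2, n, rho=0.0))
print("permanent plots:", var_change(sd1, sd2, n, rho=0.7))
```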

  16. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  17. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
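The reported procedure can be sketched end-to-end on a synthetic image: blur noise with a known Gaussian PSF, regress the log squared Fourier norm on squared frequency, recover the PSF width from the slope, and convert the fitted PSF to an MTF. The frequency cut-off and test image below are arbitrary choices, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
img = gaussian_filter(rng.random((256, 256)), sigma=2.0)  # synthetic blurred image

power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

f = np.fft.fftshift(np.fft.fftfreq(256))                  # cycles/pixel
fy, fx = np.meshgrid(f, f, indexing="ij")
r2 = fx**2 + fy**2

# Gaussian PSF assumption: log|F|^2 ~ const - 4*pi^2*sigma^2 * r2
mask = (r2 > 0) & (r2 < 0.05)
slope, _ = np.polyfit(r2[mask], np.log(power[mask]), 1)
sigma_est = np.sqrt(-slope / (4 * np.pi**2))

freqs = np.linspace(0, 0.5, 6)
mtf = np.exp(-2 * (np.pi * sigma_est * freqs) ** 2)       # MTF of the fitted PSF
print(f"estimated PSF sigma ≈ {sigma_est:.2f} px; MTF samples:", np.round(mtf, 3))
```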

  18. Estimation of Snow Parameters from Dual-Wavelength Airborne Radar

    Science.gov (United States)

    Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

    1997-01-01

    Estimation of snow characteristics from airborne radar measurements would complement in situ measurements. While in situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate two parameters of a drop size distribution if the snow density is assumed. Estimating, rather than assuming, a snow density is difficult, however, and represents a major limitation of the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in situ measurements, examination of the large-scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.

  19. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.

  20. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    Science.gov (United States)

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete-time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited to this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimation.
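The core update behind such a transform is that the DFT sum over nonuniform sample times can be extended one sample at a time at O(N) cost for N tracked frequencies. The sketch below shows only this accumulation step; a full RFT would also window or discount old samples, which is omitted here.

```python
import numpy as np

class RecursiveDFT:
    def __init__(self, freqs):
        self.freqs = np.asarray(freqs, dtype=float)  # frequencies of interest (Hz)
        self.coef = np.zeros(self.freqs.size, dtype=complex)
        self.n = 0

    def update(self, t, x):
        """Incorporate one sample x taken at (possibly irregular) time t."""
        self.coef += x * np.exp(-2j * np.pi * self.freqs * t)
        self.n += 1

    def psd(self):
        return np.abs(self.coef / max(self.n, 1)) ** 2

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 60, 300))          # irregular beat times (s)
x = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.standard_normal(t.size)

rft = RecursiveDFT(np.linspace(0.05, 0.5, 64))
for ti, xi in zip(t, x):
    rft.update(ti, xi)                         # O(N) per incoming sample

print("spectral peak at ≈", rft.freqs[np.argmax(rft.psd())], "Hz")
```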

  1. PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS

    Energy Technology Data Exchange (ETDEWEB)

    He, Shiyuan; Huang, Jianhua Z.; Long, James [Department of Statistics, Texas A and M University, College Station, TX (United States); Yuan, Wenlong; Macri, Lucas M., E-mail: lmacri@tamu.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX (United States)

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate the periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm to the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set, as measured by period recovery rate and the quality of the resulting period–luminosity relations.
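The grid-search component can be illustrated with a plain least-squares sinusoid fit over a dense period grid, as a stand-in for the full Gaussian process likelihood; the sparse light curve below is simulated.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 1500, 60))        # sparse, irregular epochs (days)
true_p = 332.0
y = 1.5 * np.sin(2 * np.pi * t / true_p + 0.4) + 0.3 * rng.standard_normal(60)

best_p, best_sse = None, np.inf
for p in np.linspace(100, 1000, 5000):       # dense period grid
    X = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * t / p),
                         np.cos(2 * np.pi * t / p)])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = res[0] if res.size else np.sum((y - X @ beta) ** 2)
    if sse < best_sse:
        best_p, best_sse = p, sse

print(f"recovered period ≈ {best_p:.1f} d (true {true_p} d)")
```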

  2. Estimation of salt intake from spot urine samples in patients with chronic kidney disease

    Directory of Open Access Journals (Sweden)

    Ogura Makoto

    2012-06-01

    Full Text Available Abstract Background High salt intake in patients with chronic kidney disease (CKD) may cause high blood pressure and increased albuminuria. Although estimation of salt intake is essential, there are no easy methods to estimate actual salt intake. Methods Salt intake was assessed by determining urinary sodium excretion from the collected urine samples. Salt intake was estimated from spot urine samples using Tanaka's formula. The correlation between estimated and measured sodium excretion was evaluated by Pearson's correlation coefficients. Performance of the equation was assessed by median bias, interquartile range (IQR), proportion of estimates within 30% deviation of measured sodium excretion (P30) and root mean square error (RMSE). The sensitivity and specificity of estimated against measured sodium excretion were separately assessed by receiver-operating characteristic (ROC) curves. Results A total of 334 urine samples from 96 patients were examined. Mean age was 58 ± 16 years, and estimated glomerular filtration rate (eGFR) was 53 ± 27 mL/min. Among these patients, 35 had CKD stage 1 or 2, 39 had stage 3, and 22 had stage 4 or 5. Estimated sodium excretion significantly correlated with measured sodium excretion (R = 0.52, P ... 170 mEq/day; AUC 0.835). Conclusions The present study demonstrated that spot urine can be used to estimate sodium excretion, especially in patients with low eGFR.
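Tanaka's formula, as commonly cited, predicts 24-hour sodium excretion from a spot sample via a predicted 24-hour creatinine excretion. The coefficients below follow the usual statement of the formula (Tanaka et al. 2002) and should be verified against the original publication before use.

```python
def tanaka_24h_sodium(spot_na_meq_l, spot_cr_mg_dl, age, weight_kg, height_cm):
    """Estimated 24-h urinary sodium excretion in mEq/day (coefficients assumed)."""
    # Predicted 24-h creatinine excretion (mg/day)
    pr_cr = -2.04 * age + 14.89 * weight_kg + 16.14 * height_cm - 2244.45
    # Spot Na/Cr ratio scaled by predicted creatinine (Cr converted to mg/L)
    return 21.98 * (spot_na_meq_l / (spot_cr_mg_dl * 10) * pr_cr) ** 0.392

na_meq = tanaka_24h_sodium(120, 150, age=58, weight_kg=62, height_cm=160)
print(f"≈ {na_meq:.0f} mEq/day ≈ {na_meq * 0.0585:.1f} g salt/day")
```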

  3. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    Science.gov (United States)

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

    Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on comparing the socio-economic status (SES) and risk behaviors of the samples to each other, to known AIDS cases and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were of low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  4. Automated CBED processing: Sample thickness estimation based on analysis of zone-axis CBED pattern

    Energy Technology Data Exchange (ETDEWEB)

    Klinger, M., E-mail: klinger@post.cz; Němec, M.; Polívka, L.; Gärtnerová, V.; Jäger, A.

    2015-03-15

    An automated processing of convergent beam electron diffraction (CBED) patterns is presented. The proposed methods are used in an automated tool for estimating the thickness of transmission electron microscopy (TEM) samples by matching an experimental zone-axis CBED pattern with a series of patterns simulated for known thicknesses. The proposed tool detects CBED disks, localizes a pattern in detected disks and unifies the coordinate system of the experimental pattern with the simulated one. The experimental pattern is then compared disk-by-disk with a series of simulated patterns each corresponding to different known thicknesses. The thickness of the most similar simulated pattern is then taken as the thickness estimate. The tool was tested on [0 1 1] Si, [0 1 0] α-Ti and [0 1 1] α-Ti samples prepared using different techniques. Results of the presented approach were compared with thickness estimates based on analysis of CBED patterns in two beam conditions. The mean difference between these two methods was 4.1% for the FIB-prepared silicon samples, 5.2% for the electro-chemically polished titanium and 7.9% for Ar⁺ ion-polished titanium. The proposed techniques can also be employed in other established CBED analyses. Apart from the thickness estimation, it can potentially be used to quantify lattice deformation, structure factors, symmetry, defects or extinction distance. - Highlights: • Automated TEM sample thickness estimation using zone-axis CBED is presented. • Computer vision and artificial intelligence are employed in CBED processing. • This approach reduces operator effort, analysis time and increases repeatability. • Individual parts can be employed in other analyses of CBED/diffraction pattern.

  5. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species

  6. Reliable Quantification of the Potential for Equations Based on Spot Urine Samples to Estimate Population Salt Intake

    DEFF Research Database (Denmark)

    Huang, Liping; Crino, Michelle; Wu, Jason Hy

    2016-01-01

    BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effects of factors such as ethnicity, sex, age, body mass index, antihypertensive drug use, health status ... to a standard format. Individual participant records will be compiled and a series of analyses will be completed to: (1) compare existing equations for estimating 24-hour salt intake from spot urine samples with 24-hour urine samples, and assess the degree of bias according to key demographic and clinical ...

  7. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  8. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations which varied in density and degree of spatial clustering. Because of the logistics and costs of large-river sampling and the spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate at which occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on sampling designs for freshwater mussels in the UMR, and presumably other large rivers.

  9. Automatic sampling for unbiased and efficient stereological estimation using the proportionator in biological studies

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer who provides the correct count. Even though the image analysis ... cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling.

  10. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
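
    A minimal sketch can make the design logic concrete. The snippet below (in Python rather than the authors' published R code) implements the cross-sectional incidence approximation described above and a crude precision-driven search for the survey size; the function names, target coefficient of variation and example inputs are our own, and the sketch deliberately ignores the duration uncertainty that the paper's methods account for.

    ```python
    import math

    def incidence_estimate(n_early, n_uninfected, mean_duration_years):
        """Approximate incidence: early-stage count per uninfected person,
        scaled by the mean time spent in the biomarker-defined early stage."""
        return (n_early / n_uninfected) / mean_duration_years

    def required_sample_size(incidence, mean_duration_years, prevalence,
                             target_cv=0.25):
        """Smallest survey size whose binomial (delta-method) coefficient of
        variation for the incidence estimate meets the target."""
        for n in range(1000, 5_000_000, 1000):
            n_uninf = n * (1 - prevalence)
            p_early = incidence * mean_duration_years   # among the uninfected
            expected_early = n_uninf * p_early
            cv = math.sqrt((1 - p_early) / expected_early)
            if cv <= target_cv:
                return n
        return None

    # Example: 1%/year incidence, 0.5-year mean early-stage duration, 20% prevalence
    print(required_sample_size(0.01, 0.5, 0.20))   # -> 4000
    ```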

  11. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
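
    Both estimators contrasted in this abstract are short enough to sketch. The snippet below (our own construction, with synthetic exponential "rainfall") compares an empirical quantile based on the Weibull plotting position, which saturates at the sample maximum, with a parametric GPD quantile fitted by sample L-moments, the lower bound fixed at zero:

    ```python
    import numpy as np

    def empirical_quantile(x, F):
        """Weibull plotting position p_i = i/(n+1); np.interp saturates at the
        sample maximum, so return periods beyond n are not representable."""
        xs = np.sort(x)
        p = np.arange(1, len(xs) + 1) / (len(xs) + 1.0)
        return np.interp(F, p, xs)

    def gpd_lmoment_quantile(x, F):
        """GPD with lower bound 0 fitted by L-moments (Hosking's kappa form):
        kappa = l1/l2 - 2, alpha = l1*(1 + kappa),
        quantile x(F) = (alpha/kappa)*(1 - (1-F)**kappa)."""
        xs = np.sort(x)
        n = len(xs)
        b0 = xs.mean()
        b1 = np.sum(np.arange(n) / (n - 1.0) * xs) / n   # unbiased b1
        l1, l2 = b0, 2.0 * b1 - b0
        kappa = l1 / l2 - 2.0
        alpha = l1 * (1.0 + kappa)
        if abs(kappa) < 1e-8:                            # exponential limit
            return -alpha * np.log(1.0 - F)
        return alpha / kappa * (1.0 - (1.0 - F) ** kappa)

    rng = np.random.default_rng(1)
    sample = rng.exponential(scale=5.0, size=30)         # small synthetic sample
    for F in (0.9, 0.99, 0.999):
        print(F, round(float(empirical_quantile(sample, F)), 2),
              round(float(gpd_lmoment_quantile(sample, F)), 2))
    ```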

  13. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    Science.gov (United States)

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs), we obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants, together with CLABSI numerators and facility and location characteristics, from the National Healthcare Safety Network (NHSN). The study covered a convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had a CLD percentage error of ±5% or less. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
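
    The estimator under evaluation is simply the mean of the sampled days scaled to the month, so its behavior is easy to reproduce. A toy illustration with synthetic daily counts (the Poisson mean and the sampled weekdays are arbitrary choices of ours):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    daily_cld = rng.poisson(lam=14, size=30)    # synthetic daily central line counts
    actual_cld = daily_cld.sum()

    def estimated_cld(daily_counts, sample_days):
        """Scale the mean of the sampled days to the full month."""
        return np.mean(daily_counts[sample_days]) * len(daily_counts)

    # One single-day and one day-pair sample per week (0-indexed days)
    single = estimated_cld(daily_cld, [2, 9, 16, 23])
    pair = estimated_cld(daily_cld, [2, 3, 9, 10, 16, 17, 23, 24])

    for name, est in [("single-day", single), ("day-pair", pair)]:
        pct_error = 100 * (est - actual_cld) / actual_cld
        print(f"{name}: estimate={est:.0f}, actual={actual_cld}, error={pct_error:+.1f}%")
    ```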

  14. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell.

  15. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  16. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

    Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general, the simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via a simple random sampling procedure makes calculations much simpler, but this assumption is often violated in practice, as the data do not always yield simple random ...

  17. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.

  18. Non-Destructive Lichen Biomass Estimation in Northwestern Alaska: A Comparison of Methods

    Science.gov (United States)

    Rosso, Abbey; Neitlich, Peter; Smith, Robert J.

    2014-01-01

    Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa “community” samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m−2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska. PMID:25079228
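
    The zero-intercept (through-the-origin) regression used for the biomass-to-volume relationship has a one-line closed form. A sketch with synthetic numbers (the slope, noise level and sample size are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    cover = rng.uniform(20, 100, 40)                  # % cover per quadrat
    height = rng.uniform(1, 6, 40)                    # lichen mat height (cm)
    volume = cover / 100.0 * height                   # cover-weighted height
    biomass = 250.0 * volume + rng.normal(0, 40, 40)  # g/m2 (synthetic truth)

    # Zero-intercept least squares: slope = sum(x*y) / sum(x*x)
    slope = float(volume @ biomass) / float(volume @ volume)
    print(f"bulk-density slope: {slope:.1f} g/m2 per unit volume")
    print("predicted biomass at 85% cover, 3 cm height:",
          round(slope * 0.85 * 3, 1), "g/m2")
    ```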

  1. Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.

    Science.gov (United States)

    Niioka, Takenori

    2011-03-01

    Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotypes had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for the sake of patient acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09 and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours through the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ - 9917.3 × G1 - 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). Those strategies, plasma concentration monitoring at one or two time-points, might be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly in the case of coadministration of CYP mediators.
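
    The published prediction formulas transcribe directly into code. A sketch follows; the concentration units are those used to derive the original regressions (not restated in this summary), and the genotype labels are our shorthand:

    ```python
    def auc_opz(c6h):
        """Omeprazole: AUC = 9.24 * C(6h) + 2638.03"""
        return 9.24 * c6h + 2638.03

    def auc_lpz(c6h):
        """Lansoprazole: AUC = 12.32 * C(6h) + 3276.09"""
        return 12.32 * c6h + 3276.09

    def auc_rpz(c3h, c6h):
        """Rabeprazole: AUC = 1.39 * C(3h) + 7.17 * C(6h) + 344.14"""
        return 1.39 * c3h + 7.17 * c6h + 344.14

    def auc_racemic_lpz(r_c3h, s_c3h, genotype):
        """Enantiomer-based 3-hour formula with CYP2C19 genotype dummies;
        'homEM'/'hetEM' are our shorthand for homozygous/heterozygous
        extensive metabolizers (all other genotypes: both dummies zero)."""
        g1 = 1 if genotype == "homEM" else 0
        g2 = 1 if genotype == "hetEM" else 0
        return 6.5 * r_c3h + 13.7 * s_c3h - 9917.3 * g1 - 14387.2 * g2 + 7103.6
    ```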

  2. Comparison of regional index flood estimation procedures based on the extreme value type I distribution

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas Rodding; Rosbjerg, Dan

    2002-01-01

    A comparison of different methods for estimating T-year events is presented, all based on the Extreme Value Type I distribution. Series of annual maximum flood from ten gauging stations on the New Zealand South Island have been used. Different methods of predicting the 100-year event and the connected uncertainty have been applied: at-site estimation and regional index-flood estimation with and without accounting for intersite correlation, using either the method of moments or the method of probability weighted moments for parameter estimation. Furthermore, estimation at ungauged sites was ... the prediction uncertainty, and the presence of intersite correlation tends to increase the uncertainty. A simulation study revealed that in regional index-flood estimation the method of probability weighted moments is preferable to method-of-moments estimation with regard to bias and RMSE.

  3. A model for estimating the minimum number of offspring to sample in studies of reproductive success.

    Science.gov (United States)

    Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M

    2011-01-01

    Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ2 goodness-of-fit test p value < 0.05) from the known reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
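
    The second stage of such a model is straightforward to reproduce: draw per-parent offspring counts from a negative binomial with a given mean and variance, subsample offspring as if assigning parentage, and test the sample against uniform offspring production. A minimal sketch with invented parameter values:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def sampled_offspring_counts(mu, var, n_parents, n_sampled):
        """Stage 2: negative binomial reproductive success (var > mu), then a
        random subsample of offspring 'assigned parentage'."""
        r = mu**2 / (var - mu)               # NB size parameter: var = mu + mu^2/r
        p = r / (r + mu)
        counts = rng.negative_binomial(r, p, size=n_parents)
        parents = np.repeat(np.arange(n_parents), counts)   # one entry per offspring
        sampled = rng.choice(parents, size=min(n_sampled, parents.size), replace=False)
        return np.bincount(sampled, minlength=n_parents)

    observed = sampled_offspring_counts(mu=5.0, var=25.0, n_parents=50, n_sampled=100)
    # Reject the null hypothesis of uniform offspring production across parents?
    result = stats.chisquare(observed)
    print(result.statistic, result.pvalue)
    ```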

  4. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    Science.gov (United States)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize data other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  5. Non-parametric adaptive importance sampling for the probability estimation of a launcher impact position

    International Nuclear Information System (INIS)

    Morio, Jerome

    2011-01-01

    Importance sampling (IS) is a useful simulation technique to estimate critical probabilities with better accuracy than Monte Carlo methods. It consists in generating random weighted samples from an auxiliary distribution rather than from the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF able to generate the rare random events more frequently. The optimisation of this auxiliary distribution is often very difficult in practice. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of the spatial launcher impact position, which has become an increasingly important issue in the field of aeronautics.
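
    The principle behind IS is compact enough to show in a few lines. The sketch below uses a fixed Gaussian auxiliary density shifted into the rare-event region, a deliberately simple parametric stand-in for the non-parametric adaptive scheme developed in the paper:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    threshold = 4.0                     # estimate P(X > 4) for X ~ N(0, 1)
    n = 10_000

    # Auxiliary density shifted into the rare-event region
    aux = stats.norm(loc=threshold, scale=1.0)
    x = aux.rvs(size=n, random_state=rng)

    # IS estimator: mean of indicator times likelihood ratio f(x)/g(x)
    weights = stats.norm.pdf(x) / aux.pdf(x)
    p_is = np.mean((x > threshold) * weights)

    print(f"IS estimate:  {p_is:.3e}")
    print(f"exact value:  {stats.norm.sf(threshold):.3e}")
    # A crude Monte Carlo run of the same size typically sees no exceedances:
    print("crude MC hits:", int((rng.standard_normal(n) > threshold).sum()))
    ```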

  6. Validation and comparison of two sampling methods to assess dermal exposure to drilling fluids and crude oil.

    Science.gov (United States)

    Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez

    2014-06-01

    Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  7. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    Science.gov (United States)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

    The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustaceans. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands, and Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and the Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.
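
    The richness statistics cited here depend only on the vector of OTU abundances. A minimal sketch of Chao1 (the abundance counts below are fabricated to roughly match the reported 135 OTUs and 44% singleton fraction):

    ```python
    from collections import Counter

    def chao1(otu_counts):
        """Chao1 lower-bound richness: S_obs + f1**2 / (2*f2), where f1 and f2
        are the numbers of singleton and doubleton OTUs."""
        s_obs = len(otu_counts)
        freq = Counter(otu_counts)
        f1, f2 = freq.get(1, 0), freq.get(2, 0)
        if f2 == 0:                       # bias-corrected form avoids div by zero
            return s_obs + f1 * (f1 - 1) / 2.0
        return s_obs + f1**2 / (2.0 * f2)

    # 135 OTUs with roughly the reported singleton fraction (counts invented)
    counts = [1] * 59 + [2] * 20 + [5] * 56
    print(chao1(counts))                  # > 135: unseen diversity implied
    ```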

  8. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

    The intensity of the point process is assumed to be of log-linear form β + θ⊤z(u), where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its finite-sample properties in comparison with the maximum first-order composite likelihood estimator, considering various inhomogeneous spatial point process models and dimensions, as well as settings where z is completely or only partially known.

  10. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    Energy Technology Data Exchange (ETDEWEB)

    Parresol, Bernard, R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  11. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.

  12. A Comparison of Regression Techniques for Estimation of Above-Ground Winter Wheat Biomass Using Near-Surface Spectroscopy

    Directory of Open Access Journals (Sweden)

    Jibo Yue

    2018-01-01

    Full Text Available Above-ground biomass (AGB) provides a vital link between solar energy consumption and yield, so its correct estimation is crucial to accurately monitor crop growth and predict yield. In this work, we estimate AGB by using 54 vegetation indexes (e.g., Normalized Difference Vegetation Index, Soil-Adjusted Vegetation Index) and eight statistical regression techniques: artificial neural network (ANN), multivariable linear regression (MLR), decision-tree regression (DT), boosted binary regression tree (BBRT), partial least squares regression (PLSR), random forest regression (RF), support vector machine regression (SVM), and principal component regression (PCR), which are used to analyze hyperspectral data acquired by using a field spectrophotometer. The vegetation indexes (VIs) determined from the spectra were first used to train regression techniques for modeling and validation to select the best VI input, and then summed with white Gaussian noise to study how remote sensing errors affect the regression techniques. Next, the VIs were divided into groups of different sizes by using various sampling methods for modeling and validation to test the stability of the techniques. Finally, the AGB was estimated by using a leave-one-out cross validation with these powerful techniques. The results of the study demonstrate that, of the eight techniques investigated, PLSR and MLR perform best in terms of stability and are most suitable when high-accuracy and stable estimates are required from relatively few samples. In addition, RF is extremely robust against noise and is best suited to deal with repeated observations involving remote-sensing data (i.e., data affected by atmosphere, clouds, observation times, and/or sensor noise). Finally, the leave-one-out cross-validation method indicates that PLSR provides the highest accuracy (R2 = 0.89, RMSE = 1.20 t/ha, MAE = 0.90 t/ha, NRMSE = 0.07, CV(RMSE) = 0.18); thus, PLSR is best suited for works requiring high ...

  13. Active/passive microwave sensor comparison of MIZ-ice concentration estimates. [Marginal Ice Zone (MIZ)

    Science.gov (United States)

    Burns, B. A.; Cavalieri, D. J.; Keller, M. R.

    1986-01-01

    Active and passive microwave data collected during the 1984 summer Marginal Ice Zone Experiment in the Fram Strait (MIZEX 84) are used to compare ice concentration estimates derived from synthetic aperture radar (SAR) data to those obtained from passive microwave imagery at several frequencies. The comparison is carried out to evaluate SAR performance against the more established passive microwave technique, and to investigate discrepancies in terms of how ice surface conditions, imaging geometry, and choice of algorithm parameters affect each sensor. Active and passive estimates of ice concentration agree on average to within 12%. Estimates from the multichannel passive microwave data show best agreement with the SAR estimates because the multichannel algorithm effectively accounts for the range in ice floe brightness temperatures observed in the MIZ.

  14. Estimating cross-validatory predictive p-values with integrated importance sampling for disease mapping models.

    Science.gov (United States)

    Li, Longhai; Feng, Cindy X; Qiu, Shi

    2017-06-30

    An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS and three other existing methods in the literature with two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the existing three methods, namely posterior predictive checking, ordinary importance sampling, and the ghosting method by Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.

  15. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of three units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed-area sample plots in a simulation with two forest stands. ...

  16. Estimation of DSGE Models under Diffuse Priors and Data-Driven Identification Constraints

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a sequential Monte Carlo (SMC) method augmented with an importance sampling step for estimation of DSGE models. In addition to being theoretically well motivated, the new method facilitates the assessment of estimation accuracy. Furthermore, in order to alleviate the problem of multimodal posterior distributions caused by parameter redundancy, ... the properties of the estimation method, and shows how the problem of multimodal posterior distributions caused by parameter redundancy is eliminated by identification constraints. Out-of-sample forecast comparisons as well as Bayes factors lend support to the constrained model.

  17. Self-estimates of attention performance

    Directory of Open Access Journals (Sweden)

    CHRISTOPH MENGELKAMP

    2007-09-01

    Full Text Available In research on self-estimated IQ, gender differences are often found. The present study investigates whether these findings also hold for the self-estimation of attention. A sample of 100 female and 34 male students was asked to fill in the d2 test of attention. After taking the test, the students estimated their results in comparison to their fellow students. The results show that the students underestimate their percent rank compared with the actual percent rank they achieved in the test, but estimate their rank order fairly accurately. Moreover, males estimate their performance distinctly higher than females do. This last result remains true even when the actual test score is statistically controlled. The results are discussed with regard to research on positive illusions and gender stereotypes.

  18. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  19. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  20. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
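
    PERMANOVA itself reduces to a permutation test on a pseudo-F statistic computed from the distance matrix. A compact one-way sketch following Anderson's (2001) partition of squared distances (not the authors' R package):

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def permanova(D, groups, n_perm=999, seed=0):
        """One-way PERMANOVA: pseudo-F from partitioned squared distances,
        p-value by permuting group labels."""
        D2 = np.asarray(D) ** 2
        groups = np.asarray(groups)
        N, a = len(groups), len(np.unique(groups))
        ss_total = D2[np.triu_indices(N, 1)].sum() / N

        def pseudo_f(g):
            ss_within = 0.0
            for lvl in np.unique(g):
                idx = np.where(g == lvl)[0]
                sub = D2[np.ix_(idx, idx)]
                ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
            return ((ss_total - ss_within) / (a - 1)) / (ss_within / (N - a))

        rng = np.random.default_rng(seed)
        f_obs = pseudo_f(groups)
        f_perm = np.array([pseudo_f(rng.permutation(groups)) for _ in range(n_perm)])
        p = (1 + (f_perm >= f_obs).sum()) / (n_perm + 1)
        return f_obs, p

    # Two groups of 10 samples in a 5-dimensional feature space, shifted means
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (10, 5)), rng.normal(0.8, 1.0, (10, 5))])
    print(permanova(squareform(pdist(X)), [0] * 10 + [1] * 10))
    ```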

  1. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    Science.gov (United States)

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to "purify" multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high-frequency noise signals.
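
    The abstract gives enough detail for a hedged reconstruction of the idea: the validity window widens with the number of preceding sampling intervals that contained no Bluetooth record, so sparse data are not over-filtered. The structure, thresholds and median/MAD outlier rule below are our own choices, not the paper's:

    ```python
    import numpy as np

    def adaptive_filter(records, interval=300.0, base_window=4, k=2.5):
        """records: (timestamp_s, travel_time_s) pairs sorted by timestamp.
        The validity window widens with the number of preceding sampling
        intervals that contained no accepted record, so sparse periods are
        not over-filtered; a median/MAD rule rejects outliers."""
        accepted = []
        for t, tt in records:
            gaps = int((t - accepted[-1][0]) // interval) if accepted else 0
            window = (base_window + gaps) * interval
            recent = [v for (u, v) in accepted if u > t - window]
            if len(recent) >= 3:
                med = float(np.median(recent))
                mad = float(np.median(np.abs(np.asarray(recent) - med))) or 1.0
                if abs(tt - med) > k * mad:
                    continue                 # reject as high-frequency noise
            accepted.append((t, tt))
        return accepted

    # Ten plausible records, then an implausible 1200 s travel time
    recs = [(i * 300, 300 + (i % 3)) for i in range(10)] + [(3000, 1200)]
    print(len(adaptive_filter(recs)))        # -> 10: the outlier is dropped
    ```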

  3. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    Science.gov (United States)

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-05

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. © 2016 The Authors.
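
    The core of TRiPS can be paraphrased in a few lines: fit a single Poisson sampling rate to the per-species occurrence counts (zero-truncated, since unobserved species contribute no counts), then inflate observed richness by the implied detection probability. This is our sketch of the model, not the authors' code:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def trips(occurrence_counts):
        """occurrence_counts: number of occurrences of each *observed* species
        (all >= 1). Returns (richness estimate, lambda, detection probability)."""
        k = np.asarray(occurrence_counts, dtype=float)

        def neg_loglik(lam):
            # zero-truncated Poisson: log P(k) = k*log(lam) - log(e**lam - 1) + const
            return -(k.sum() * np.log(lam) - k.size * np.log(np.expm1(lam)))

        lam = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded").x
        p_detect = -np.expm1(-lam)            # P(at least one occurrence)
        return k.size / p_detect, lam, p_detect

    # Synthetic check: 3000 true species sampled at rate 1.2 each
    counts = np.random.default_rng(3).poisson(1.2, size=3000)
    richness, lam, p = trips(counts[counts > 0])
    print(int((counts > 0).sum()), round(richness), round(lam, 2), round(p, 2))
    ```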

  4. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.

  5. Estimation of technetium 99m mercaptoacetyltriglycine plasma clearance by use of one single plasma sample

    International Nuclear Information System (INIS)

    Mueller-Suur, R.; Magnusson, G.; Karolinska Inst., Stockholm; Bois-Svensson, I.; Jansson, B.

    1991-01-01

    Recent studies have shown that technetium 99m mercaptoacetyltriglycine (MAG-3) is a suitable replacement for iodine 131 or 123 hippurate in gamma-camera renography. Also, the determination of its clearance is of value, since it correlates well with that of hippurate and thus may be an indirect measure of renal plasma flow. In order to simplify the clearance method, we developed formulas for the estimation of plasma clearance of MAG-3 based on a single plasma sample and compared them with the multiple-sample method based on 7 plasma samples. The correlation to effective renal plasma flow (ERPF) (according to Tauxe's method, using iodine 123 hippurate), which ranged from 75 to 654 ml/min per 1.73 m2, was determined in these patients. Using the developed regression equations, the error of estimate for the simplified clearance method was acceptably low (18-14 ml/min) when the single plasma sample was taken 44-64 min post-injection. Formulas for different sampling times at 44, 48, 52, 56, 60 and 64 min are given, and we recommend 60 min as optimal, with an error of estimate of 15.5 ml/min. The correlation between the MAG-3 clearances and ERPF was high (r=0.90). Since normal values for MAG-3 clearance are not yet available, transformation to estimated ERPF values by the regression equation (ERPF = 1.86 × C(MAG-3) + 4.6) could be of clinical value in order to compare with the normal ERPF values given in the literature. (orig.)
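
    The reported regression transform is trivial to apply; a one-function sketch using the equation quoted above (units as in the abstract):

    ```python
    def erpf_from_mag3_clearance(c_mag3):
        """Estimated ERPF (ml/min per 1.73 m2) from a single-sample MAG-3
        plasma clearance, via the reported regression ERPF = 1.86*C + 4.6.
        At the recommended 60-min sampling time the abstract quotes an
        error of estimate of 15.5 ml/min for the clearance itself."""
        return 1.86 * c_mag3 + 4.6

    print(erpf_from_mag3_clearance(200.0))   # -> 376.6
    ```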

  6. Comparison of Proteins in Whole Blood and Dried Blood Spot Samples by LC/MS/MS

    Science.gov (United States)

    Chambers, Andrew G.; Percy, Andrew J.; Hardie, Darryl B.; Borchers, Christoph H.

    2013-09-01

    Dried blood spot (DBS) sampling methods are desirable for population-wide biomarker screening programs because of their ease of collection, transportation, and storage. Immunoassays are traditionally used to quantify endogenous proteins in these samples but require a separate assay for each protein. Recently, targeted mass spectrometry (MS) has been proposed for generating highly multiplexed assays for biomarker proteins in DBS samples. In this work, we report the first comparison of proteins in whole blood and DBS samples using an untargeted MS approach. The average number of proteins identified in undepleted whole blood and DBS samples by liquid chromatography (LC)/MS/MS was 223 and 253, respectively. Protein identification repeatability was between 77% and 92% within replicates, and the majority of these repeatedly identified proteins (70%) were observed in both sample formats. Proteins identified exclusively in the liquid or the dried fluid spot format showed no bias with respect to molecular weight, isoelectric point, aliphatic index, or grand average hydrophobicity. In addition, we extended this comparison to include proteins in matching plasma and serum samples and their dried fluid spot equivalents, dried plasma spots (DPS) and dried serum spots (DSS). This work begins to define the accessibility of endogenous proteins in dried fluid spot samples for analysis by MS and is useful in evaluating the scope of this new approach.

  8. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio.

    Science.gov (United States)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now no study has evaluated how efficiently the sampling methods commonly used in biodiversity surveys estimate the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio, and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge of a species' movement patterns and detectability is important for guiding field studies that aim to understand sex ratio related patterns.

  9. Comparison of reactivity estimation performance between two extended Kalman filtering schemes

    International Nuclear Information System (INIS)

    Peng, Xingjie; Cai, Yun; Li, Qing; Wang, Kan

    2016-01-01

    Highlights: • The performances of two EKF schemes using different Jacobian matrices are compared. • Numerical simulations are used for the validation and comparison of these two EKF schemes. • The simulation results show that the EKF scheme adopted in this paper performs better than the one adopted in previous literature. - Abstract: The extended Kalman filtering (EKF) technique has been utilized in the estimation of reactivity, an important parameter indicating the status of a nuclear reactor. In this paper, the performances of two EKF schemes using different Jacobian matrices are compared. Numerical simulations are used for the validation and comparison of these two EKF schemes, and the results show that the Jacobian matrix obtained directly from the discrete-time state model performs better than the discretized form of the Jacobian matrix obtained from the continuous-time state model.

  10. Comparison of breast percent density estimation from raw versus processed digital mammograms

    Science.gov (United States)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system, and image post-processing was performed using the PremiumView™ algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test; intra-reader variability was assessed with a repeat read on the same dataset. Our results show that breast PD% measurements from raw and post-processed DM images are highly correlated (r=0.98, R²=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and post-processed images showed a statistically significant difference of 1.2% (p=0.006). Our results suggest that the relatively small absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. It may therefore be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density into the breast cancer risk assessment models used in clinical practice.
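
    The statistical comparison described above (Pearson correlation, linear regression, paired t-test) can be sketched in a few lines; the paired PD% readings below are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired PD% readings from raw and post-processed images
raw = np.array([12.1, 25.3, 33.0, 18.7, 41.2, 29.5, 15.8, 22.4])
processed = raw + np.random.default_rng(0).normal(1.2, 1.0, raw.size)

r, p_r = stats.pearsonr(raw, processed)      # correlation
t, p_t = stats.ttest_rel(raw, processed)     # paired t-test
slope, intercept = np.polyfit(raw, processed, 1)  # linear regression

print(f"Pearson r={r:.3f} (p={p_r:.3g}); paired t-test p={p_t:.3g}")
print(f"linear fit: processed = {slope:.2f}*raw + {intercept:.2f}")
```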

  11. The efficiency of modified jackknife and ridge type regression estimators: a comparison

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2008-09-01

    A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well known estimation procedures are often suggested in the literature: Generalized Ridge Regression (GRR) estimation, suggested by Hoerl and Kennard, and Jackknifed Ridge Regression (JRR) estimation, suggested by Singh et al. GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, the Modified Jackknife Ridge Regression (MJR) estimator. It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated the standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and that it is superior to both the GRR and JRR estimators under the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.
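
    A minimal sketch of the baseline GRR idea with a scalar ridge parameter; the jackknifed and modified-jackknife refinements the paper proposes build on this estimator and are not reproduced here.

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator: beta = (X'X + kI)^{-1} X'y.
    With k = 0 this is OLS, which is unstable under multicollinearity;
    k > 0 shrinks the estimate and reduces its sampling variance.
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Collinear design: x2 is nearly a copy of x1
rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + rng.normal(scale=0.01, size=100)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=100)

print("OLS:  ", ridge(X, y, 0.0))   # erratic coefficients
print("Ridge:", ridge(X, y, 1.0))   # shrunk, lower-variance estimate
```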

  12. Comparison among Models to Estimate the Shielding Effectiveness Applied to Conductive Textiles

    Directory of Open Access Journals (Sweden)

    Alberto Lopez

    2013-01-01

    The purpose of this paper is to present a comparison between two models, and corresponding measurements, for calculating the shielding effectiveness of electromagnetic barriers, applied to conductive textiles. The models treat a conductive textile as either (1) a wire mesh screen or (2) a compact material. The objective is therefore to analyze the models in order to determine which one better approximates electromagnetic shielding fabrics. To provide results for the comparison, the shielding effectiveness of the sample has been measured by means of the standard ASTM D4935-99.

  13. National comparison on volume sample activity measurement methods

    International Nuclear Information System (INIS)

    Sahagia, M.; Grigorescu, E.L.; Popescu, C.; Razdolescu, C.

    1992-01-01

    A national comparison of volume sample activity measurement methods may be regarded as a step toward establishing the traceability of environmental and food chain activity measurements to national standards. For this purpose, the Radionuclide Metrology Laboratory distributed Cs-137 and Cs-134 water-equivalent solid standard sources to 24 laboratories having responsibilities in this matter. Each laboratory was to measure the activity of the received source(s) using its own standards, equipment and methods and report the obtained results to the organizer. The 'measured activities' will be compared with the 'true activities'. A final report will be issued, which will evaluate the national level of precision of such measurements and give suggestions for improvement. (Author)

  14. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts.

  15. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along that direction; this allows a significant reduction of the variance of the failure probability estimator with respect, for example, to standard random sampling. Two issues remain open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve computational efficiency by reducing the variance of the failure probability estimator, no evidence has yet been given that accurate and precise failure probability estimates can be obtained with the number of samples reduced to below a few hundred, as may be required for long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
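
    A minimal sketch of the Line Sampling estimator in standard normal space, assuming a cheap limit-state function in place of the long-running T-H code; the important direction alpha is taken as given, and a crude bisection stands in for the one-dimensional solver.

```python
import numpy as np
from scipy.stats import norm

def line_sampling(g, alpha, n_lines=50, dim=2, seed=0):
    """Line Sampling failure-probability estimate in standard normal space.

    g: limit-state function, failure when g(x) < 0
    alpha: unit vector pointing towards the failure domain
    For each random sample, the component along alpha is replaced by a
    1-D search for the distance beta_i at which g crosses zero; each
    line then contributes Phi(-beta_i) to the estimate.
    """
    rng = np.random.default_rng(seed)
    alpha = alpha / np.linalg.norm(alpha)
    p_sum, p_sq = 0.0, 0.0
    for _ in range(n_lines):
        z = rng.standard_normal(dim)
        z_perp = z - (z @ alpha) * alpha          # remove alpha-component
        lo, hi = 0.0, 10.0                        # bisection for g = 0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if g(z_perp + mid * alpha) < 0:
                hi = mid
            else:
                lo = mid
        p_i = norm.cdf(-0.5 * (lo + hi))
        p_sum += p_i
        p_sq += p_i ** 2
    pf = p_sum / n_lines
    se = np.sqrt((p_sq / n_lines - pf ** 2) / n_lines)
    return pf, se

# Linear limit state g(x) = 3 - x1, exact pf = Phi(-3) ~ 1.35e-3
pf, se = line_sampling(lambda x: 3.0 - x[0], alpha=np.array([1.0, 0.0]))
print(f"pf = {pf:.2e} +/- {se:.1e}")
```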

  16. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region via the adaptive Metropolis algorithm, and the construction of the importance sampling density by support vector density estimation. The use of the adaptive Metropolis algorithm can effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples than conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analyses required to achieve a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.

  17. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods, and determined the best estimation method for different parameter values and sample sizes.
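
    As an illustration of one of the compared estimators, the maximum likelihood estimator of the two-parameter exponential distribution has a simple closed form (a sketch; the paper's other estimators and its MSE/TD comparison are not reproduced):

```python
import numpy as np

def two_param_exponential_mle(x):
    """MLE for the two-parameter exponential distribution
    f(x) = (1/theta) * exp(-(x - mu)/theta), x >= mu.
    The location mu_hat is the sample minimum; the scale theta_hat is
    the mean excess over it.
    """
    x = np.asarray(x, dtype=float)
    mu_hat = x.min()
    theta_hat = x.mean() - mu_hat
    return mu_hat, theta_hat

rng = np.random.default_rng(7)
sample = 5.0 + rng.exponential(scale=2.0, size=500)  # mu=5, theta=2
print(two_param_exponential_mle(sample))  # approx (5.0, 2.0)
```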

  18. Uncertainty Estimation of Neutron Activation Analysis in Zinc Elemental Determination in Food Samples

    International Nuclear Information System (INIS)

    Endah Damastuti; Muhayatun; Diah Dwiana L

    2009-01-01

    Besides fulfilling the requirements of the international standard ISO/IEC 17025:2005, uncertainty estimation should be carried out to increase the quality of, and confidence in, analysis results, and to establish the traceability of the results to SI units. Neutron activation analysis is a major technique used by the radiometry analysis laboratory and is included in its scope of accreditation under ISO/IEC 17025:2005; an uncertainty estimation for neutron activation analysis therefore needs to be carried out. Sample and standard preparation, as well as irradiation and measurement using gamma spectrometry, were the main activities contributing to the uncertainty. The components of the uncertainty sources are explained in detail. The expanded uncertainty was 4.0 mg/kg at a level of confidence of 95% (coverage factor = 2) for a Zn concentration of 25.1 mg/kg. Counting statistics of the sample and standard were the major contributions to the combined uncertainty. The uncertainty estimation is expected to increase the quality of the analysis results and can be applied further to other kinds of samples. (author)
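
    The combination of uncertainty components follows the usual root-sum-of-squares rule with a coverage factor; a sketch with hypothetical component values, not the paper's actual budget:

```python
import math

def expanded_uncertainty(value, components, k=2.0):
    """Combine relative standard-uncertainty components in quadrature
    and expand with coverage factor k (roughly 95% confidence for k=2).
    """
    u_rel = math.sqrt(sum(u ** 2 for u in components.values()))
    return k * u_rel * value

components = {          # relative standard uncertainties (hypothetical)
    'counting_sample':   0.05,
    'counting_standard': 0.04,
    'weighing':          0.005,
    'irradiation_geom':  0.02,
}
zn = 25.1  # mg/kg
print(f"U = {expanded_uncertainty(zn, components):.1f} mg/kg (k=2)")
```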

  19. Some remarks on estimating a covariance structure model from a sample correlation matrix

    OpenAIRE

    Maydeu Olivares, Alberto; Hernández Estrada, Adolfo

    2000-01-01

    A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations, but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one intended.

  20. Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.

    Science.gov (United States)

    Besio, W; Aakula, R; Dai, W

    2004-01-01

    Potentials on the body surface arising from the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information due to the smoothing effect of the volume conductor. This smoothing complicates the identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers have used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes; the FPM generalizes to a bipolar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated, and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much-improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration will allow better localization of the electrical activity of the heart than bi-electrode configurations.
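
    The FPM and NPM are standard finite-difference Laplacian stencils; a sketch verifying both on a quadratic field, using the record's 1/400 grid spacing:

```python
import numpy as np

def five_point_laplacian(f, h):
    """FPM: (f_E + f_W + f_N + f_S - 4 f_C) / h^2 at interior points."""
    return (f[1:-1, 2:] + f[1:-1, :-2] + f[2:, 1:-1] + f[:-2, 1:-1]
            - 4.0 * f[1:-1, 1:-1]) / h**2

def nine_point_laplacian(f, h):
    """NPM: adds the diagonal neighbours for a higher-accuracy stencil."""
    return ((4.0 * (f[1:-1, 2:] + f[1:-1, :-2] + f[2:, 1:-1] + f[:-2, 1:-1])
             + (f[2:, 2:] + f[2:, :-2] + f[:-2, 2:] + f[:-2, :-2])
             - 20.0 * f[1:-1, 1:-1]) / (6.0 * h**2))

# Test on u = x^2 + y^2, whose true Laplacian is 4 everywhere
h = 1.0 / 400
x = np.arange(0.0, 1.0 + h, h)
X, Y = np.meshgrid(x, x)
U = X**2 + Y**2
print(five_point_laplacian(U, h)[0, 0], nine_point_laplacian(U, h)[0, 0])
```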

  1. Systematic comparison of static and dynamic headspace sampling techniques for gas chromatography.

    Science.gov (United States)

    Kremser, Andreas; Jochmann, Maik A; Schmidt, Torsten C

    2016-09-01

    Six automated, headspace-based sample preparation techniques were used to extract volatile analytes from water with the goal of establishing a systematic comparison between commonly available instrumental alternatives. To that end, these six techniques were used in conjunction with the same gas chromatography instrument for analysis of a common set of volatile organic compound (VOC) analytes. The methods were divided into three classes: static sampling (by syringe or loop), static enrichment (SPME and PAL SPME Arrow), and dynamic enrichment (ITEX and trap sampling). For PAL SPME Arrow, different sorption phase materials were also included in the evaluation. To enable an effective comparison, method detection limits (MDLs), relative standard deviations (RSDs), and extraction yields were determined and are discussed for all techniques. While static sampling techniques exhibited extraction yields (approx. 10-20%) sufficient for reliable use down to approx. 100 ng L⁻¹, enrichment techniques displayed extraction yields of up to 80%, resulting in MDLs down to the picogram-per-liter range. RSDs for all techniques were below 27%. The choice among the different instrumental modes of operation (the aforementioned classes) was the most influential parameter in terms of extraction yields and MDLs. Individual methods within each class showed smaller deviations, and the smallest influences were observed when evaluating different sorption phase materials for the individual enrichment techniques. The option of selecting specialized sorption phase materials may, however, be more important when analyzing analytes with different properties, such as high polarity or the capability of specific molecular interactions. Graphical abstract: PAL SPME Arrow during the extraction of volatile analytes from the headspace of an aqueous sample.

  2. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Background: In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two-group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods: A log-normal distribution for the outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results: The method attained nominal power in simulation studies and compared favourably with the Mann-Whitney U test and with a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions: We recommend this sample size calculation approach for outcome data that are expected to be positively skewed and where a two-group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
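
    A sketch of the calculation under the stated assumptions: log-normal outcomes, the effect specified as a difference in medians, the variance supplied on the untransformed scale (here taken at a common median for both groups), and a normal-approximation two-sample formula:

```python
import math
from scipy.stats import norm

def n_per_group(median1, median2, var_untransformed, alpha=0.05, power=0.9):
    """Sample size per group for a two-sample t-test on log-transformed
    log-normal data, with the effect as a difference in medians and the
    variance given on the untransformed scale.

    For LN(mu, s2): median = exp(mu) and var = (exp(s2)-1)*exp(2*mu+s2),
    so s2 = ln((1 + sqrt(1 + 4*var/median^2)) / 2).
    """
    m = 0.5 * (median1 + median2)   # variance anchored at a common median
    s2 = math.log((1.0 + math.sqrt(1.0 + 4.0 * var_untransformed / m**2)) / 2.0)
    delta = math.log(median2) - math.log(median1)   # effect on log scale
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2.0 * s2 * z**2 / delta**2)

print(n_per_group(10.0, 15.0, var_untransformed=80.0))  # ~41 per group
```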

  3. Correcting for Systematic Bias in Sample Estimates of Population Variances: Why Do We Divide by n-1?

    Science.gov (United States)

    Mittag, Kathleen Cage

    An important topic presented in introductory statistics courses is the estimation of population parameters using samples. Students learn that when estimating population variances using sample data, we always get an underestimate of the population variance if we divide by n rather than n-1. One implication of this correction is that the degree of…
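
    The correction is easy to demonstrate by simulation: dividing by n is biased low by a factor of (n-1)/n, while Bessel's correction removes the bias.

```python
import numpy as np

# Dividing by n underestimates the population variance on average;
# dividing by n-1 (Bessel's correction) is unbiased.
rng = np.random.default_rng(0)
n, trials, true_var = 5, 200_000, 4.0

x = rng.normal(0.0, 2.0, size=(trials, n))
biased   = x.var(axis=1, ddof=0).mean()   # divide by n
unbiased = x.var(axis=1, ddof=1).mean()   # divide by n-1

print(f"true {true_var}, /n gives {biased:.3f}, /(n-1) gives {unbiased:.3f}")
# Expected: /n ~ 3.2 (= (n-1)/n * 4), /(n-1) ~ 4.0
```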

  4. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    Science.gov (United States)

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with a sampling intensity of 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%; SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal fat volume reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
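
    The point-counting estimator itself is a one-liner (Cavalieri principle): each counted point represents a known area in its slice, and each sampled slice a known thickness. A sketch with hypothetical counts:

```python
def point_count_volume(points_per_slice, area_per_point_mm2, slice_spacing_mm):
    """Cavalieri point-counting volume estimate:
        V = sum(P_i) * a_p * t
    where P_i are the points hitting the structure in sampled slice i,
    a_p is the grid area associated with one point, and t is the
    spacing between sampled slices.
    """
    return sum(points_per_slice) * area_per_point_mm2 * slice_spacing_mm

# e.g. points hitting visceral fat on every 8th CT slice (intensity 1/8)
hits = [14, 22, 31, 28, 19, 9]
v_mm3 = point_count_volume(hits, area_per_point_mm2=100.0, slice_spacing_mm=40.0)
print(f"VAF estimate {v_mm3 / 1000.0:.0f} cm^3")
```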

  5. Genotyping faecal samples of Bengal tiger Panthera tigris tigris for population estimation: A pilot study

    Directory of Open Access Journals (Sweden)

    Singh Lalji

    2006-10-01

    Background: The Bengal tiger Panthera tigris tigris, the national animal of India, is an endangered species. Estimating populations of such species is the main objective when designing conservation measures and when evaluating those already in place. Due to the tiger's cryptic and secretive behaviour, it is not possible to enumerate and monitor its populations through direct observation; instead, indirect methods have always been used for studying tigers in the wild. DNA methods based on non-invasive sampling have not so far been attempted for tiger population studies in India. We describe here a pilot study using DNA extracted from faecal samples of tigers for the purpose of population estimation. Results: In this study, PCR primers were developed based on tiger-specific variations in the mitochondrial cytochrome b gene for reliably distinguishing tiger faecal samples from those of sympatric carnivores. Microsatellite markers were developed for the identification of individual tigers, with a sibling probability of identity of 0.005 that can distinguish even closely related individuals with 99.9% certainty. The effectiveness of using field-collected tiger faecal samples for DNA analysis was evaluated by sampling, identifying and subsequently genotyping samples from two protected areas in southern India. Conclusion: Our results demonstrate the feasibility of using tiger faecal matter as a potential source of DNA for the population estimation of tigers in protected areas in India, in addition to the methods currently in use.

  6. A comparison of some radionuclide contents in environmental samples

    International Nuclear Information System (INIS)

    Shiraishi, Kunio; Muramatsu, Yasuyuki; Nakajima, Toshiyuki; Yamamoto, Masayoshi; Los, I.P.; Kamarikov, I.Yu.; Buzinny, M.G.

    1992-01-01

    Global contamination by radionuclides was likely induced through the Chernobyl nuclear accident in 1986. Environmental samples such as fish, milk, total diet samples etc., collected in Kiev, in the vicinity of Chernobyl, and Mito city, Japan, were analyzed for six selected radionuclides. After samples were dry-ashed, radioactivities of Cs-137, Cs-134, K-40, Co-60, and Mn-54 were determined by gamma-ray spectroscopy with a germanium detector. Strontium-90 was determined by low-background beta-spectrometry after chemical separations by fuming nitric acid. Concentrations of radioactivities in the Kiev samples, in the vicinity of the Chernobyl, are shown below. For comparison, values obtained in Japan, including literature values, are shown in parentheses. Radioactivities in airborne dust were: Sr-90, 63 mBq/m 3 (0.001-0.1); Cs-137, 26 mBg/m 3 (0.001-1); Cs-134, 4 mBq/m 3 ; Co-60, 4 mBq/m 3 ; Mn-54, 2 mBq/m 3 . Radioactivities of milk were as follows; Sr-90, 0.25-1.2 Bq/liter (0.01-0.1); Cs-137, 6-77 Bq/liter (0.01-1); Cs-134, 2-8 Bq/liter. Radioactivities of Sr-90 and Cs-137 for fish (carp), were found to be 3-75 Bq/kg-fresh (0.76-0.98) and 46-2130 Bq/kg-fresh (<0.8), respectively. It was observed that the daily intake of Sr-90 and Cs-137 were 0.25 Bq (0.1) and 0.43 Bq (0.1) per person, respectively. Due to the small number of samples analyzed it is premature to draw a firm conclusion from this study. However, the levels of radionuclides in environmental samples were found to differ between Kiev and Mito with wide ranges depending on the samples. (author)

  7. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately detect 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing the recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution in initiating such a mark-recapture effort, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  8. Sex Estimation From Modern American Humeri and Femora, Accounting for Sample Variance Structure

    DEFF Research Database (Denmark)

    Boldsen, J. L.; Milner, G. R.; Boldsen, S. K.

    2015-01-01

    Objectives: A new procedure for skeletal sex estimation based on humeral and femoral dimensions is presented, based on skeletons from the United States. The approach specifically addresses the problem that arises from a lack of variance homogeneity between the sexes, taking into account prior information about the sample's sex ratio, if known. Material and methods: Three measurements useful for estimating the sex of adult skeletons, the humeral and femoral head diameters and the humeral epicondylar breadth, were collected from 258 Americans born between 1893 and 1980 who died within the past several decades. Results: For measurements individually and collectively, the probabilities of being one sex or the other were generated for samples with an equal distribution of males and females, taking into account the variance structure of the original measurements. The combination providing the best...
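
    The core idea, posterior sex probabilities that respect unequal male/female variances and an informative prior sex ratio, can be sketched for a single measurement; the distribution parameters below are hypothetical, not the paper's estimates:

```python
import math

def p_male(x, mu_m, sd_m, mu_f, sd_f, prior_male=0.5):
    """Posterior probability that a skeleton is male given one
    measurement x, allowing unequal male/female variances and an
    informative prior sex ratio.
    """
    def normal_pdf(v, mu, sd):
        return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    lm = prior_male * normal_pdf(x, mu_m, sd_m)
    lf = (1.0 - prior_male) * normal_pdf(x, mu_f, sd_f)
    return lm / (lm + lf)

# Femoral head diameter (mm): hypothetical sex-specific distributions
print(f"{p_male(47.0, mu_m=48.0, sd_m=2.6, mu_f=42.0, sd_f=2.0):.2f}")
```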

  9. Comparison of vapor sampling system (VSS) and in situ vapor sampling (ISVS) methods on Tanks C-107, BY-108, and S-102. Revision 1

    International Nuclear Information System (INIS)

    Huckaby, J.L.; Edwards, J.A.; Evans, J.C.

    1996-08-01

    This report discusses comparison tests of two methods for collecting vapor samples from the headspaces of Hanford Site high-level radioactive waste tanks. The two sampling methods compared are the truck-mounted vapor sampling system (VSS) and the cart-mounted in situ vapor sampling (ISVS) system. Three tanks were sampled by both the VSS and ISVS methods from the same access risers within the same 8-hour period. These tanks have diverse headspace compositions and represent the highest known levels of several key vapor analytes

  10. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
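
    A minimal importance sampling sketch for a rare-event probability, returning the estimate together with its estimated standard error, the quantity whose bounds the paper studies; the biasing density is a simple mean shift:

```python
import numpy as np
from scipy.stats import norm

def importance_sampling_pf(threshold, shift, n=10_000, seed=0):
    """Estimate p = P(X > threshold) for X ~ N(0,1) by sampling from a
    shifted biasing density N(shift, 1) and weighting each sample by
    the likelihood ratio. Returns the estimate and its standard error.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(shift, 1.0, n)
    w = norm.pdf(x) / norm.pdf(x, loc=shift)   # likelihood ratio
    ind = (x > threshold).astype(float)
    est = np.mean(ind * w)
    se = np.std(ind * w, ddof=1) / np.sqrt(n)
    return est, se

# Rare event: P(X > 4) ~ 3.17e-5; biasing mean placed at the threshold
print(importance_sampling_pf(4.0, shift=4.0))
```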

  11. Empirical insights and considerations for the OBT inter-laboratory comparison of environmental samples.

    Science.gov (United States)

    Kim, Sang-Bog; Roche, Jennifer

    2013-08-01

    Organically bound tritium (OBT) is an important tritium species that can be measured in most environmental samples, but it has only recently been recognized as a distinct species of tritium in these samples. Currently, OBT is not routinely measured by environmental monitoring laboratories around the world, and there are no certified reference materials (CRMs) for environmental samples. Thus, quality assurance (QA), or verification of the accuracy of OBT measurements, is not possible. Alternatively, quality control (QC), or verification of the precision of OBT measurements, can be achieved. In the past, there have been differences in OBT analysis results between environmental laboratories, possibly because of differences in analytical methods. Therefore, inter-laboratory OBT comparisons among environmental laboratories are important and would provide a good opportunity for adopting a reference OBT analytical procedure. Because of these analytical issues, only limited information is available on OBT measurement. Previously conducted OBT inter-laboratory exercises are reviewed and the findings described. Based on our experiences, a few considerations are suggested for the international OBT inter-laboratory comparison exercise to be completed in the near future.

  12. Comparison of 3 estimation methods of mycophenolic acid AUC based on a limited sampling strategy in renal transplant patients.

    Science.gov (United States)

    Hulin, Anne; Blanchet, Benoît; Audard, Vincent; Barau, Caroline; Furlan, Valérie; Durrbach, Antoine; Taïeb, Fabrice; Lang, Philippe; Grimbert, Philippe; Tod, Michel

    2009-04-01

    A significant relationship between mycophenolic acid (MPA) area under the plasma concentration-time curve (AUC) and the risk for rejection has been reported. Based on 3 concentration measurements, 3 approaches have been proposed for the estimation of MPA AUC, involving either a multilinear regression approach model (MLRA) or a Bayesian estimation using either gamma absorption or zero-order absorption population models. The aim of the study was to compare the 3 approaches for the estimation of MPA AUC in 150 renal transplant patients treated with mycophenolate mofetil and tacrolimus. The population parameters were determined in 77 patients (learning study). The AUC estimation methods were compared in the learning population and in 73 patients from another center (validation study). In the latter study, the reference AUCs were estimated by the trapezoidal rule on 8 measurements. MPA concentrations were measured by liquid chromatography. The gamma absorption model gave the best fit. In the learning study, the AUCs estimated by both Bayesian methods were very similar, whereas the multilinear approach was highly correlated but yielded estimates about 20% lower than Bayesian methods. This resulted in dosing recommendations differing by 250 mg/12 h or more in 27% of cases. In the validation study, AUC estimates based on the Bayesian method with gamma absorption model and multilinear regression approach model were, respectively, 12% higher and 7% lower than the reference values. To conclude, the bicompartmental model with gamma absorption rate gave the best fit. The 3 AUC estimation methods are highly correlated but not concordant. For a given patient, the same estimation method should always be used.

  13. Matrix algebra and sampling theory : The case of the Horvitz-Thompson estimator

    NARCIS (Netherlands)

    Dol, W.; Steerneman, A.G.M.; Wansbeek, T.J.

    Matrix algebra is a tool not commonly employed in sampling theory. The intention of this paper is to help change this situation by showing, in the context of the Horvitz-Thompson (HT) estimator, the convenience of using a number of matrix-algebra results. Sufficient conditions for the
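
    For reference, the HT estimator itself (the paper's matrix-algebra treatment is not reproduced):

```python
import numpy as np

def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimator of a population total:
        T_hat = sum_i y_i / pi_i
    over the sampled units, where pi_i is each unit's inclusion
    probability under the (possibly unequal-probability) design.
    """
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    return np.sum(y / pi)

# Unequal-probability sample: larger units were more likely to be drawn
y_sampled  = [12.0, 40.0, 7.5]      # observed values
pi_sampled = [0.10, 0.40, 0.05]     # their inclusion probabilities
print(horvitz_thompson_total(y_sampled, pi_sampled))  # 120 + 100 + 150 = 370
```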

  14. Event-based state estimation for a class of complex networks with time-varying delays: A comparison principle approach

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenbing [Department of Mathematics, Yangzhou University, Yangzhou 225002 (China); Wang, Zidong [Department of Computer Science, Brunel University London, Uxbridge, Middlesex, UB8 3PH (United Kingdom); Liu, Yurong, E-mail: yrliu@yzu.edu.cn [Department of Mathematics, Yangzhou University, Yangzhou 225002 (China); Communication Systems and Networks (CSN) Research Group, Faculty of Engineering, King Abdulaziz University, Jeddah 21589 (Saudi Arabia); Ding, Derui [Shanghai Key Lab of Modern Optical System, Department of Control Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093 (China); Alsaadi, Fuad E. [Communication Systems and Networks (CSN) Research Group, Faculty of Engineering, King Abdulaziz University, Jeddah 21589 (Saudi Arabia)

    2017-01-05

    The paper is concerned with the state estimation problem for a class of time-delayed complex networks with an event-triggering communication protocol. A novel event generator function, dependent not only on the measurement output but also on a predefined positive constant, is proposed with the aim of reducing the communication burden. A new concept of exponentially ultimate boundedness is provided to quantify the estimation performance. By means of the comparison principle, sufficient conditions are obtained to guarantee that the estimation error is exponentially ultimately bounded, and the estimator gains are then obtained in terms of the solution of certain matrix inequalities. Furthermore, a rigorous proof shows that the designed triggering condition is free of Zeno behavior. Finally, a numerical example illustrates the effectiveness of the proposed event-based estimator. - Highlights: • An event-triggered estimator is designed for complex networks with time-varying delays. • A novel event generator function is proposed to reduce the communication burden. • The comparison principle is utilized to derive the sufficient conditions. • The designed triggering condition is shown to be free of Zeno behavior.

  16. Comparison of performance of some common Hartmann-Shack centroid estimation methods

    Science.gov (United States)

    Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.

    2016-03-01

    The accuracy of estimating optical aberrations by measuring the distorted wave front with a Hartmann-Shack wave front sensor (HSWS) depends mainly on the accuracy with which the centroid of each focal spot is measured. The most commonly used methods for centroid estimation, such as the brightest spot centroid, first moment centroid, weighted center of gravity, and intensity weighted center of gravity, are generally applied to the entire individual sub-aperture of the lenslet array. However, these centroid estimation processes are sensitive to reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for estimating optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing and adaptive windowing). As an example we use the aberrations of a human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye emulating myopic and hyper-metropic defocus values of up to 2 diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications, estimating aberrations within the typical clinically acceptable margin of a quarter diopter, when certain pre-processing steps are used to reduce the impact of external factors.
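
    A sketch of the common centroiding variants on a simulated sub-aperture spot; thresholding and intensity weighting are the pre-processing knobs discussed above:

```python
import numpy as np

def centroid(img, threshold=0.0, power=1.0):
    """Spot centroid estimators on a sub-aperture image:
      power=1, threshold=0  -> first-moment centroid (center of gravity)
      power>1               -> intensity-weighted CoG (I**power weights)
      threshold>0           -> thresholded variant (suppresses background)
    Returns (x, y) in pixel coordinates.
    """
    w = np.where(img >= threshold, img, 0.0) ** power
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Noisy Gaussian spot centred at (12.3, 8.7) on a 24x24 sub-aperture
rng = np.random.default_rng(3)
ys, xs = np.indices((24, 24))
spot = np.exp(-((xs - 12.3) ** 2 + (ys - 8.7) ** 2) / 4.0)
img = spot + rng.normal(0.0, 0.02, spot.shape)

print(centroid(img))                           # plain CoG, noise-sensitive
print(centroid(img, threshold=0.1, power=2))   # thresholded, weighted CoG
```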

  17. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital, and neglecting the randomness of its distribution may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, which Moody's new data show the recovery rates of corporate loans and bonds to be. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation, reaching the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it can fit the curve of recovery rates of loans and bonds. Using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management.
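
    A sketch of the two fits on hypothetical bimodal recovery-rate data: a single Beta fit by the method of moments versus a Gaussian kernel density estimate; the KDE adapts to the two interior modes, which a single Beta cannot place at 0.2 and 0.8 simultaneously.

```python
import numpy as np
from scipy import stats

# Bimodal recovery-rate sample (hypothetical): many low and many high
rng = np.random.default_rng(5)
recoveries = np.concatenate([rng.beta(2, 8, 600), rng.beta(8, 2, 400)])

# 1) Single Beta fit via method of moments
m, v = recoveries.mean(), recoveries.var()
common = m * (1 - m) / v - 1
a_hat, b_hat = m * common, (1 - m) * common

# 2) Gaussian kernel density estimate
kde = stats.gaussian_kde(recoveries)

grid = np.linspace(0.01, 0.99, 5)
print("beta:", np.round(stats.beta.pdf(grid, a_hat, b_hat), 2))
print("kde :", np.round(kde(grid), 2))
```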

  18. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior, but this trend has been accompanied by growing model complexity and parameter counts, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces, whereas heuristic optimization algorithms that evolve candidate solutions iteratively show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
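
    A minimal GLUE sketch with plain random prior sampling and an informal inverse-SSE likelihood; the study's contribution, replacing the sampler with heuristic optimizers (GA, differential evolution, SCE), slots into the prior_sampler argument:

```python
import numpy as np

def glue(model, observed, prior_sampler, n=5000, keep_frac=0.1, seed=0):
    """Minimal GLUE sketch: sample parameter sets, score each with an
    informal likelihood (inverse sum of squared errors), keep the
    'behavioral' sets, and weight predictions by normalized likelihood.
    """
    rng = np.random.default_rng(seed)
    thetas = prior_sampler(rng, n)
    sse = np.array([np.sum((model(t) - observed) ** 2) for t in thetas])
    lik = 1.0 / (sse + 1e-12)
    keep = np.argsort(lik)[-int(n * keep_frac):]        # behavioral sets
    w = lik[keep] / lik[keep].sum()
    preds = np.array([model(thetas[i]) for i in keep])
    return (w[:, None] * preds).sum(axis=0)             # weighted prediction

# Toy "model": y = a * x with unknown a; observations from a = 2.5
x = np.linspace(0, 1, 20)
obs = 2.5 * x + np.random.default_rng(1).normal(0, 0.05, x.size)
pred = glue(lambda t: t[0] * x, obs, lambda rng, n: rng.uniform(0, 5, (n, 1)))
print(np.round(pred[-5:], 2))
```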

  19. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    Science.gov (United States)

    Morimoto, Emi; Namerikawa, Susumu

    The most notable recent trend in bidding and pricing behavior is the increasing number of bids just above the criteria for low-price bidding investigations. In Japanese public works bidding, the contractor's markup is the difference between the bid price and the execution price, and in practice the difference between the criteria for low-price bidding investigations and the execution price. Bidders' strategies and behavior have effectively been controlled by public engineers' budget estimates, since estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains in general use, so two standard estimation methods now coexist in Japan. In this study, we statistically analyzed bid information for civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008, and show that bidding and pricing behavior is related to the estimation method used. The two standard estimation methods produce different numbers of bidders (the bid/no-bid decision) and different distributions of bid prices (the markup decision). The comparison of bid price distributions showed that, for large public works, the percentage of bids concentrated at the criteria for low-price bidding investigations tends to be higher under the unit-price-type estimation method than under the accumulated estimation method. The number of bidders for works estimated by unit price also tends to increase significantly, suggesting that the estimation method is one of the factors construction companies consider when deciding whether to participate in a bid.

  20. Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado

    Science.gov (United States)

    Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.

    2013-01-01

    The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field-scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site-specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards. Constituent loading within the study reach is dominated by effluent from the
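
    The load computation at the heart of the approach is simple; a sketch with hypothetical replicate data, where the spread across replicates supplies the error bars discussed above:

```python
import numpy as np

# Synoptic mass-balance sketch: instream load at each site is
# discharge * concentration; replicate campaigns give error bars.
# Values are hypothetical (three replicates at four sites).
q = np.array([  # discharge, L/s, e.g. from tracer dilution
    [55, 58, 61, 70],
    [54, 57, 62, 69],
    [56, 59, 60, 71],
], dtype=float)
c = np.array([  # dissolved Zn concentration, mg/L
    [0.9, 1.4, 2.6, 2.5],
    [1.0, 1.3, 2.4, 2.6],
    [0.9, 1.5, 2.5, 2.4],
])

load = q * c * 0.0864            # mg/s -> kg/day (86400 s/day / 1e6 mg/kg)
mean, sd = load.mean(axis=0), load.std(axis=0, ddof=1)
for i, (m, s) in enumerate(zip(mean, sd)):
    print(f"site {i}: {m:.1f} +/- {s:.1f} kg/day")
# A jump in load between adjacent sites flags a source reach.
```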

  2. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Energy Technology Data Exchange (ETDEWEB)

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges and graph structures that evolve over time. * An algorithm for maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets. A baseline for the static-weight version of this task is sketched below.
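
    As a baseline for the static-weight version of the weighted-subsample task (the report's dynamic-weight algorithm is not reproduced), the standard Efraimidis-Spirakis one-pass scheme:

```python
import heapq
import random

def weighted_reservoir_sample(stream, k, seed=0):
    """One-pass weighted sampling without replacement (the
    Efraimidis-Spirakis A-ES scheme): each (item, weight) pair gets key
    u**(1/w) with u ~ Uniform(0,1); keep the k items with largest keys.
    """
    rng = random.Random(seed)
    heap = []                      # min-heap of (key, item)
    for item, w in stream:
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# Hypothetical edge stream with weights cycling over 1..7
stream = [(f"edge{i}", 1.0 + (i % 7)) for i in range(10_000)]
print(weighted_reservoir_sample(stream, k=5))
```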

  3. Estimation of uranium in bioassay samples of occupational workers by laser fluorimetry

    International Nuclear Information System (INIS)

    Suja, A.; Prabhu, S.P.; Sawant, P.D.; Sarkar, P.K.; Tiwari, A.K.; Sharma, R.

    2010-01-01

    A newly established uranium processing facility has been commissioned at BARC, Trombay. Monitoring of occupational workers at regular intervals is essential to assess intake of uranium by the workers in this facility. The design and engineering safety features of the plant are such that there is a very low probability of uranium becoming airborne during normal operations. However, leakages from the system during routine maintenance of the plant may result in intake of uranium by workers. As per the new biokinetic model for uranium, 63% of uranium entering the blood stream gets directly excreted in urine. Therefore, bioassay monitoring (urinalysis) was recommended for these workers. A group of 21 workers was selected for bioassay monitoring to assess the existing urinary excretion levels of uranium before the commencement of actual work. For this purpose, a sample collection kit along with an instruction slip was provided to the workers. Bioassay samples received were wet ashed with conc. nitric acid and hydrogen peroxide to break down the metabolized complexes of uranium, and the uranium was co-precipitated with calcium phosphate. Separation of uranium from the matrix was done using an ion exchange technique, and final activity quantification in these samples was done using a laser fluorimeter (Quantalase, Model No. NFL/02). Calibration of the laser fluorimeter is done using a 10 ppb uranium standard (WHO, France Ref. No. 180000). Verification of the system performance is done by measuring the concentration of uranium in standards (1 ppb to 100 ppb). The standard addition method was followed for estimation of the uranium concentration in the samples. Uranyl ions present in the sample are excited by a pulsed nitrogen laser at 337.1 nm and, on de-excitation, emit fluorescence at 540 nm, whose intensity is measured by the PMT. To estimate the uranium in the bioassay samples, a known aliquot of the sample was mixed with 5% sodium pyrophosphate and the fluorescence intensity was measured
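
    As a rough illustration of the standard addition method described above, the sketch below fits a line to fluorescence intensity versus spiked uranium concentration and reads the unknown concentration off the x-intercept; the readings are hypothetical and dilution corrections are omitted.

```python
import numpy as np

# Hypothetical readings: fluorescence intensity vs. uranium spiked into
# equal aliquots of the same sample (dilution corrections omitted).
added_ppb = np.array([0.0, 5.0, 10.0, 20.0])
intensity = np.array([12.1, 21.8, 31.5, 51.2])   # arbitrary units

# Linear response I = m * (c_sample + c_added) gives intercept b = m * c_sample,
# so the unknown concentration is the magnitude of the x-intercept, b / m.
m, b = np.polyfit(added_ppb, intensity, 1)
print(f"estimated uranium concentration: {b / m:.2f} ppb")
```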

  4. Comparison of two methods for estimating the number of undocumented Mexican adults in Los Angeles County.

    Science.gov (United States)

    Heer, D M; Passel, J F

    1987-01-01

    This article compares 2 different methods for estimating the number of undocumented Mexican adults in Los Angeles County. The 1st method, the survey-based method, uses a combination of 1980 census data and the results of a survey conducted in Los Angeles County in 1980 and 1981. A sample was selected from babies born in Los Angeles County who had a mother or father of Mexican origin. The survey included questions about the legal status of the baby's parents and certain other relatives. The resulting estimates of undocumented Mexican immigrants are for males aged 18-44 and females aged 18-39. The 2nd method, the residual method, involves comparison of census figures for aliens counted with estimates of legally-resident aliens developed principally with data from the Immigration and Naturalization Service (INS). For this study, estimates by age, sex, and period of entry were produced for persons born in Mexico and living in Los Angeles County. The results of this research indicate that it is possible to measure undocumented immigration with different techniques, yet obtain results that are similar. Both techniques presented here are limited in that they represent estimates of undocumented aliens based on the 1980 census. The number of additional undocumented aliens not counted remains a subject of conjecture. The fact that the survey-based estimates (228,700) are broadly comparable to the residual estimates (317,800) suggests that the number of undocumented aliens not counted in the census may not be an extremely large fraction of the undocumented population. The survey-based estimates have some significant advantages over the residual estimates. The survey provides tabulations of the undocumented population by characteristics other than the limited demographic information provided by the residual technique. On the other hand, the survey-based estimates require that a survey be conducted and, if national or regional estimates are called for, they may

  5. Estimation of sampling error uncertainties in observed surface air temperature change in China

    Science.gov (United States)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series differ clearly from those of other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  6. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...

  7. Evaluation of spot and passive sampling for monitoring, flux estimation and risk assessment of pesticides within the constraints of a typical regulatory monitoring scheme.

    Science.gov (United States)

    Zhang, Zulin; Troldborg, Mads; Yates, Kyari; Osprey, Mark; Kerr, Christine; Hallett, Paul D; Baggaley, Nikki; Rhind, Stewart M; Dawson, Julian J C; Hough, Rupert L

    2016-11-01

    In many agricultural catchments of Europe and North America, pesticides occur at generally low concentrations with significant temporal variation. This poses several challenges for both monitoring and understanding ecological risks/impacts of these chemicals. This study aimed to compare the performance of passive and spot sampling strategies given the constraints of typical regulatory monitoring. Nine pesticides were investigated in a river currently undergoing regulatory monitoring (River Ugie, Scotland). Within this regulatory framework, spot and passive sampling were undertaken to understand spatiotemporal occurrence, mass loads and ecological risks. All the target pesticides were detected in water by both sampling strategies. Chlorotoluron was observed to be the dominant pesticide by both spot (maximum: 111.8 ng/l, mean: 9.35 ng/l) and passive sampling (maximum: 39.24 ng/l, mean: 4.76 ng/l). The annual pesticide loads were estimated to be 2735 g and 1837 g based on the spot and passive sampling data, respectively. The spatiotemporal trend suggested that agricultural activities were the primary source of the compounds, with variability in loads explained largely by the timing of pesticide applications and rainfall. The risk assessment showed chlorotoluron and chlorpyrifos posed the highest ecological risks, with 23% of the chlorotoluron spot samples and 36% of the chlorpyrifos passive samples resulting in a Risk Quotient greater than 0.1. This suggests that mitigation measures might need to be taken to reduce the input of pesticides into the river. The overall comparison of the two sampling strategies supported the hypothesis that passive sampling tends to integrate the contaminants over a period of exposure and allows quantification of contamination at low concentration. The results suggested that within a regulatory monitoring context passive sampling was more suitable for flux estimation and risk assessment of trace contaminants which cannot be diagnosed by spot sampling.
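
    For readers unfamiliar with the Risk Quotient screening used above, a minimal worked example follows; the PNEC value is illustrative, not the one used in the study.

```python
# Worked example of the Risk Quotient screening used in the study:
# RQ = measured environmental concentration / predicted no-effect concentration.
mec_ng_per_l = 39.24    # e.g. the peak passive-sampling chlorotoluron estimate
pnec_ng_per_l = 100.0   # illustrative PNEC; real values are substance-specific
rq = mec_ng_per_l / pnec_ng_per_l
print(f"RQ = {rq:.2f}: " + ("potential risk" if rq > 0.1 else "low concern"))
```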

  8. Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning

    Science.gov (United States)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2016-08-01

    An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine-learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from its physical temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the learning of a special class of a restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep-learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here.
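
    The premise above, that samples follow a Boltzmann distribution at an unknown instance-dependent temperature, suggests a simple estimator: regress log empirical state frequencies on state energies. The sketch below implements that idea on a toy system; it captures the spirit of the approach but is not the authors' exact algorithm.

```python
import numpy as np
from collections import Counter

def estimate_effective_temperature(samples, energy):
    """Fit T_eff assuming samples follow p(s) ~ exp(-E(s) / T_eff):
    regress log empirical frequency on energy; slope = -1/T_eff."""
    counts = Counter(samples)
    states = list(counts)
    log_freq = np.log([counts[s] / len(samples) for s in states])
    energies = np.array([energy(s) for s in states])
    slope, _ = np.polyfit(energies, log_freq, 1)
    return -1.0 / slope

# Toy check: draw from a known Boltzmann distribution at T = 2.0.
rng = np.random.default_rng(0)
levels, T_true = np.arange(5.0), 2.0
p = np.exp(-levels / T_true)
p /= p.sum()
draws = rng.choice(levels, size=50_000, p=p)
print(estimate_effective_temperature(list(draws), lambda s: s))  # ~2.0
```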

  9. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  10. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
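
    A minimal sketch of the sample-size planning idea, assuming the classical Feldt F-based confidence interval for coefficient alpha stands in for the composite-reliability intervals developed in the paper; the anticipated reliability, item count and target width below are illustrative.

```python
from scipy.stats import f as fdist

def feldt_width(alpha_hat, n, k, conf=0.95):
    """Width of the Feldt (1965) F-based confidence interval for
    coefficient alpha with n subjects and k items."""
    g = 1.0 - conf
    df1, df2 = n - 1, (n - 1) * (k - 1)
    f_lo = fdist.ppf(g / 2, df1, df2)
    f_hi = fdist.ppf(1 - g / 2, df1, df2)
    return (1.0 - alpha_hat) * (f_hi - f_lo)

def plan_n(alpha_hat, k, target_width, conf=0.95):
    """Smallest n whose expected CI width is at most target_width."""
    n = 10
    while feldt_width(alpha_hat, n, k, conf) > target_width:
        n += 1
    return n

# E.g. 10 items, anticipated alpha of .85, desired 95% CI width of .10.
print(plan_n(alpha_hat=0.85, k=10, target_width=0.10))
```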

  11. Comparison of Experimental Methods for Estimating Matrix Diffusion Coefficients for Contaminant Transport Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Telfeyan, Katherine Christina [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ware, Stuart Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Kay Hanson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-06

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada National Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  12. Comparison of experimental methods for estimating matrix diffusion coefficients for contaminant transport modeling

    Science.gov (United States)

    Telfeyan, Katherine; Ware, S. Doug; Reimus, Paul W.; Birdsell, Kay H.

    2018-02-01

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada National Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  13. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Full Text Available Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, the Beta distribution has a serious defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds, as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds much better. Finally, a Chi-square test of the Gaussian kernel density estimation shows that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
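
    A minimal sketch of the comparison on synthetic bimodal recovery rates: a single fitted Beta density versus a Gaussian kernel density estimate. The in-sample KDE likelihood is optimistic, but the gap illustrates the qualitative point that a single Beta cannot track two modes.

```python
import numpy as np
from scipy import stats

# Synthetic bimodal recovery rates on (0, 1), mimicking the loan/bond data.
rng = np.random.default_rng(1)
rates = np.concatenate([rng.beta(8, 2, 600), rng.beta(2, 10, 400)])

# Single Beta fit (constrained to [0, 1]) versus Gaussian kernel density.
a, b, loc, scale = stats.beta.fit(rates, floc=0, fscale=1)
kde = stats.gaussian_kde(rates)

# In-sample log-likelihoods; the KDE tracks both modes, the Beta cannot.
ll_beta = stats.beta.logpdf(rates, a, b, loc, scale).sum()
ll_kde = np.log(kde(rates)).sum()
print(f"log-likelihood  beta: {ll_beta:.1f}  kde: {ll_kde:.1f}")
```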

  14. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, the Beta distribution has a serious defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds, as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds much better. Finally, a Chi-square test of the Gaussian kernel density estimation shows that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  15. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
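
    A sketch of a two-step Heckman-type sample selection correction in the spirit of the model described above; the data, variable names and coefficients are synthetic, and the published study's exact specification is not reproduced.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Synthetic data loosely mirroring the setting: gas demand is observed
# only for households that use gas, and selection is correlated with the
# demand error (rho = 0.6). All names and coefficients are made up.
rng = np.random.default_rng(2)
n = 5_000
income = rng.normal(size=n)
house_size = rng.normal(size=n)
apartment = (rng.random(n) < 0.5).astype(float)   # selection-only variable
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T

selected = (0.5 * income + 1.0 * apartment + u) > 0
demand = 1.0 + 0.8 * house_size + 0.5 * income + e   # observed if selected

# Step 1: probit for selection, then the inverse Mills ratio.
Z = sm.add_constant(np.column_stack([income, apartment]))
probit = sm.Probit(selected.astype(float), Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected subsample, with the IMR as an extra
# regressor; its coefficient absorbs the selection bias.
X = sm.add_constant(np.column_stack([house_size, income, imr])[selected])
print(sm.OLS(demand[selected], X).fit().params)  # const, size, income, imr
```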

  16. Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.

    Science.gov (United States)

    Azzari, Lucio; Foi, Alessandro

    2014-08-01

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on a mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
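
    For contrast, the conventional baseline that the paper improves on can be sketched in a few lines: tile the image, collect local (mean, variance) pairs, and fit a linear (Poissonian-Gaussian) noise model var = a*mean + b. The patch size and noise parameters below are illustrative; this is not the paper's heterogeneous-patch method.

```python
import numpy as np

def fit_signal_dependent_noise(img, patch=8):
    """Conventional baseline: tile the image, take each tile's
    (mean, variance), and fit var = a * mean + b."""
    h, w = (s - s % patch for s in img.shape)
    tiles = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    means, variances = tiles.mean(axis=1), tiles.var(axis=1, ddof=1)
    a, b = np.polyfit(means, variances, 1)
    return a, b

# Toy raw image: piecewise-constant intensity plus noise with a=0.5, b=4.
rng = np.random.default_rng(3)
clean = np.tile(np.repeat(np.linspace(10, 200, 32), 8), (256, 1))
noisy = clean + rng.normal(scale=np.sqrt(0.5 * clean + 4.0))
print(fit_signal_dependent_noise(noisy))  # approximately (0.5, 4.0)
```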

  17. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of the binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurrence in the binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined using simulation under different sample sizes. The results showed that the chance of separation occurrence with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreases and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
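
    A small demonstration of the separation problem itself, on hypothetical data: when a covariate perfectly predicts the response, the probit MLE diverges, which statsmodels surfaces as an error or a warning depending on the version. Firth's penalized-likelihood fix is standard in R (e.g. logistf, brglm) but has no canonical probit implementation in Python, so only the failure mode is sketched here.

```python
import warnings
import numpy as np
import statsmodels.api as sm

# Hypothetical data with complete separation: x >= 0 perfectly predicts y = 1.
x = np.array([-3.0, -2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0])
y = (x >= 0).astype(float)

try:
    with warnings.catch_warnings():
        warnings.simplefilter("error")  # surface separation warnings as errors
        sm.Probit(y, sm.add_constant(x)).fit(disp=0)
    print("converged (unexpected under separation)")
except Exception as err:
    # Depending on the statsmodels version this is a PerfectSeparationError
    # or a PerfectSeparationWarning escalated by the filter above.
    print("MLE breaks down under separation:", type(err).__name__)
```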

  18. Two-Sample Two-Stage Least Squares (TSTSLS) estimates of earnings mobility: how consistent are they?

    Directory of Open Access Journals (Sweden)

    John Jerrim

    2016-08-01

    Full Text Available Academics and policymakers have shown great interest in cross-national comparisons of intergenerational earnings mobility. However, producing consistent and comparable estimates of earnings mobility is not a trivial task. In most countries researchers are unable to observe earnings information for two generations. They are thus forced to rely upon imputed data from different surveys instead. This paper builds upon previous work by considering the consistency of the intergenerational correlation (ρ) as well as the elasticity (β), how this changes when using a range of different instrumental (imputer) variables, and highlighting an important but infrequently discussed measurement issue. Our key finding is that, while TSTSLS estimates of β and ρ are both likely to be inconsistent, the magnitude of this problem is much greater for the former than it is for the latter. We conclude by offering advice on estimating earnings mobility using this methodology.

  19. Background estimation in short-wave region during determination of total sample composition by x-ray fluorescence method

    International Nuclear Information System (INIS)

    Simakov, V.A.; Kordyukov, S.V.; Petrov, E.N.

    1988-01-01

    A method for background estimation in the short-wave spectral region during determination of total sample composition by the X-ray fluorescence method is described. Thirteen different rock types with considerable variations in base composition and Zr, Nb, Th, U contents below 7×10⁻³ % are investigated. The suggested method of background accounting provides a smaller statistical error of the background estimate than direct isolated measurement, and its determination in the short-wave region is reliable independent of the sample base. The possibilities of the suggested method are assessed for artificial mixtures whose main-component content corresponds to technological concentrates of niobium, zirconium, and tantalum.

  20. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  1. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    Lang, J.; Verwer, J.G.

    2007-01-01

    Abstract. This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  2. Estimating black bear density in New Mexico using noninvasive genetic sampling coupled with spatially explicit capture-recapture methods

    Science.gov (United States)

    Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.

    2016-01-01

    During the 2004–2005 to 2015–2016 hunting seasons, the New Mexico Department of Game and Fish (NMDGF) estimated black bear (Ursus americanus) abundance across the state by coupling density estimates with the distribution of primary habitat generated by Costello et al. (2001). These estimates have been used to set harvest limits. For example, a density of 17 bears/100 km² for the Sangre de Cristo and Sacramento Mountains and 13.2 bears/100 km² for the Sandia Mountains were used to set harvest levels. The advancement and widespread acceptance of non-invasive sampling and mark-recapture methods prompted the NMDGF to collaborate with the New Mexico Cooperative Fish and Wildlife Research Unit and New Mexico State University to update their density estimates for black bear populations in select mountain ranges across the state. We established 5 study areas in 3 mountain ranges: the northern (NSC; sampled in 2012) and southern Sangre de Cristo Mountains (SSC; sampled in 2013), the Sandia Mountains (Sandias; sampled in 2014), and the northern (NSacs) and southern Sacramento Mountains (SSacs; both sampled in 2014). We collected hair samples from black bears using two concurrent non-invasive sampling methods, hair traps and bear rubs. We used a gender marker and a suite of microsatellite loci to determine the individual identification of hair samples that were suitable for genetic analysis. We used these data to generate mark-recapture encounter histories for each bear and estimated density in a spatially explicit capture-recapture (SECR) framework. We constructed a suite of SECR candidate models using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We used Akaike’s Information Criterion corrected for small sample size (AICc) to rank and select the most supported model from which we estimated density. We set 554 hair traps, 117 bear rubs and collected 4,083 hair

  3. A Class of Estimators for Finite Population Mean in Double Sampling under Nonresponse Using Fractional Raw Moments

    Directory of Open Access Journals (Sweden)

    Manzoor Khan

    2014-01-01

    Full Text Available This paper presents new classes of estimators for estimating the finite population mean under double sampling in the presence of nonresponse when using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of the proposed class of estimators.

  4. Estimating average shock pressures recorded by impactite samples based on universal stage investigations of planar deformation features in quartz - Sources of error and recommendations

    Science.gov (United States)

    Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.

    2018-01-01

    Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating the average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal stage measurements and crystallographic indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding the PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets/grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (which is valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains to types (A-E) based on the PDF orientation pattern, and then calculating a mean shock pressure for each sample.

  5. Spacecraft Trajectory Estimation Using a Sampled-Data Extended Kalman Filter with Range-Only Measurements

    National Research Council Canada - National Science Library

    Erwin, R. S; Bernstein, Dennis S

    2005-01-01

    .... In this paper we use a sampled-data extended Kalman Filter to estimate the trajectory of a target satellite when only range measurements are available from a constellation of orbiting spacecraft...

  6. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.

  7. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimate variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram as previous information for the conception of additional sampling schemes was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.

  8. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    Science.gov (United States)

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation so can be expensive in models with a large computational cost.

  9. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  10. Estimating human exposure to perfluoroalkyl acids via solid food and drinks: Implementation and comparison of different dietary assessment methods.

    Science.gov (United States)

    Papadopoulou, Eleni; Poothong, Somrutai; Koekkoek, Jacco; Lucattini, Luisa; Padilla-Sánchez, Juan Antonio; Haugen, Margaretha; Herzke, Dorte; Valdersnes, Stig; Maage, Amund; Cousins, Ian T; Leonards, Pim E G; Småstuen Haug, Line

    2017-10-01

    Diet is a major source of human exposure to hazardous environmental chemicals, including many perfluoroalkyl acids (PFAAs). Several assessment methods of dietary exposure to PFAAs have been used previously, but there is a lack of comparisons between methods. To assess human exposure to PFAAs through diet by different methods and compare the results. We studied the dietary exposure to PFAAs in 61 Norwegian adults (74% women, average age: 42 years) using three methods: i) by measuring daily PFAA intakes through a 1-day duplicate diet study (separately in solid and liquid foods), ii) by estimating intake after combining food contamination with food consumption data, as assessed by 2-day weighted food diaries and iii) by a Food Frequency Questionnaire (FFQ). We used existing food contamination data mainly from samples purchased in Norway and, if not available, data from food purchased in other European countries. Duplicate diet samples (n=122) were analysed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) to quantify 15 PFAAs (11 perfluoroalkyl carboxylates and 4 perfluoroalkyl sulfonates). Differences and correlations between measured and estimated intakes were assessed. The most abundant PFAAs in the duplicate diet samples were PFOA, PFOS and PFHxS, and the median total intakes were 5.6 ng/day, 11 ng/day and 0.78 ng/day, respectively. PFOS and PFOA concentrations were higher in solid than liquid samples. PFOS was the main contributor to the contamination in the solid samples (median concentration 14 pg/g food), while it was PFOA in the liquid samples (median concentration: 0.72 pg/g food). High intakes of fats, oils, and eggs were statistically significantly related to high intakes of PFOS and PFOA from solid foods. High intake of milk and consumption of alcoholic beverages, as well as food in paper containers, were related to high PFOA intakes from liquid foods. PFOA intakes derived from the food diary and FFQ were significantly higher than

  11. Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.

    2011-01-01

    Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.

  12. Potential of ALOS2 and NDVI to Estimate Forest Above-Ground Biomass, and Comparison with Lidar-Derived Estimates

    Directory of Open Access Journals (Sweden)

    Gaia Vaglio Laurin

    2016-12-01

    Full Text Available Remote sensing supports carbon estimation, allowing the upscaling of field measurements to large extents. Lidar is considered the premier instrument to estimate above ground biomass, but data are expensive and collected on-demand, with limited spatial and temporal coverage. Data from the previous JERS and ALOS SAR satellites were extensively employed to model forest biomass, with the literature suggesting signal saturation at low-moderate biomass values, and an influence of plot size on estimate accuracy. The ALOS2 continuity mission, in operation since May 2014, produces data with improved features with respect to the former ALOS, such as increased spatial resolution and reduced revisit time. We used ALOS2 backscatter data, testing also the integration with additional features (SAR textures and NDVI from Landsat 8 data), together with ground truth, to model and map above ground biomass in two mixed forest sites: Tahoe (California) and Asiago (Alps). While texture was useful to improve the model performance, the best model was obtained using joined SAR and NDVI (R² = 0.66). In this model, only a slight saturation was observed, at higher levels than what is usually reported in the literature for SAR; the trend requires further investigation but the model confirmed the complementarity of optical and SAR data types. For comparison purposes, we also generated a biomass map for Asiago using lidar data, and considered a previous lidar-based study for Tahoe; in these areas, the observed R² were 0.92 for Tahoe and 0.75 for Asiago, respectively. The quantitative comparison of the carbon stocks obtained with the two methods allows discussion of sensor suitability. The range of local variation captured by lidar is higher than those by SAR and NDVI, with the latter showing overestimation. However, this overestimation is very limited for one of the study areas, suggesting that when the purpose is the overall quantification of the stored carbon, especially in areas with high carbon

  13. Testing of a method of importance sampling for use with SYVAC

    International Nuclear Information System (INIS)

    Dalrymple, G.J.; Prust, J.O.; Edwards, H.H.

    1985-10-01

    The Importance Sampling Scheme is designed to concentrate sampling in the high dose region of the parameter space. A sensitivity analysis of an initial case study is used to roughly define the high dose and risk region of the parameter space. By applying modified distributions to the individual parameter ranges it was possible to concentrate sampling in regions of the parameter range that lead to high doses and risks. Comparison of risk estimates and cumulative distribution functions of dose for an increasing number of runs of the SYVAC model indicated that the risk estimate had converged at 1200 Importance Sampling runs. Examination of a plot of risk in various dose bands supported this conclusion. It was clear that the random sampling had not achieved convergence at 400 runs. (author)
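
    The principle behind such a scheme can be shown with a one-dimensional toy: concentrate sampling in a rare "high dose" tail by drawing from a shifted proposal and reweighting. The SYVAC application modifies many parameter distributions at once, but the mechanics are the same; the threshold and sample sizes below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
threshold, n = 4.0, 2_000   # "high dose" cut on a standard normal score

# Plain random sampling: essentially no samples land in the rare region.
naive = (rng.normal(size=n) > threshold).mean()

# Importance sampling: draw from a proposal shifted into the region,
# then reweight each draw by p(x) / q(x).
x = rng.normal(loc=threshold, size=n)
w = stats.norm.pdf(x) / stats.norm.pdf(x, loc=threshold)
is_est = np.mean((x > threshold) * w)

print(f"naive: {naive:.1e}   importance: {is_est:.1e}   "
      f"exact: {stats.norm.sf(threshold):.1e}")
```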

  14. The biochemical estimation of age in Euphausiids: Laboratory calibration and field comparisons

    Science.gov (United States)

    Harvey, H. R.; Ju, Se-J.; Son, S.-K.; Feinberg, L. R.; Shaw, C. T.; Peterson, W. T.

    2010-04-01

    Euphausiids play a key role in many marine ecosystems as a link between primary producers and top predators. Understanding their demographic (i.e. age) structure is an essential tool to assess growth and recruitment as well as to determine how changes in environmental conditions might alter their condition and distribution. Age determination of crustaceans cannot be accomplished using traditional approaches, and here we evaluate the potential for biochemical products of tissue metabolism (termed lipofuscins) to determine the demographic structure of euphausiids in field collections. Lipofuscin was extracted from krill neural tissues (eye and eye-stalk), quantified using fluorescent intensity and normalized to tissue protein content to allow comparisons across animal sizes. Multiple fluorescent components from krill were observed, with the major product having a maximum fluorescence at an excitation of 355 nm and an emission of 510 nm. The needed age calibration of lipofuscin accumulation in Euphausia pacifica was accomplished using known-age individuals hatched and reared in the laboratory for over one year. Lipofuscin content extracted from neural tissues of laboratory-reared animals was highly correlated with the chronological age of the animals (r = 0.87). Calibrated with laboratory lipofuscin accumulation rates, field-collected sub-adult and adult E. pacifica in the Northeast Pacific were estimated to be older than 100 days and younger than 1 year. Comparative data for the Antarctic krill, E. superba, showed much higher lipofuscin values, suggesting a much longer lifespan than the more temperate species, E. pacifica. These regional comparisons suggest that biochemical indices allow a practical approach to estimate the population age structure of diverse populations, and combined with other measurements can provide estimates of vital rates (i.e. longevity, mortality, growth) for krill populations in dynamic environments.

  15. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
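
    A stripped-down sketch of the Monte Carlo null-distribution idea: simulate pairs of unrelated power-law (red-noise) light curves in the style of Timmer and Koenig (1995) and record the peak cross-correlation. The refinements the paper introduces (uneven sampling, interpolation, the Hanning window, normalization corrections, the Neyman construction) are omitted, so this only illustrates how easily steep red noise produces spurious correlations.

```python
import numpy as np

def simulate_power_law_lc(n, beta, rng):
    """Timmer & Koenig (1995)-style light curve whose power spectral
    density follows P(f) ~ f**(-beta)."""
    freqs = np.fft.rfftfreq(n, d=1.0)[1:]              # skip f = 0
    amp = freqs ** (-beta / 2.0)
    re, im = rng.normal(size=(2, freqs.size)) * amp
    spectrum = np.concatenate([[0.0], re + 1j * im])
    lc = np.fft.irfft(spectrum, n=n)
    return (lc - lc.mean()) / lc.std()

def max_ccf(a, b):
    """Maximum absolute cross-correlation over all lags."""
    cc = np.correlate(a, b, mode="full") / len(a)
    return np.abs(cc).max()

# Null distribution: peak CCF between *unrelated* steep red-noise curves.
rng = np.random.default_rng(5)
n, beta = 256, 2.0
peaks = [max_ccf(simulate_power_law_lc(n, beta, rng),
                 simulate_power_law_lc(n, beta, rng)) for _ in range(2_000)]
print("95th percentile of spurious peak CCF:", np.quantile(peaks, 0.95))
```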

  16. A NEW METHOD FOR NON DESTRUCTIVE ESTIMATION OF Jc IN YBaCuO CERAMIC SAMPLES

    Directory of Open Access Journals (Sweden)

    Giancarlo Cordeiro Costa

    2014-12-01

    Full Text Available This work presents a new method for the estimation of Jc as a bulk characteristic of YBCO blocks. The experimental magnetic interaction force between a SmCo permanent magnet and a YBCO block was compared to finite element method (FEM) simulation results, allowing us to search for the best-fit value of the critical current of the superconducting sample. As the FEM simulations were based on the Bean model, the critical current density was taken as an unknown parameter. This is a non-destructive estimation method, since there is no need to break off even a small piece of the sample for analysis.

  17. Comparison of diagnostic efficacy between CLE, tissue sampling, and CLE combined with tissue sampling for undetermined pancreaticobiliary strictures: a meta-analysis.

    Science.gov (United States)

    Gao, Ya-Dong; Qu, Ya-Wei; Liu, Hai-Feng

    2018-04-01

    The accurate diagnosis of undetermined pancreaticobiliary strictures remains challenging. Current ERCP-guided tissue sampling methods are of low sensitivity. Confocal laser endomicroscopy (CLE) is a new procedure that allows real-time optical biopsies and may improve the diagnosis of undetermined pancreaticobiliary strictures. The aim of this meta-analysis was to determine the diagnostic yield of CLE, tissue sampling, and CLE combined with tissue sampling for undetermined pancreaticobiliary strictures. PubMed, Embase, and the Cochrane Library databases were reviewed for relevant studies. Pooled estimates of sensitivity and specificity with 95% confidence intervals (CIs) were calculated using the random-effects meta-analysis model. The summary receiver-operating characteristic (SROC) curve was constructed, and the area under the receiver operating characteristic curve (AUC) was calculated. Twelve studies involving 591 patients were enrolled in our analysis. The overall sensitivity and the specificity estimate of CLE for discriminating benign and malignant pancreaticobiliary strictures were 87% (95%CI, 83-91%) and 76% (95%CI, 70-81%), respectively. The AUC to assess the diagnostic efficacy was 0.8705. For tissue sampling, the overall sensitivity and the specificity estimate were 64% (95%CI, 57-70%) and 94% (95%CI, 90-97%), respectively. The AUC to assess the diagnostic efficacy was 0.8040. A combination of both methods increased the sensitivity (93%; 95%CI, 88-96%) with a specificity of 82% (95%CI, 74-89%). The AUC to assess the diagnostic efficacy was 0.9377. There was no publication bias by Deeks' funnel plot with p = .936. Compared with tissue sampling, CLE may increase the sensitivity for the diagnosis of malignant pancreaticobiliary strictures. A combination of both can effectively diagnose malignant pancreaticobiliary strictures.

  18. Comparison between SAR Soil Moisture Estimates and Hydrological Model Simulations over the Scrivia Test Site

    Directory of Open Access Journals (Sweden)

    Alberto Pistocchi

    2013-10-01

    Full Text Available In this paper, the results of a comparison between the soil moisture content (SMC) estimated from C-band SAR, the SMC simulated by a hydrological model, and the SMC measured on ground are presented. The study was carried out in an agricultural test site located in North-west Italy, in the Scrivia river basin. The hydrological model used for the simulations consists of a one-layer soil water balance model, which was found to be able to partially reproduce the soil moisture variability, retaining at the same time simplicity and effectiveness in describing the topsoil. SMC estimates were derived from the application of a retrieval algorithm, based on an Artificial Neural Network approach, to a time series of ENVISAT/ASAR images acquired over the Scrivia test site. The core of the algorithm was represented by a set of ANNs able to deal with the different SAR configurations in terms of polarizations and available ancillary data. In case of crop covered soils, the effect of vegetation was accounted for using NDVI information, or, if available, the cross-polarized channel. The algorithm results showed some ability in retrieving SMC, with RMSE generally <0.04 m³/m³ and very low bias (i.e., <0.01 m³/m³), except for the case of VV polarized SAR images: in this case, the obtained RMSE was somewhat higher than 0.04 m³/m³ (≤0.058 m³/m³). The algorithm was implemented within the framework of an ESA project concerning the development of an operative algorithm for the SMC retrieval from Sentinel-1 data. The algorithm should take into account the GMES requirements of SMC accuracy (≤5% in volume), spatial resolution (≤1 km) and timeliness (3 h from observation). The SMC estimated by the SAR algorithm, the SMC estimated by the hydrological model, and the SMC measured on ground were found to be in good agreement. The hydrological model simulations were performed at two soil depths: 30 and 5 cm and showed that the 30 cm simulations indicated, as expected, SMC

  19. An Improved Estimation of Regional Fractional Woody/Herbaceous Cover Using Combined Satellite Data and High-Quality Training Samples

    Directory of Open Access Journals (Sweden)

    Xu Liu

    2017-01-01

    Full Text Available Mapping vegetation cover is critical for understanding and monitoring ecosystem functions in semi-arid biomes. As existing estimates tend to underestimate the woody cover in areas with dry deciduous shrubland and woodland, we present an approach to improve the regional estimation of woody and herbaceous fractional cover in the East Asia steppe. This developed approach uses Random Forest models combining multiple remote sensing data: training samples derived from high-resolution imagery in a tailored spatial sampling, and model inputs composed of specific metrics from the MODIS sensor and ancillary variables including topographic, bioclimatic, and land surface information. We emphasize that effective spatial sampling, high-quality classification, and adequate geospatial information are important prerequisites for establishing appropriate model inputs and achieving high-quality training samples. This study suggests that the optimal models improve estimation accuracy (NMSE 0.47 for woody and 0.64 for herbaceous plants) and show a consistent agreement with field observations. Compared with the existing woody estimate product, the proposed woody cover estimation can delineate regions with subshrubs and shrubs, showing an improved capability of capturing the spatialized detail of vegetation signals. This approach is applicable over sizable semi-arid areas such as temperate steppes, savannas, and prairies.

  20. SNP calling, genotype calling, and sample allele frequency estimation from new-generation sequencing data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    We present a statistical framework for estimation and application of sample allele frequency spectra from New-Generation Sequencing (NGS) data. In this method, we first estimate the allele frequency spectrum using maximum likelihood. In contrast to previous methods, the likelihood function is cal...... be extended to various other cases including cases with deviations from Hardy-Weinberg equilibrium. We evaluate the statistical properties of the methods using simulations and by application to a real data set....

  1. Cost-effective sampling of 137Cs-derived net soil redistribution: part 1 – estimating the spatial mean across scales of variation

    International Nuclear Information System (INIS)

    Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.

    2015-01-01

    The 137Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many 137Cs measurements are required and hence estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of 137Cs-derived net soil redistribution to apply their often limited resources across scales of variation (field, catchment, region etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954–2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches to cut across scales of variation, we extrapolate our estimate of net soil redistribution across the region, show that for the same resources, estimates from many fields could have been provided and would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate 137Cs-derived net soil redistribution across scales of variation. - Highlights: • The 137Cs technique estimates net time-integrated soil redistribution by all processes. • It is time-consuming and dominated by studies of individual fields. • We use limited resources to estimate soil
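
    A minimal sketch of the design-based estimator underlying such a stratified scheme: the spatial mean and its sampling variance from stratum sizes and within-stratum samples. Finite-population corrections are omitted, and the redistribution values and stratum sizes are illustrative, not from the study.

```python
import numpy as np

def stratified_mean(strata):
    """Design-based estimate of the spatial mean and its sampling variance
    from a stratified random sample. strata: list of (N_h, samples_h) with
    N_h the number of population units in stratum h."""
    N = sum(N_h for N_h, _ in strata)
    mean = sum(N_h * np.mean(y) for N_h, y in strata) / N
    var = sum((N_h / N) ** 2 * np.var(y, ddof=1) / len(y) for N_h, y in strata)
    return mean, var

# Illustrative 137Cs-derived redistribution rates (t/ha/yr) in three strata.
strata = [
    (400, [-12.1, -8.4, -15.0, -9.9]),   # eroding slopes
    (300, [-2.2, -3.8, -1.5]),           # gentle terrain
    (300, [4.0, 6.5, 5.1]),              # depositional zones
]
mean, var = stratified_mean(strata)
print(f"net redistribution: {mean:.2f} ± {np.sqrt(var):.2f} t/ha/yr")
```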

  2. Variations among animals when estimating the undegradable fraction of fiber in forage samples

    Directory of Open Access Journals (Sweden)

    Cláudia Batista Sampaio

    2014-10-01

    Full Text Available The objective of this study was to assess the variability among animals regarding the critical time required to estimate the undegradable fraction of fiber (ct) using an in situ incubation procedure. Five rumen-fistulated Nellore steers were used to estimate the degradation profile of fiber. Animals were fed a standard diet with an 80:20 forage:concentrate ratio. Sugarcane, signal grass hay, corn silage and fresh elephant grass samples were assessed. Samples were placed in F57 Ankom® bags and incubated in the rumens of the animals for 0, 6, 12, 18, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours. The degradation profiles were interpreted using a mixed non-linear model in which a random effect was associated with the degradation rate. For sugarcane, signal grass hay and corn silage, there were no significant variations among animals in the fractional degradation rate of neutral and acid detergent fiber; consequently, the ct required to estimate the undegradable fiber fraction did not vary among animals for those forages. However, significant variability among animals was found for the fresh elephant grass. The results suggest that the variability among animals in the degradation rate of fibrous components can be significant.
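
The degradation-profile idea lends itself to a short illustration. Below is a sketch of fitting a first-order fiber degradation model and deriving a critical incubation time; the model form, the toy residue values, and the 95%-of-the-degradable-pool criterion are assumptions, not the paper's exact mixed-model procedure (which adds a random animal effect on the rate):

```python
# Sketch: fit residue(t) = U + D*exp(-k*t) to in situ incubation data and
# derive a critical time ct at which 95% of the degradable pool is gone.
# Data values and the 95% criterion are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 6, 12, 24, 48, 72, 96, 120, 168, 240, 312.0])      # hours
residue = np.array([100, 92, 85, 74, 60, 52, 47, 44, 41, 40, 40.0])  # % of NDF

def profile(t, U, D, k):
    """Undegradable fraction U plus degradable pool D decaying at rate k."""
    return U + D * np.exp(-k * t)

(U, D, k), _ = curve_fit(profile, t, residue, p0=(40, 60, 0.03))

ct = np.log(20) / k   # exp(-k*ct) = 0.05, i.e. 95% of the pool degraded
print(f"U = {U:.1f}%, k = {k:.4f}/h, ct = {ct:.0f} h")
```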

  3. Estimation of uranium in bioassay samples of occupational workers by laser fluorimetry

    International Nuclear Information System (INIS)

    Suja, A.; Prabhu, S.P.; Sawant, P.D.; Sarkar, P.K.; Tiwari, A.K.; Sharma, R.

    2012-01-01

    A newly established uranium processing facility has been commissioned at BARC, Trombay. Monitoring of occupational workers is essential to assess intake of uranium in this facility. A group of 21 workers was selected for bioassay monitoring to assess the existing urinary excretion levels of uranium before the commencement of actual work. Bioassay samples collected from these workers were analyzed by an ion-exchange technique followed by laser fluorimetry. The standard addition method was followed for estimation of uranium concentration in the samples. The minimum detectable amount by this technique is about 0.2 ng. The uranium observed in these samples ranges from 19 to 132 ng/L. A few of these samples were also analyzed by the fission track analysis technique, and the results were found to be comparable to those obtained by laser fluorimetry. The urinary excretion rate observed for an individual can be regarded as a 'personal baseline' and will be treated as the existing level of uranium in urine for these workers at the facility. (author)

  4. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    Science.gov (United States)

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work was to reduce the sampling cost required for estimation of the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction checks were used as criteria. The results of Jack-Knife validation showed that 10 (25.0%) of the 40 LSS models based on regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model, and that 12, 8, 4 and 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical applications according to requirements.
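
A limited sampling strategy of this kind reduces to a multiple regression from a few concentrations to the full AUC. The sketch below illustrates the idea on synthetic one-compartment-style profiles; the pharmacokinetic constants and subject count are invented, and the real study fitted and validated its models on observed gliclazide data:

```python
# Sketch: a limited sampling strategy (LSS) as a regression predicting the
# full AUC from concentrations at 12, 8, 4 and 2 h (the key times above).
# The profiles are synthetic; constants are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_subj = 40
times = np.linspace(0.5, 60, 120)
ka = 0.8                                        # absorption rate (assumed)
ke = rng.uniform(0.04, 0.08, n_subj)            # per-subject elimination rate
amp = rng.uniform(8, 12, n_subj)                # per-subject scale
conc = amp[:, None] * (np.exp(-ke[:, None] * times) - np.exp(-ka * times))

# reference AUC0-60t by the trapezoid rule
auc = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(times), axis=1)

key = [2.0, 4.0, 8.0, 12.0]
idx = [np.argmin(np.abs(times - h)) for h in key]
model = LinearRegression().fit(conc[:, idx], auc)

ape = np.abs(model.predict(conc[:, idx]) - auc) / auc * 100
print(f"mean APE = {ape.mean():.1f}%")
```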

  5. Comparison of standard resampling methods for performance estimation of artificial neural network ensembles

    OpenAIRE

    Green, Michael; Ohlsson, Mattias

    2007-01-01

    Estimation of the generalization performance for classification within the medical applications domain is always an important task. In this study we focus on artificial neural network ensembles as the machine learning technique. We present a numerical comparison between five common resampling techniques: k-fold cross validation (CV), holdout, using three cutoffs, and bootstrap using five different data sets. The results show that CV together with holdout $0.25$ and $0.50$ are the best resampl...

  6. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained. The calculation for s-wave resonances has been done and a comparison with other work was carried out

  7. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  8. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi' an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
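
The core loop of the method described in the two records above (Latin hypercube sampling of the disturbance space, Box-Cox transformation of the simulated performance toward normality, and counting the fraction inside the acceptability region) can be sketched compactly. Plain LHS is used here rather than the orthogonal-array-modified variant, and the circuit "performance" is a toy function:

```python
# Sketch: Monte Carlo yield estimation with Latin hypercube sampling and a
# Box-Cox normalization of a toy performance function. Not the paper's
# OA-MLHS variant or circuit models; bounds and specs are assumptions.
import numpy as np
from scipy.stats import qmc, boxcox

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=2000)
x = qmc.scale(u, l_bounds=[-1, -1, -1], u_bounds=[1, 1, 1])  # disturbances

perf = 5.0 + x[:, 0] ** 2 + 0.5 * x[:, 1] - 0.2 * x[:, 0] * x[:, 2]  # toy response
perf_bc, lam = boxcox(perf)        # transform toward normality (perf > 0)

spec_low, spec_high = 4.2, 6.5     # acceptability region on the raw scale
yield_est = np.mean((perf >= spec_low) & (perf <= spec_high))
print(f"lambda = {lam:.2f}, estimated yield = {yield_est:.3f}")
```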

  9. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF estimator independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.

  10. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small samples, allowing researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
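
Under a normality assumption the quantity in question has a closed form: since SE(x̄) = σ/√n, the probability that the sample mean falls within k population standard deviations of the true mean is 2Φ(k√n) − 1. The sketch below computes this; the paper's exact small-sample treatment may differ:

```python
# Sketch: P(|x_bar - mu| <= k*sigma) = 2*Phi(k*sqrt(n)) - 1 under normality,
# since the standard error of the mean is sigma/sqrt(n). Illustrative only.
from math import sqrt
from scipy.stats import norm

def prob_within(k: float, n: int) -> float:
    """Probability the sample mean lies within k population SDs of the true mean."""
    return 2 * norm.cdf(k * sqrt(n)) - 1

for n in (5, 10, 30):
    print(n, round(prob_within(0.5, n), 3))  # within 0.5 SD of the true mean
```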

  11. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    Science.gov (United States)

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data

  12. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
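
Power analyses of this type usually rest on the standard two-arm formula n per arm = 2(z_α + z_β)²(σ/Δ)², with Δ a 25% reduction in the mean rate of change. A sketch with placeholder change statistics, not the ADNI estimates:

```python
# Sketch: two-arm sample size to detect a 25% slowing of the mean annualized
# change, n/arm = 2*(z_a + z_b)^2 * (sd/delta)^2. Rates and SD are placeholders.
from scipy.stats import norm

mean_change, sd_change = -2.0, 1.5   # % atrophy over 24 months (assumed)
delta = 0.25 * abs(mean_change)      # treatment slows change by 25%
z_a = norm.ppf(1 - 0.05 / 2)         # two-sided alpha = 0.05
z_b = norm.ppf(0.80)                 # power = 80%

n_per_arm = 2 * (z_a + z_b) ** 2 * (sd_change / delta) ** 2
print(f"n per arm = {n_per_arm:.0f}")
```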

  13. Qualitative performance comparison of reactivity estimation between the extended Kalman filter technique and the inverse point kinetic method

    International Nuclear Information System (INIS)

    Shimazu, Y.; Rooijen, W.F.G. van

    2014-01-01

    Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, the selection of which is described in this paper. • The EKF algorithm is preferred if the signal-to-noise ratio is low (low flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied for estimation of subcriticality with good noise filtering and accuracy. The Inverse Point Kinetic (IPK) method has also been widely used for reactivity estimation. The important parameters for EKF estimation are the process noise covariance and the measurement noise covariance, but their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant for the first-order delay filter, and the selection of this parameter is quite easy. Guidance is therefore needed on which method to select and how to choose the required parameters. From this point of view, a qualitative performance comparison is carried out

  14. A comparison of two sampling approaches for assessing the urban forest canopy cover from aerial photography.

    Science.gov (United States)

    Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker

    2016-01-01

    Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...

  15. An econometric method for estimating population parameters from non-random samples: An application to clinical case finding.

    Science.gov (United States)

    Burger, Rulof P; McLaren, Zoë M

    2017-09-01

    The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Estimation of the deoxynivalenol and moisture contents of bulk wheat grain samples by FT-NIR spectroscopy

    Science.gov (United States)

    Deoxynivalenol (DON) levels in harvested grain samples are used to evaluate the Fusarium head blight (FHB) resistance of wheat cultivars and breeding lines. Fourier transform near-infrared (FT-NIR) calibrations were developed to estimate the DON and moisture content (MC) of bulk wheat grain samples ...

  17. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  18. Asymptotic analysis of the role of spatial sampling for covariance parameter estimation of Gaussian processes

    International Nuclear Information System (INIS)

    Bachoc, Francois

    2014-01-01

    Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)

  19. Evaluating the reproducibility of environmental radioactivity monitoring data through replicate sample analysis

    International Nuclear Information System (INIS)

    Lindeken, C.L.; White, J.H.; Silver, W.J.

    1978-01-01

    At the Lawrence Livermore Laboratory, about 10% of the sampling effort in the environmental monitoring program represents replicate sample collection. Replication of field samples was initiated as part of the quality assurance program for environmental monitoring to determine the reproducibility of environmental measurements. In the laboratory these replicates are processed along with routine samples. As all components of variance are included in analysis of such field samples, comparison of the analytical data from replicate analyses provides a basis for estimating the overall reproducibility of the measurements. The replication study indicates that the reproducibility of environmental radioactivity monitoring data is subject to considerably more variability than is indicated by the accompanying counting errors. The data are also compared with analyses of duplicate aliquots from a well mixed sample or with duplicate aliquots of samples with known radionuclide content. These comparisons show that most of the variability is associated with the collection and preparation of the sample rather than with the analytical procedures
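
One simple way to turn such replicate pairs into an overall reproducibility figure is the pooled within-pair relative standard deviation. A sketch with toy concentrations, assuming duplicate field collections:

```python
# Sketch: overall reproducibility from duplicate field samples as the pooled
# within-pair relative standard deviation. For duplicates, the SD of a single
# measurement is sqrt(mean(d^2)/2). Values are toy data, not LLL results.
import numpy as np

a = np.array([12.1, 8.4, 15.0, 9.9, 11.2])  # first member of each pair
b = np.array([13.5, 7.9, 13.8, 10.6, 12.0])  # second member of each pair

rel_diff = (a - b) / ((a + b) / 2)           # relative within-pair difference
rsd = np.sqrt(np.mean(rel_diff ** 2) / 2)    # RSD of a single measurement
print(f"replicate RSD = {rsd:.1%}")
```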

  20. Assessing NIR & MIR Spectral Analysis as a Method for Soil C Estimation Across a Network of Sampling Sites

    Science.gov (United States)

    Spencer, S.; Ogle, S.; Borch, T.; Rock, B.

    2008-12-01

    Monitoring soil C stocks is critical to assess the impact of future climate and land use change on carbon sinks and sources in agricultural lands. A benchmark network for soil carbon monitoring of stock changes is being designed for US agricultural lands, with 3000-5000 sites anticipated and re-sampling on a 5- to 10-year basis. Approximately 1000 sites would be sampled per year, producing around 15,000 soil samples to be processed for total, organic, and inorganic carbon, as well as bulk density and nitrogen. Laboratory processing of soil samples is cost- and time-intensive, therefore we are testing the efficacy of using near-infrared (NIR) and mid-infrared (MIR) spectral methods for estimating soil carbon. As part of an initial implementation of national soil carbon monitoring, we collected over 1800 soil samples from 45 cropland sites in the mid-continental region of the U.S. Samples were processed using standard laboratory methods to determine the variables above. Carbon and nitrogen were determined by dry combustion, and inorganic carbon was estimated with an acid-pressure test. 600 samples are being scanned using a bench-top NIR reflectance spectrometer (30 g of 2 mm oven-dried soil and 30 g of 8 mm air-dried soil) and 500 samples using a MIR Fourier-Transform Infrared Spectrometer (FTIR) with a DRIFT reflectance accessory (0.2 g oven-dried ground soil). Lab-measured carbon will be compared to spectrally-estimated carbon contents using a Partial Least Squares (PLS) multivariate statistical approach. PLS attempts to develop a soil C predictive model that can then be used to estimate C in soil samples not processed in the lab. The spectral analysis of soil samples, either whole or partially processed, can potentially save both funding resources and time. This is particularly relevant for the implementation of a national monitoring network for soil carbon. This poster will discuss our methods, initial results and potential for using NIR and MIR spectral

  1. Advancing the Use of Passive Sampling in Risk Assessment and Management of Sediments Contaminated with Hydrophobic Organic Chemicals: Results of an International Ex Situ Passive Sampling Interlaboratory Comparison

    Science.gov (United States)

    This work presents the results of an international interlaboratory comparison on ex situ passive sampling in sediments. The main objectives were to map the state of the science in passively sampling sediments, identify sources of variability, provide recommendations and practica...

  2. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Science.gov (United States)

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...

  3. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma prior are used for failure rate estimation, and Binomial data with beta prior are used for failure probability per demand estimation. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well if not better than the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
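
The gamma-Poisson moment-matching step at the heart of such empirical Bayes methods can be sketched briefly. This shows only the basic prior-moment matching and posterior update, not PREB's full weighting scheme; the failure counts and observation times are toy values:

```python
# Sketch: gamma-Poisson empirical Bayes for failure rates via prior-moment
# matching, in the spirit of the PREB method above (the full PREB weighting
# is more elaborate). Failure counts and observation times are toy values.
import numpy as np

x = np.array([0, 1, 3, 2, 0, 5.0])  # failures per plant
T = np.array([4, 5, 6, 4, 3, 8.0])  # observation times (years)

rates = x / T
m, v = rates.mean(), rates.var(ddof=1)
# Between-plant variance: total variance minus the Poisson sampling part,
# since Var(x/T | lambda) = lambda/T is estimated by x/T^2 for each plant.
v_between = max(v - np.mean(x / T ** 2), 1e-12)

beta = m / v_between                   # gamma prior with mean alpha/beta = m
alpha = m * beta                       # and variance alpha/beta^2 = v_between

post_rates = (alpha + x) / (beta + T)  # posterior mean rate for each plant
print(np.round(post_rates, 3))
```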

  4. Yield estimation based on calculated comparisons to particle velocity data recorded at low stress

    International Nuclear Information System (INIS)

    Rambo, J.

    1993-01-01

    This paper deals with the problem of optimizing the yield estimation process when some of the material properties are known from geophysical measurements and others are inferred from in-situ dynamic measurements. The material models and 2-D simulations of the event are combined to determine the yield. Other methods of yield determination from peak particle velocity data have mostly been based on comparisons of nearby events in similar media at NTS. These methods are largely empirical and are subject to additional error when a new event has different properties than the population being used for a basis of comparison. The effect of material variations can be examined using LLNL's KDYNA computer code. The data from an NTS event provide an instructive example for simulation

  5. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to

  6. Efficiency comparisons of fish sampling gears for a lentic ecosystem health assessments in Korea

    Directory of Open Access Journals (Sweden)

    Jeong-Ho Han

    2016-12-01

    Full Text Available The key objective of this study was to analyze the sampling efficiency of various fish sampling gears for a lentic ecosystem health assessment. A fish survey for the lentic ecosystem health assessment model was sampled twice from 30 reservoirs during 2008–2012. During the study, fishes of 81 species comprising 53,792 individuals were sampled from the 30 reservoirs. A comparison of sampling gears showed that casting nets were the best sampling gear with high species richness (69 species), whereas minnow traps were the worst gear with low richness (16 species). Fish sampling efficiency, based on the number of individuals caught per unit effort, was best for fyke nets (28,028 individuals) and worst for minnow traps (352 individuals). When we compared trammel nets and kick nets versus fyke nets and casting nets, the former were useful in terms of the number of fish individuals but not in terms of the number of fish species.

  7. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used

  8. Advantage of multiple spot urine collections for estimating daily sodium excretion: comparison with two 24-h urine collections as reference.

    Science.gov (United States)

    Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi

    2016-02-01

    Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
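
The "simple mean method" is straightforward to express in code: average the sodium-to-creatinine ratios of the available spot samples and scale by predicted 24-h creatinine excretion. The creatinine prediction equation below is a generic placeholder, not the one used in the study:

```python
# Sketch: the "simple mean method" -- mean spot Na/Cr ratio times predicted
# 24-h creatinine excretion. The prediction equation and all input values
# are illustrative assumptions, not the study's equation or data.
import numpy as np

def predicted_cr_24h(weight_kg: float, age_yr: float, male: bool) -> float:
    """Rough 24-h urinary creatinine in mg/day (assumed, placeholder form)."""
    base = (28.2 - 0.172 * age_yr) if male else (21.9 - 0.115 * age_yr)
    return base * weight_kg

spot_na = np.array([120.0, 95.0, 140.0])   # mmol/L in three spot samples
spot_cr = np.array([110.0, 80.0, 150.0])   # mg/dL in the same samples

# mmol Na per mg creatinine (convert mg/dL to mg/L with the factor 10)
na_cr_ratio = np.mean(spot_na / (spot_cr * 10))

na_24h = na_cr_ratio * predicted_cr_24h(70, 45, male=True)  # mmol/day
print(f"estimated 24-h Na excretion = {na_24h:.0f} mmol/day")
```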

  9. Sampling for radionuclides and other trace substances

    International Nuclear Information System (INIS)

    Eberhardt, L.L.

    1976-01-01

    Various problems with the environment and an energy crisis have resulted in considerable emphasis on the analysis and understanding of natural systems. The present generation of ecological models suffers greatly from a lack of attention to use of accurate and efficient sampling methods in obtaining the data on which these models are based. Improving ecological sampling requires first of all that the objectives be clearly defined, since different schemes are required for sampling for totals, for changes over time and space, to determine hazards, or for estimating parameters in models. The frequency distributions of most ecological contaminants are not normal, but seem instead to follow a skewed distribution. Coefficients of variation appear to be relatively constant and typical values may range from 0.1 to 1.0 depending on the substance and circumstances. These typical values may be very useful in designing a sampling plan, either for fixed relative variance, or in terms of the sensitivity of a comparison. Several classes of sampling methods are available for particular kinds of objectives. The notion of optimal sampling for parameter estimates is new to ecology, but may possibly be adapted from work done in industrial experimentation to provide a rationale for sampling in time
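
The "typical coefficient of variation" idea translates directly into a sample-size rule: for a target relative standard error of the mean, n ≈ (CV / target)². A sketch over the 0.1 to 1.0 range quoted above:

```python
# Sketch: sample size for a fixed relative standard error of the mean,
# n = (CV / target_rel_SE)^2, using the "typical" CV range quoted above.
from math import ceil

def n_for_relative_se(cv: float, target_rel_se: float) -> int:
    return ceil((cv / target_rel_se) ** 2)

for cv in (0.1, 0.5, 1.0):                  # typical CVs for contaminants
    print(cv, n_for_relative_se(cv, 0.10))  # 10% relative standard error
```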

  10. Evaluation of NAA laboratory results in inter-comparison on determination of trace elements in food and environmental samples

    International Nuclear Information System (INIS)

    Diah Dwiana Lestiani; Syukria Kurniawati; Natalia Adventini

    2012-01-01

    An inter-comparison program is a good tool for improving quality and enhancing the accuracy and precision of analytical techniques. By participating in such a program, laboratories can demonstrate their capability and ensure the quality of the analysis results they generate. The Neutron Activation Analysis (NAA) laboratory at the National Nuclear Energy Agency of Indonesia (BATAN), Nuclear Technology Center for Materials and Radiometry (PTNBR), participated in inter-comparison tests organized by the NAA working group. Inter-comparison BATAN 2009 was the third inter-laboratory analysis test within that project. The participating laboratories were asked to analyze for trace elements using neutron activation analysis as the primary technique. Three materials representing foodstuff and environmental material samples were distributed to the participants. Samples were irradiated in the rabbit facility of the G.A. Siwabessy reactor with a neutron flux of ~10¹³ n·cm⁻²·s⁻¹ and counted with an HPGe detector by gamma spectrometry. Several trace elements in these samples were detected. Accuracy and precision were evaluated based on International Atomic Energy Agency (IAEA) criteria. In this paper the PTNBR NAA laboratory results are evaluated. (author)

  11. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...

  12. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    Science.gov (United States)

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to the other group when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of estimated individual networks inherent in existing methods such as the Fisher Z test, and the issue that JGMSS ignores between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, we apply these methods to both simulated and in vivo data. As a method aiming for group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.

  13. Comparison of Pre-Analytical FFPE Sample Preparation Methods and Their Impact on Massively Parallel Sequencing in Routine Diagnostics

    Science.gov (United States)

    Heydt, Carina; Fassunke, Jana; Künstlinger, Helen; Ihle, Michaela Angelika; König, Katharina; Heukamp, Lukas Carl; Schildhaus, Hans-Ulrich; Odenthal, Margarete; Büttner, Reinhard; Merkelbach-Bruse, Sabine

    2014-01-01

    Over the last years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially for those laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3–24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can be used for

  14. Comparison of pre-analytical FFPE sample preparation methods and their impact on massively parallel sequencing in routine diagnostics.

    Directory of Open Access Journals (Sweden)

    Carina Heydt

    Full Text Available Over the last years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially for those laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3-24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can

  15. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2005-12-01

    Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal. Thus, traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved to be very suitable for the optimization of highly non-linear problems with many variables, offering global search capability and robustness. These facts make them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm was several times faster than the other two.
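
For illustration, a bare-bones real-coded genetic algorithm (arithmetic crossover, Gaussian mutation, truncation selection with elitism) estimating two parameters of a toy exponential-growth model by least squares. This shows the generic idea only; it is not the authors' modified or multi-population variants, nor their E. coli fermentation model:

```python
# Sketch: a minimal real-coded GA fitting [rate, initial biomass] of a toy
# exponential-growth model. Operators and constants are generic assumptions.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 50)
true = np.array([0.4, 2.0])                  # growth rate, initial biomass
data = true[1] * np.exp(true[0] * t)

def fitness(pop):
    pred = pop[:, 1:2] * np.exp(pop[:, 0:1] * t)   # model for each candidate
    return -np.mean((pred - data) ** 2, axis=1)    # higher is better

pop = rng.uniform([0, 0], [1, 5], size=(60, 2))
for _ in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-30:]]             # truncation selection
    i, j = rng.integers(0, 30, (2, 60))            # pick parent pairs
    kids = 0.5 * (parents[i] + parents[j])         # arithmetic crossover
    kids += rng.normal(0, 0.05, (60, 2))           # Gaussian mutation
    pop = np.vstack([parents[-1], kids[:-1]])      # keep the best (elitism)

print(pop[np.argmax(fitness(pop))])                # should land near [0.4, 2.0]
```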

  16. A new fractionator principle with varying sampling fractions: exemplified by estimation of synapse number using electron microscopy

    DEFF Research Database (Denmark)

    Witgen, Brent Marvin; Grady, M. Sean; Nyengaard, Jens Randel

    2006-01-01

    The quantification of ultrastructure has been permanently improved by the application of new stereological principles. Both precision and efficiency have been enhanced. Here we report for the first time a fractionator method that can be applied at the electron microscopy level. This new design...... the total object number using section sampling fractions based on the average thickness of sections of variable thicknesses. As an alternative, this approach estimates the correct particle section sampling probability based on an estimator of the Horvitz-Thompson type, resulting in a theoretically more...

  17. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...

  18. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    2016-01-01

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independen...... of the methods often (but not always) varies with the features of the data generating process....

  19. Comparison of leach results from field and laboratory prepared samples

    International Nuclear Information System (INIS)

    Oblath, S.B.; Langton, C.A.

    1985-01-01

    The leach behavior of saltstone prepared in the laboratory agrees well with that of samples mixed in the field using the Littleford mixer. Leach rates of nitrates and cesium from the current reference formulation saltstone were compared. The laboratory samples were prepared using simulated salt solution; those in the field used Tank 50 decontaminated supernate. For both nitrate and cesium, the field and laboratory samples showed nearly identical leach rates for the first 30 to 50 days. For the remaining period of the test, the field samples showed higher leach rates, with the maximum difference being less than a factor of three. Ruthenium and antimony were present in the Tank 50 supernate in known amounts. Antimony-125 was observed in the leachate and a fractional leach rate was calculated to be at least a factor of ten less than that of 137Cs. No 106Ru was observed in the leachate, and the release rate was not calculated. However, based on the detection limits for the analysis, the ruthenium leach rate must also be at least a factor of ten less than that of cesium. These data are the first measurements of the leach rates of Ru and Sb from saltstone. The nitrate leach rates for these samples were 5 × 10⁻⁵ grams of nitrate per square cm per day, after 100 days for the laboratory samples and after 200 days for the field samples. These values are consistent with the previously measured leach rates for reference formulation saltstone. The relative standard deviation in the leach rate is about 15% for the field samples, which were all produced from one batch of saltstone, and about 35% for the laboratory samples, which came from different batches. These are the first recorded estimates of the error in leach rates for saltstone

  20. Comparison of mobile and stationary spore-sampling techniques for estimating virulence frequencies in aerial barley powdery mildew populations

    DEFF Research Database (Denmark)

    Hovmøller, M.S.; Munk, L.; Østergård, Hanne

    1995-01-01

    Gene frequencies in samples of aerial populations of barley powdery mildew (Erysiphe graminis f.sp. hordei), which were collected in adjacent barley areas and in successive periods of time, were compared using mobile and stationary sampling techniques. Stationary samples were collected from trap ...

  1. Comparison of Two Methods for Estimation of Work Limitation Scores from Health Status Measures

    DEFF Research Database (Denmark)

    Anatchkova, M; Fang, H; Kini, N

    2015-01-01

    Objectives To compare two methods for estimation of Work Limitations Questionnaire scores (WLQ, 8 items) from the Role Physical (RP, 4 items) and Role Emotional scales (RE, 3 items) of the SF-36 Health survey. These measures assess limitations in role performance attributed to health (emotional...... future data collection strategies. Methods We used data from two independent cross-sectional panel samples (Sample1, n=1382, 51% female, 72% Caucasian, 49% with preselected chronic conditions, 15% with fair/poor health; Sample2, n=301, 45% female, 90% Caucasian, 47% with preselected chronic conditions......, 21% with fair/poor health). Method 1 used previously developed and validated IRT based calibration tables. Method 2 used regression models to develop aggregate imputation weights as described in the literature. We evaluated the agreement of observed and estimated WLQ scale scores from the two methods...

  2. A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes

    Science.gov (United States)

    Bundy, Brian; Krischer, Jeffrey P.

    2016-01-01

    The area under the C-peptide curve following a 2-hour mixed meal tolerance test, from 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, was modelled from baseline to 12 months after enrollment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448

  3. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields a statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, with the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing designs and applies only when both the diagnostic test and the gold standard results are dichotomous.
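
The formula underlying such nomograms is presumably the standard (Buderer-type) one: n = z²·Se(1 − Se)/d² diseased subjects, inflated by prevalence to a total sample size. A sketch under that assumption:

```python
# Sketch: sample size to estimate sensitivity Se with absolute precision d,
# n_diseased = z^2 * Se*(1-Se) / d^2, inflated by prevalence because only
# diseased subjects inform sensitivity (assumed Buderer-type calculation).
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se: float, d: float, prevalence: float,
                      conf: float = 0.95) -> int:
    z = norm.ppf(1 - (1 - conf) / 2)
    n_diseased = z ** 2 * se * (1 - se) / d ** 2
    return ceil(n_diseased / prevalence)

print(n_for_sensitivity(se=0.90, d=0.05, prevalence=0.20))
# 90% and 99% confidence levels scale roughly by the 0.70 and 1.75
# multipliers quoted in the record above.
```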

  4. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from the calibration standards' matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements will be reviewed in this report and recommendations made for improving measurement uncertainties

  5. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-UO2 configuration.

    Energy Technology Data Exchange (ETDEWEB)

    Klann, R. T.; Perret, G.; Nuclear Engineering Division

    2007-10-03

    An initial series of calculations of the reactivity worth of the OSMOSE samples in the MINERVE reactor with the R1-UO2 core configuration was completed. The reactor model was generated using the REBUS code developed at Argonne National Laboratory. The calculations are based on the fabrication specifications, so they are considered preliminary until sampling and analysis of the fabricated samples have been completed. The estimates indicate a reactivity effect ranging from -22 pcm to +25 pcm relative to the natural-uranium sample.

  6. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    Science.gov (United States)

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

    Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation against enhanced Kalman Filter techniques: the performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer and of a Linear, an Extended, and an Unscented Kalman Filter are assessed. The Takagi-Sugeno observer and the enhanced Kalman Filter techniques are compared on reduced-order models of a reference wind turbine with different levels of modelling detail. The objective is a systematic comparison under different design assumptions and requirements, together with a numerical evaluation of the reconstruction quality of the wind speed. The benefit of wind speed estimation within wind turbine control is illustrated by a feedforward loop employing the reconstructed wind speed.
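
    For readers unfamiliar with the estimation principle being benchmarked, a minimal scalar Kalman filter is sketched below. The random-walk wind model and all numbers are hypothetical placeholders; the paper's observers operate on full reduced-order turbine models:

    ```python
    import numpy as np

    # Minimal scalar Kalman filter sketch: track an effective wind speed from a
    # noisy surrogate measurement. Model and numbers are hypothetical.
    rng = np.random.default_rng(0)
    q, r = 0.05, 1.0                 # process / measurement noise variances
    x_hat, p = 8.0, 1.0              # initial estimate and its variance

    true_v = 8.0
    for k in range(100):
        true_v += rng.normal(0, np.sqrt(q))        # wind evolves as a random walk
        z = true_v + rng.normal(0, np.sqrt(r))     # noisy measurement
        p = p + q                                  # predict
        K = p / (p + r)                            # Kalman gain
        x_hat = x_hat + K * (z - x_hat)            # update
        p = (1 - K) * p

    print(f"final estimate {x_hat:.2f} vs true {true_v:.2f}")
    ```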

  7. Regional cerebral blood flow measurements by a noninvasive microsphere method using 123I-IMP. Comparison with the modified fractional uptake method and the continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Nakano, Seigo; Matsuda, Hiroshi; Tanizaki, Hiroshi; Ogawa, Masafumi; Miyazaki, Yoshiharu; Yonekura, Yoshiharu

    1998-01-01

    A noninvasive microsphere method using N-isopropyl-p-[123I]iodoamphetamine (123I-IMP), developed by Yonekura et al., was performed in 10 patients with neurological diseases to quantify regional cerebral blood flow (rCBF). Regional CBF values obtained by this method were compared with rCBF values simultaneously estimated both by the modified fractional uptake (FU) method using cardiac output, developed by Miyazaki et al., and by the conventional method with continuous arterial blood sampling. For the comparison, we designated the factor that converts raw SPECT voxel counts to rCBF values as the CBF factor. A highly significant correlation (r=0.962, p<0.001) was obtained between the CBF factors of the present method and those of the continuous arterial blood sampling method; the CBF factors of the present method were on average only 2.7% higher. Significant correlations (r=0.811 and r=0.798, p<0.001) were also found between the CBF factors of the modified FU method (thresholds for estimating total brain SPECT counts of 10% and 30%, respectively) and those of the continuous arterial blood sampling method. However, the CBF factors of the modified FU method were on average 31.4% and 62.3% higher (thresholds of 10% and 30%, respectively) than those of the continuous arterial blood sampling method. In conclusion, this newly developed method for rCBF measurement was considered useful for routine clinical studies without any blood sampling. (author)

  8. Estimating the tritium input to groundwater from wine samples: Groundwater and direct run-off contribution to Central European surface waters

    International Nuclear Information System (INIS)

    Roether, W.

    1967-01-01

    A model is derived which allows a quantitative evaluation of wine tritium data. It is shown that the tritium content of a wine sample is not determined exclusively by water taken up by the roots, but is also influenced to a large extent by direct exchange with atmospheric moisture; the soil-water fraction normally amounts to no more than 40%. Wine is thus a sample partly of atmospheric moisture at ground level and partly of soil moisture, integrated over a period of about three weeks before vintage. The tritium content of two sets of wine samples originating from two selected sites in the Federal Republic of Germany and dating back to 1949 is reported. For the period since records of the tritium content of rain in Europe became available, comparisons of wine tritium with reported rain tritium activities support the model outlined. The first distinguishable influence of bomb tritium shows up in the 1953 wine, whilst no detectable response to Castle tritium is found in 1954. By comparison with recorded rain activities at Ottawa, Canada, it is concluded that Castle influenced the tritium fall-out in Central Europe much less than it did at Ottawa. For the period before 1955, the tritium activity of the annual groundwater recharge in Central Europe, including pre-thermonuclear recharge, is estimated from the wine data. An estimate of the total assimilation of pre-thermonuclear tritium into the ocean at 50 degrees N is also given, which points to a value of 1-1.5 atoms/cm²·s. It is shown that in further uses of pre-thermonuclear wines, the possibility that samples have been contaminated by penetration of thermonuclear tritium through the bottle seals must be considered. The estimates of the tritium activities of groundwater recharge are based on the fact that in our climate the main contribution to groundwater comes from autumn and winter precipitation. Because of this seasonal correlation, the groundwater recharge is much lower in tritium than the annual mean of precipitation.

  9. A method for estimating the relative degree of saponification of xanthophyll sources and feedstuffs.

    Science.gov (United States)

    Fletcher, D L

    2006-05-01

    Saponification of xanthophyll esters in various feed sources has been shown to improve pigmentation efficiency in broiler skin and egg yolks. Three trials were conducted to evaluate a rapid liquid chromatography procedure for estimating the relative degree of xanthophyll saponification, using samples of yellow corn, corn gluten meal, alfalfa, and 6 commercially available marigold meal concentrates. In each trial, samples were extracted using a modification of the 1984 Association of Official Analytical Chemists hot saponification procedure, with and without the addition of KOH. The percent saponification of the original sample was estimated by dividing the nonsaponified extraction value by the saponified extraction value. The percent saponified xanthophylls for each product was: yellow corn, 101; corn gluten meal, 78; alfalfa, 97.9; and marigold concentrates A through F, 99.8, 4.6, 99.0, 95.6, 96.8, and 6.6, respectively. These results indicate that a modification of the 1984 Association of Official Analytical Chemists procedure combined with liquid column chromatography can be used to quickly verify saponification and to estimate the relative degree of saponification of an unknown xanthophyll source.

  10. An improved parameter estimation and comparison for soft tissue constitutive models containing an exponential function.

    Science.gov (United States)

    Aggarwal, Ankush

    2017-08-01

    Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many constitutive laws for soft tissues contain an exponential function. In this work, we analyze the properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that, as a consequence of the exponential function, there are lines of high covariance in the elastic parameter space. As a result, widely varying mechanical parameters can define the tissue stiffness yet produce similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space which significantly improve the convergence of parameter estimation and its robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based on this new insight, we also propose a transformed parameter space that allows rational parameter comparison and avoids misleading conclusions regarding soft tissue mechanics.
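
    A minimal sketch of the covariance issue and the log-parameter remedy, using a hypothetical one-dimensional law sigma = c1*(exp(c2*eps) - 1) rather than the paper's full constitutive models:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical 1-D exponential law, illustrating the strong c1-c2
    # covariance and a fit in a transformed (log) parameter space.
    def stress(eps, c1, c2):
        return c1 * np.expm1(c2 * eps)

    def stress_log(eps, log_c1, log_c2):        # fit log-parameters instead
        return stress(eps, np.exp(log_c1), np.exp(log_c2))

    eps = np.linspace(0, 0.3, 40)
    rng = np.random.default_rng(1)
    data = stress(eps, 10.0, 12.0) + rng.normal(0, 0.5, eps.size)  # noisy "data"

    p_lin, cov_lin = curve_fit(stress, eps, data, p0=[1.0, 5.0], maxfev=5000)
    p_log, _ = curve_fit(stress_log, eps, data, p0=[0.0, 1.0], maxfev=5000)

    corr = cov_lin[0, 1] / np.sqrt(cov_lin[0, 0] * cov_lin[1, 1])
    print("linear-space fit:", p_lin, "parameter correlation:", corr)
    print("log-space fit:   ", np.exp(p_log))
    ```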

  11. Further observations on comparison of immunization coverage by lot quality assurance sampling and 30 cluster sampling.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-06-01

    Lot Quality Assurance Sampling (LQAS) and the standard EPI methodology (30-cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although the results of the two sampling methods were consistent with each other, a large gap was evident between reported coverage (in children as well as mothers) and the survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times as much time as the EPI survey. LQAS, therefore, is not a good substitute for the current EPI methodology for evaluating immunization coverage in a large administrative area. However, LQAS has potential as a method for monitoring health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.
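
    A minimal sketch of the LQAS decision logic: each lot is judged from a small sample with an accept/reject rule whose operating characteristics follow the binomial distribution. The sample size and threshold below are hypothetical, not those of the study:

    ```python
    from scipy.stats import binom

    # LQAS sketch: sample n children per lot (sub-center) and "accept" the lot
    # if at most d are unimmunized. n and d are hypothetical; real designs
    # choose them to balance provider and consumer risks.
    n, d = 19, 3

    for true_coverage in (0.50, 0.70, 0.85, 0.95):
        p_unimmunized = 1 - true_coverage
        p_accept = binom.cdf(d, n, p_unimmunized)   # P(at most d failures)
        print(f"coverage {true_coverage:.0%}: P(accept lot) = {p_accept:.2f}")
    ```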

  12. Tritium inventory differences: I. Sampling and U-getter pump holdup

    International Nuclear Information System (INIS)

    Ellefson, R.E.; Gill, J.T.

    1986-01-01

    Inventory differences (ID) in tritium material balance accounts (MBA) can arise from unmeasured transfers out of the process or unmeasured holdup in the system. Small but cumulatively significant quantities of tritium can leave the MBA through routine capillary sampling of process gas. A predictor model for estimating the quantity of tritium leaving the MBA by sampling has been developed and implemented. The model calculates the gas transferred per sample; using the tritium concentration in the process and the number of samples, the quantity of tritium transferred is predicted. The model is verified by PVT measurement of the process transfer from multiple samplings. Comparison of predicted sample transfers with IDs from several MBAs reveals that sampling typically accounts for 50% of unmeasured transfers in regularly sampled processes.
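
    A minimal sketch of the predictor logic described above, assuming ideal-gas behaviour; all numbers are hypothetical placeholders rather than values from the paper:

    ```python
    # Tritium leaving the MBA by sampling, estimated as
    # (moles withdrawn per sample) x (tritium fraction) x (number of samples).
    R = 8.314          # J/(mol K)
    V = 5.0e-6         # hypothetical volume drawn per capillary sample, m^3
    P = 1.0e5          # sampling pressure, Pa
    T = 298.0          # K
    x_T2 = 0.90        # tritium (T2) fraction of the process gas
    n_samples = 250    # samples taken over the accounting period

    moles_per_sample = P * V / (R * T)                     # ideal-gas law
    grams_T = moles_per_sample * x_T2 * n_samples * 6.032  # ~6.032 g per mol T2
    print(f"predicted tritium removed by sampling: {grams_T:.3f} g")
    ```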

  13. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Using the proportionator, the desired number of fields is sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which from a stereological (integral geometric) point of view is in practice always a biased relation. The efficiency of the proportionator does, however, depend directly on this relation being positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noise in possibly realistic ranges. In all cases examined, the proportionator …
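
    The core idea can be illustrated with a probability-proportional-to-size (PPS) sketch: even an imperfect automatic weight yields an unbiased total estimate once counts are divided by their sampling probabilities (a Hansen-Hurwitz-type estimator). The weights and counts below are synthetic:

    ```python
    import numpy as np

    # PPS sketch of the proportionator idea: sample fields with probability
    # proportional to an arbitrary (possibly biased) weight, then divide the
    # observed counts by their sampling probabilities.
    rng = np.random.default_rng(42)
    counts = rng.poisson(5, size=1000).astype(float)    # true structure per field
    weight = counts + rng.uniform(0, 10, size=1000)     # imperfect automatic weight

    p = weight / weight.sum()                           # sampling probabilities
    n = 50
    idx = rng.choice(1000, size=n, replace=True, p=p)   # PPS sampling (with repl.)

    estimate = np.mean(counts[idx] / p[idx])            # Hansen-Hurwitz estimator
    print(f"estimated total: {estimate:.0f}, true total: {counts.sum():.0f}")
    ```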

  14. Assessing representativeness of sampling methods for reaching men who have sex with men: a direct comparison of results obtained from convenience and probability samples.

    Science.gov (United States)

    Schwarcz, Sandra; Spindler, Hilary; Scheer, Susan; Valleroy, Linda; Lansky, Amy

    2007-07-01

    Convenience samples are used to determine HIV-related behaviors among men who have sex with men (MSM) without measuring the extent to which the results are representative of the broader MSM population. We compared results from a cross-sectional survey of MSM recruited from gay bars between June and October 2001 with those from a random-digit-dial telephone survey conducted between June 2002 and January 2003. The men in the probability sample were older, better educated, and had higher incomes than men in the convenience sample; the convenience sample enrolled more employed men and more men of color. Substance use around the time of sex was higher in the convenience sample, but other sexual behaviors were similar. HIV testing was common among men in both samples. Periodic validation, through comparison of data collected by different sampling methods, may be useful when relying on survey data for program and policy development.

  15. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  16. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being over- or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in the longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970
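
    A minimal sketch of the kind of two-sample power calculation quoted above (detecting a 25% reduction in the mean rate of change); the atrophy-rate numbers are hypothetical, not the ADNI values:

    ```python
    import math
    from scipy.stats import norm

    def n_per_arm(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
        """Two-sided, two-sample size formula commonly used in such power
        analyses: n = 2 * (sd * (z_{1-a/2} + z_power) / delta)^2."""
        delta = reduction * mean_change
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (sd_change * z / delta) ** 2)

    # Hypothetical atrophy-rate numbers (percent/year), not the paper's values:
    print(n_per_arm(mean_change=2.0, sd_change=1.2))
    ```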

  17. Elemental distribution and sample integrity comparison of freeze-dried and frozen-hydrated biological tissue samples with nuclear microprobe

    Energy Technology Data Exchange (ETDEWEB)

    Vavpetič, P., E-mail: primoz.vavpetic@ijs.si [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Vogel-Mikuš, K. [Biotechnical Faculty, Department of Biology, University of Ljubljana, Jamnikarjeva 101, SI-1000 Ljubljana (Slovenia); Jeromel, L. [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Ogrinc Potočnik, N. [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); FOM-Institute AMOLF, Science Park 104, 1098 XG Amsterdam (Netherlands); Pongrac, P. [Biotechnical Faculty, Department of Biology, University of Ljubljana, Jamnikarjeva 101, SI-1000 Ljubljana (Slovenia); Department of Plant Physiology, University of Bayreuth, Universitätstr. 30, 95447 Bayreuth (Germany); Drobne, D.; Pipan Tkalec, Ž.; Novak, S.; Kos, M.; Koren, Š.; Regvar, M. [Biotechnical Faculty, Department of Biology, University of Ljubljana, Jamnikarjeva 101, SI-1000 Ljubljana (Slovenia); Pelicon, P. [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia)

    2015-04-01

    The analysis of biological samples in the frozen-hydrated state with the micro-PIXE technique at the Jožef Stefan Institute (JSI) nuclear microprobe has matured to a point that enables us to measure and examine frozen tissue samples routinely as a standard research method. A cryotome-cut slice of a frozen-hydrated biological sample is mounted between two thin foils and positioned on the sample holder. The temperature of the cold stage in the measuring chamber is kept below 130 K throughout the insertion of the samples and the proton beam exposure. The matrix of frozen-hydrated tissue consists mostly of ice. Sample deterioration during proton beam exposure is monitored throughout the experiment, as both Elastic Backscattering Spectrometry (EBS) and Scanning Transmission Ion Microscopy (STIM) in on-off axis geometry are recorded, together with the events in two PIXE detectors and the backscattered ions from the chopper, in a single list-mode file. The aim of this experiment was to determine the differences and similarities between two biological sample preparation techniques for micro-PIXE analysis, namely freeze-drying and frozen-hydrated preparation, in order to evaluate any improvement in elemental localisation offered by the latter. In the presented work, a standard micro-PIXE configuration for tissue mapping at JSI was used, with five detection systems operating in parallel, a proton beam cross section of 1.0 × 1.0 μm², and a beam current of 100 pA. The comparison of the resulting elemental distributions measured in biological tissue prepared in the frozen-hydrated and freeze-dried states revealed differences in the distribution of particular elements at the cellular level, due to morphology alteration in particular tissue compartments induced either by water removal in the lyophilisation process or by unsatisfactory preparation of the samples for cutting and mounting during the shock-freezing phase of sample preparation.

  18. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes the improvement and comparison of analytical methods for the simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate and iron hydroxide co-precipitation, were compared […] it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of the different pre-concentration techniques proposed previously in the literature, the main challenge behind the relevant method development is pointed out to be the release …

  19. Comparison of emissions from on-road sources using a mobile laboratory under various driving and operational sampling modes

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    Full Text Available Mobile sources produce a significant fraction of the total anthropogenic emissions burden in large cities and have harmful effects on air quality at multiple spatial scales. Mobile emissions are intrinsically difficult to estimate due to the large number of parameters affecting the emissions variability within and across vehicle types. The MCMA-2003 campaign in Mexico City has shown the utility of using a mobile laboratory to sample and characterize specific classes of motor vehicles in order to better quantify their emissions characteristics as a function of their driving cycles. The technique clearly identifies "high emitter" vehicles via individual exhaust plumes, and also provides fleet-average emission rates. We applied this technique in Mexicali during the Border Ozone Reduction and Air Quality Improvement Program (BORAQIP) for the Mexicali-Imperial Valley in 2005. We analyze the variability of measured emission ratios for emitted NOx, CO, specific VOCs, NH3, and some primary fine particle components and properties by deploying a mobile laboratory in roadside stationary sampling, chase, and fleet-average operational sampling modes. The measurements reflect various driving modes characteristic of the urban fleets. The observed variability of all measured gas and particle emission ratios is greater for the chase and roadside stationary sampling than for fleet-average measurements. The fleet-average sampling mode captured the effects of traffic conditions on the measured on-road emission ratios, allowing the use of fuel-based emission ratios to assess the validity of traditional "bottom-up" emissions inventories. Using the measured on-road emission ratios, we estimate CO and NOx mobile emissions of 175±62 and 10.4±1.3 metric tons/day, respectively, for the gasoline vehicle fleet in Mexicali. Comparisons with similar on-road emissions data from Mexico City indicated that fleet-average NO emission ratios were …

  20. A comparison of two measures of HIV diversity in multi-assay algorithms for HIV incidence estimation.

    Directory of Open Access Journals (Sweden)

    Matthew M Cousins

    Full Text Available Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Both MAAs included two serologic assays (the LAg-Avidity assay and the BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were <1 year. Both MAAs provided cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation.
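
    A minimal sketch of the MAA classification rule described above: a sample is MAA positive only if it meets the cut-off for every assay. All threshold values below are hypothetical placeholders, not the validated cut-offs:

    ```python
    # A sample is "MAA positive" (consistent with recent infection) only if it
    # meets the criterion for every assay in the algorithm.
    def maa_positive(lag_odn, biorad_ai, viral_load, diversity,
                     lag_cut=1.5, biorad_cut=40.0, vl_cut=400.0, div_cut=0.5):
        return (lag_odn < lag_cut and          # LAg-Avidity below cut-off
                biorad_ai < biorad_cut and     # BioRad avidity index below cut-off
                viral_load > vl_cut and        # detectable viral load
                diversity < div_cut)           # low HIV diversity (HRM or ambiguity)

    print(maa_positive(1.1, 30.0, 5_000.0, 0.2))   # -> True  (recent profile)
    print(maa_positive(3.0, 80.0, 50_000.0, 1.4))  # -> False (chronic profile)
    ```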

  1. A Comparison of Two Measures of HIV Diversity in Multi-Assay Algorithms for HIV Incidence Estimation

    Science.gov (United States)

    Cousins, Matthew M.; Konikoff, Jacob; Sabin, Devin; Khaki, Leila; Longosz, Andrew F.; Laeyendecker, Oliver; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Kobin, Beryl A.; Wheeler, Darrell; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Brookmeyer, Ron; Eshleman, Susan H.

    2014-01-01

    Background: Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Methods: Both MAAs included two serologic assays (the LAg-Avidity assay and the BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. Results: The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were <1 year. Both MAAs provided cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. Conclusions: MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation. PMID:24968135

  2. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors, for which the time to tumor onset is not directly observable. The tool is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying the sample size per group, the number of sacrifices, the number of animals sacrificed at each interval (if any), and the scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background; it communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
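
    A greatly simplified Monte Carlo power sketch in the spirit of the tool: simulate tumor occurrence across dose groups and count how often a trend test rejects. The crude regression-based trend test and all numbers are illustrative; the real simulator models occult tumors and sacrifice schedules:

    ```python
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(7)
    doses = np.array([0.0, 0.5, 1.0, 2.0])
    p_tumor = 0.05 + 0.06 * doses          # hypothetical dose-response
    n_per_group, n_sim, alpha = 50, 2000, 0.05

    rejections = 0
    for _ in range(n_sim):
        x = np.repeat(doses, n_per_group)
        y = rng.binomial(1, np.repeat(p_tumor, n_per_group))
        if linregress(x, y).pvalue < alpha:     # crude dose-trend test
            rejections += 1

    print(f"estimated power: {rejections / n_sim:.2f}")
    ```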

  3. Comparison of PIXE and XRF analysis of airborne particulate matter samples collected on Teflon and quartz fibre filters

    Science.gov (United States)

    Chiari, M.; Yubero, E.; Calzolai, G.; Lucarelli, F.; Crespo, J.; Galindo, N.; Nicolás, J. F.; Giannoni, M.; Nava, S.

    2018-02-01

    Within the framework of research projects focusing on the sampling and analysis of airborne particulate matter, Particle Induced X-ray Emission (PIXE) and Energy Dispersive X-ray Fluorescence (ED-XRF) techniques are routinely used in many laboratories throughout the world to determine the elemental concentrations of particulate matter samples. In this work, an inter-laboratory comparison of the results obtained from analysing several samples (collected on both Teflon and quartz fibre filters) using both techniques is presented. The samples were analysed by PIXE (in Florence, at the 3 MV Tandetron accelerator of the INFN-LABEC laboratory) and by XRF (in Elche, using the ARL Quant'X EDXRF spectrometer with conditions optimized for specific groups of elements). The results from the two sets of measurements are in good agreement for all the analysed samples, thus validating the use of the ARL Quant'X EDXRF spectrometer and the selected measurement protocol for the analysis of aerosol samples. Moreover, thanks to the comparison of PIXE and XRF results on Teflon and quartz fibre filters, possible self-absorption effects due to the penetration of aerosol particles inside the quartz fibre filters were quantified.

  4. Comparison of different methods to estimate BMR in adolescent student population.

    Science.gov (United States)

    Patil, Suchitra R; Bharadwaj, Jyoti

    2011-01-01

    There is growing clinical emphasis on the measurement of BMR and energy expenditure in clinical and research investigations such as obesity, exercise, cancer, under-nutrition, trauma, and infections. Hence, there is motivation to calculate basal metabolic rate using standard equations. The objective of the present work is to identify an appropriate equation, in the Indian environment, for the estimation of calorie needs and basal metabolic rate using the measured height, weight, age, and skinfold parameters of an individual. Basal metabolic rates of adolescent male and female populations aged 17-20 years were estimated using the equations proposed by the FAO, the ICMR, Cunningham, Harris-Benedict, Fredrix, and Mifflin. Calorie needs were calculated using the factorial approach, which multiplies the basal metabolic rate by an appropriate physical activity factor. Basal metabolic rates estimated by the FAO, Cunningham, Harris-Benedict, Fredrix, and Mifflin equations were reduced by 5%. These reduced basal metabolic rates and calorie needs were compared with those obtained by Cunningham's equation, which is considered the most accurate. Comparison of Cunningham's results with those of the Harris-Benedict, FAO, Fredrix, and Mifflin equations after the 5% reduction, and with the ICMR equation without reduction, indicates that the Harris-Benedict, Fredrix, Mifflin, and FAO equations can be used for male and female adolescent populations in the Indian environment. In conclusion, the Harris-Benedict equation is an appropriate equation for both male and female adolescent populations in the Indian environment.
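
    For reference, two of the equations compared above are widely published in closed form; the sketch below uses the commonly cited coefficients (the paper additionally reduces some estimates by 5% before comparison), with a hypothetical subject:

    ```python
    # Two widely cited BMR equations (kcal/day; weight kg, height cm, age years).
    def harris_benedict(weight, height, age, male=True):
        if male:
            return 66.473 + 13.7516 * weight + 5.0033 * height - 6.755 * age
        return 655.0955 + 9.5634 * weight + 1.8496 * height - 4.6756 * age

    def mifflin_st_jeor(weight, height, age, male=True):
        return 10 * weight + 6.25 * height - 5 * age + (5 if male else -161)

    w, h, a = 60.0, 170.0, 18.0   # hypothetical adolescent male
    for name, bmr in [("Harris-Benedict", harris_benedict(w, h, a)),
                      ("Mifflin-St Jeor", mifflin_st_jeor(w, h, a))]:
        # Factorial approach: calorie need = BMR x physical activity factor.
        print(f"{name}: {bmr:.0f} kcal/day, need (PAL 1.6): {1.6 * bmr:.0f}")
    ```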

  5. Dispersal kernel estimation: A comparison of empirical and modelled particle dispersion in a coastal marine system

    Science.gov (United States)

    Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.

    2013-11-01

    Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs; a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic
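
    A minimal sketch of how an e-folding scale can be recovered from counts versus distance, assuming a one-dimensional exponential kernel K(x) ∝ exp(−x/λ); the data below are synthetic:

    ```python
    import numpy as np

    # Estimate the e-folding scale lambda of a 1-D dispersal kernel by
    # regressing log(counts) on distance; synthetic data.
    rng = np.random.default_rng(3)
    distance_km = np.linspace(0.5, 15, 30)
    true_lambda = 6.0
    counts = 500 * np.exp(-distance_km / true_lambda) * rng.lognormal(0, 0.2, 30)

    slope, intercept = np.polyfit(distance_km, np.log(counts), 1)
    print(f"estimated e-folding scale: {-1 / slope:.2f} km (true {true_lambda})")
    ```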

  6. Comparison of estimated and background subsidence rates in Texas-Louisiana geopressured geothermal areas

    Energy Technology Data Exchange (ETDEWEB)

    Lee, L.M.; Clayton, M.; Everingham, J.; Harding, R.C.; Massa, A.

    1982-06-01

    A comparison of background and potential geopressured geothermal development-related subsidence rates is given. Estimated potential geopressured-related rates at six prospects are presented. The effect of subsidence on the Texas-Louisiana Gulf Coast is examined including the various associated ground movements and the possible effects of these ground movements on surficial processes. The relationships between ecosystems and subsidence, including the capability of geologic and biologic systems to adapt to subsidence, are analyzed. The actual potential for environmental impact caused by potential geopressured-related subsidence at each of four prospects is addressed. (MHR)

  7. Comparisons of PGA and INAA in the analyses of meteorite samples

    International Nuclear Information System (INIS)

    Wee Boon Siong; Ebihara, M.; Abdul Khalik Wood

    2010-01-01

    Prompt gamma-ray analysis (PGA) and instrumental neutron activation analysis (INAA) are suitable methods for multi-elemental determination in various samples. The two methods are complementary: PGA is capable of analyzing most major and minor elements in rock samples, whereas INAA is superior for determining minor and trace elements. Both PGA and INAA are essential for the study of rare samples such as meteorites, because they are non-destructive and relatively free from contamination. Samples used for PGA can be reused for INAA, which helps reduce sample usage. This project utilizes the PGA and INAA techniques for a comparative study and applies them to meteorites. In this study, 11 meteorite samples received from the Meteorite Working Group of NASA were analyzed, with the Allende meteorite powder included as a quality control material. Results from PGA and INAA for Allende showed good agreement with literature values, confirming the reliability of the two methods. The elements Al, Ca, Mg, Mn, Na, and Ti were determined by both methods and their results compared. Comparison of the PGA and INAA data using linear regression analysis showed correlation coefficients r² > 0.90 for Al, Ca, Mn, and Ti, 0.85 for Mg, and 0.38 for Na. The PGA results for Na, obtained using the 472 keV line, were less accurate due to interference from the broad B peak; the Na results from INAA are therefore preferred. For the other elements (Al, Ca, Mg, Mn, and Ti), the PGA and INAA results can be used as cross-references for consistency. The PGA and INAA techniques have been applied to meteorite samples, and the results are comparable to literature values compiled from previously analyzed meteorites. In summary, the PGA and INAA methods give reasonably good agreement and are indispensable in the study of meteorites. (author)

  8. Comparison of three methods for the estimation of cross-shock electric potential using Cluster data

    Directory of Open Access Journals (Sweden)

    A. P. Dimmock

    2011-05-01

    Full Text Available Cluster four-point measurements provide a comprehensive dataset for the separation of temporal and spatial variations, which is crucial for the calculation of the cross-shock electrostatic potential from electric field measurements. While Cluster is probably the best suited among present and past spacecraft missions to provide such a separation at the terrestrial bow shock, it is far from ideal for a study of the cross-shock potential, since only 2 components of the electric field are measured, in the spacecraft spin plane. The present paper is devoted to the comparison of 3 different techniques that can be used to estimate the potential with this limitation. The first technique takes into account only the projection of the measured components onto the shock normal. The second uses the ideal MHD condition E·B = 0 to estimate the third electric field component. The last method is based on the structure of the electric field in the Normal Incidence Frame (NIF), in which only the potential component along the shock normal and the motional electric field exist. All 3 approaches are used to estimate the potential for a single crossing of the terrestrial bow shock that took place on 31 March 2001. Surprisingly, all three methods lead to the same order of magnitude for the cross-shock potential. It is argued that the third method should lead to more reliable results. The effect of inaccuracy in the shock normal is investigated for this particular shock crossing. The resulting electrostatic potential appears too high in comparison with theoretical results for low Mach number shocks, which shows the variability of the potential, interpreted in the frame of the non-stationary shock model.
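
    A minimal numerical sketch of the first technique, projecting the measured field onto the shock normal and integrating along the crossing, φ = −∫E·n dx; the field profile and geometry below are synthetic:

    ```python
    import numpy as np

    n_hat = np.array([0.9, 0.3, 0.316])          # synthetic shock normal
    n_hat /= np.linalg.norm(n_hat)               # normalize to a unit vector

    x_km = np.linspace(0.0, 200.0, 400)          # distance along the normal, km
    E = np.zeros((400, 3))                       # mV/m; synthetic ramp field
    E[:, 0] = 0.5 * np.exp(-((x_km - 100) / 20.0) ** 2)

    E_n = E @ n_hat                              # normal component of E
    phi = -np.trapz(E_n, x_km)                   # mV/m x km = V
    print(f"cross-shock potential: {phi:.1f} V")
    ```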

  9. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al. (Lifetime Data Anal., 15, 2009, 241), which derived some key results based on a second-order von Mises expansion. However, results concerning the large-sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large-sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error …

  10. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both the temporal sampling (1, 2, and 3 s sampling intervals) and the tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow, by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This …
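
    A minimal sketch of the qualitative slope-based method being benchmarked, in one common formulation (maximum tissue upslope normalized by peak arterial enhancement); the time-attenuation curves below are synthetic and the scaling is simplified:

    ```python
    import numpy as np

    # MBF_slope ~ (max upslope of the tissue curve) / (peak arterial enhancement).
    t = np.linspace(0, 30, 61)                           # s
    aif = 300 * (t / 8) ** 3 * np.exp(3 * (1 - t / 8))   # arterial input, HU
    tissue = 25 * (t / 12) ** 2 * np.exp(2 * (1 - t / 12))

    max_upslope = np.gradient(tissue, t).max()           # HU/s
    mbf_index = max_upslope / aif.max() * 60             # per minute (scaling simplified)
    print(f"slope-method index: {mbf_index:.3f} min^-1")
    ```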

  11. Estimating evolutionary rates using time-structured data: a general comparison of phylogenetic methods.

    Science.gov (United States)

    Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W

    2016-11-15

    In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing.
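
    A minimal sketch of root-to-tip regression, the simplest of the three approaches: regressing each tip's root-to-tip distance on its sampling date gives the rate (slope) and a root date (x-intercept). The data below are synthetic:

    ```python
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(11)
    dates = rng.uniform(1990, 2015, 60)                  # sampling years
    true_rate = 1.5e-3                                   # subs/site/year
    dist = true_rate * (dates - 1980) + rng.normal(0, 5e-3, 60)

    fit = linregress(dates, dist)                        # slope = substitution rate
    tmrca = -fit.intercept / fit.slope                   # x-intercept dates the root
    print(f"rate: {fit.slope:.2e} subs/site/year, tMRCA: {tmrca:.0f}")
    ```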

  12. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available now than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS&L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR, and requested assistance in developing a statistically based process for deriving them. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks.

  13. Comparison of generalized estimating equations and quadratic inference functions using data from the National Longitudinal Survey of Children and Youth (NLSCY database

    Directory of Open Access Journals (Sweden)

    Browne Dillon

    2008-05-01

    Full Text Available Background: The generalized estimating equations (GEE) technique is often used in longitudinal data modeling, where investigators are interested in population-averaged effects of covariates on responses of interest. GEE involves specifying a model relating covariates to outcomes and a plausible correlation structure between responses at different time periods. While GEE parameter estimates are consistent irrespective of the true underlying correlation structure, the method has some limitations, including challenges with model selection due to the lack of absolute goodness-of-fit tests to aid comparisons among several plausible models. The quadratic inference functions (QIF) method extends the capabilities of GEE while also addressing some of its limitations. Methods: We conducted a comparative study of GEE and QIF via an illustrative example using data from the National Longitudinal Survey of Children and Youth (NLSCY) database. The NLSCY dataset consists of long-term, population-based survey data collected since 1994 and is designed to evaluate the determinants of developmental outcomes in Canadian children. We modeled the relationship between hyperactivity-inattention and gender, age, family functioning, maternal depression symptoms, household income adequacy, maternal immigration status, and maternal educational level using GEE and QIF. The bases for comparison were: (1) ease of model selection; (2) sensitivity of results to different working correlation matrices; and (3) efficiency of parameter estimates. Results: The sample included 795,858 respondents (50.3% male; 12% immigrant; 6% from dysfunctional families). The QIF analysis reveals that male gender (odds ratio [OR] = 1.73; 95% confidence interval [CI] = 1.10 to 2.71), family dysfunction (OR = 2.84, 95% CI 1.58 to 5.11), and maternal depression (OR = 2.49, 95% CI 1.60 to 2.60) are significantly associated with higher odds of hyperactivity-inattention. The results remained robust …
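
    A minimal GEE sketch with an exchangeable working correlation, using the statsmodels formula API; the data frame is simulated and the variable names are hypothetical stand-ins for the NLSCY measures (QIF is not illustrated, as its availability varies across packages and versions):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated longitudinal data: 200 children, 3 waves each.
    rng = np.random.default_rng(5)
    df = pd.DataFrame({
        "child": np.repeat(np.arange(200), 3),
        "hyper": rng.binomial(1, 0.15, 600),                 # outcome indicator
        "male":  np.repeat(rng.binomial(1, 0.5, 200), 3),
        "age":   np.tile([4, 6, 8], 200),
    })

    # Population-averaged logistic model with exchangeable working correlation.
    model = smf.gee("hyper ~ male + age", groups="child", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary().tables[1])
    ```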

  14. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, for widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between the family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences; 0.49 for replicability of ALFF for within-subject EOEC differences). Although the R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity, and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%) but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for the selection of multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018.

  15. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics.

    Science.gov (United States)

    Madi, Mahmoud K; Karameh, Fadi N

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. Recent advents of the Cubature Kalman filter (CKF) have extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques therefore carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potential (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate …
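
    For intuition, the CKF's spherical-radial rule propagates 2n equally weighted cubature points x ± √n·S·eᵢ, where P = SSᵀ; below is a minimal sketch of the point generation with a moment check:

    ```python
    import numpy as np

    def cubature_points(x, P):
        """2n equally weighted cubature points around mean x with covariance P."""
        n = x.size
        S = np.linalg.cholesky(P)                # P = S S^T
        offsets = np.sqrt(n) * S                 # columns are the scaled axes
        return np.hstack([x[:, None] + offsets, x[:, None] - offsets])

    x = np.array([1.0, -2.0])
    P = np.array([[2.0, 0.5], [0.5, 1.0]])
    pts = cubature_points(x, P)
    print(pts.mean(axis=1))                      # recovers x
    print(np.cov(pts, bias=True))                # recovers P
    ```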

  17. A model-based approach to sample size estimation in recent onset type 1 diabetes.

    Science.gov (United States)

    Bundy, Brian N; Krischer, Jeffrey P

    2016-11-01

    The area under the curve (AUC) of C-peptide following a 2-h mixed meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled from baseline to 12 months after enrolment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials.

  18. Convenience Sampling of Children Presenting to Hospital-Based Outpatient Clinics to Estimate Childhood Obesity Levels in Local Surroundings.

    Science.gov (United States)

    Gilliland, Jason; Clark, Andrew F; Kobrzynski, Marta; Filler, Guido

    2015-07-01

    Childhood obesity is a critical public health matter associated with numerous pediatric comorbidities. Local-level data are required to monitor obesity and to help administer prevention efforts when and where they are most needed. We hypothesized that samples of children visiting hospital clinics could provide representative local population estimates of childhood obesity using data from 2007 to 2013. Such data might provide more accurate, timely, and cost-effective obesity estimates than national surveys. Results revealed that our hospital-based sample could not serve as a population surrogate. Further research is needed to confirm this finding.

  19. Large sample NAA facility and methodology development

    International Nuclear Information System (INIS)

    Roth, C.; Gugiu, D.; Barbos, D.; Datcu, A.; Aioanei, L.; Dobrea, D.; Taroiu, I. E.; Bucsa, A.; Ghinescu, A.

    2013-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) facility has been developed at the TRIGA Annular Core Pulsed Reactor (ACPR) operated by the Institute for Nuclear Research in Pitesti, Romania. The central irradiation cavity of the ACPR core can accommodate a large irradiation device. The ACPR neutron flux characteristics are well known, and spectrum adjustment techniques have been successfully applied to enhance the thermal component of the neutron flux in the central irradiation cavity. An analysis methodology was developed using the MCNP code in order to estimate counting efficiency and correction factors for the major perturbing phenomena. Test experiments, comparisons with classical instrumental neutron activation analysis (INAA) methods, and an international inter-comparison exercise have been performed to validate the new methodology. (authors)

  20. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  1. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using a polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) is the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, comparisons between the estimated distribution function for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.
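
    The paper's final comparison step can be illustrated with a two-sample Kolmogorov–Smirnov test from SciPy; the normal population and the 50/200/1000 sample sizes below are hypothetical stand-ins for the simulated accuracy estimates and the 5–100 km perimeter lengths.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      population = rng.normal(loc=2.0, scale=0.5, size=100_000)  # population of accuracy values

      for size in (50, 200, 1000):          # growing sample sizes
          sample = rng.choice(population, size=size, replace=False)
          res = stats.ks_2samp(sample, population)
          print(f"n={size:5d}  KS D={res.statistic:.3f}  p={res.pvalue:.3f}")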

  2. Chorionic villus sampling and amniocentesis.

    Science.gov (United States)

    Brambati, Bruno; Tului, Lucia

    2005-04-01

    The advantages and disadvantages of common invasive methods for prenatal diagnosis are presented in light of new investigations. Several aspects of first-trimester chorionic villus sampling and mid-trimester amniocentesis remain controversial, especially fetal loss rate, feto-maternal complications, and the extension of both sampling methods to less traditional gestational ages (early amniocentesis, late chorionic villus sampling), all of which complicate genetic counseling. A recent randomized trial involving early amniocentesis and late chorionic villus sampling has confirmed previous studies, leading to the unquestionable conclusion that transabdominal chorionic villus sampling is safer. The old dispute over whether limb reduction defects are caused by chorionic villus sampling gains new vigor, with a paper suggesting that this technique has distinctive teratogenic effects. The large experience involving maternal and fetal complications following mid-trimester amniocentesis allows a better estimate of risk for comparison with chorionic villus sampling. Transabdominal chorionic villus sampling, which appears to be the gold standard sampling method for genetic investigations between 10 and 15 completed weeks, permits rapid diagnosis in high-risk cases detected by first-trimester screening of aneuploidies. Sampling efficiency and karyotyping reliability are as high as in mid-trimester amniocentesis with fewer complications, provided the operator has the required training, skill and experience.

  3. HOW TO ESTIMATE THE AMOUNT OF IMPORTANT CHARACTERISTICS MISSING IN A CONSUMERS SAMPLE BY USING BAYESIAN ESTIMATORS

    Directory of Open Access Journals (Sweden)

    Sueli A. Mingoti

    2001-06-01

    Consumer surveys are conducted very often by many companies, with the main objective of obtaining information about the opinions consumers have about a specific prototype, product, or service. In many situations the goal is to identify the characteristics that are considered important by the consumers when taking the decision of buying or using the products or services. When the survey is performed, some characteristics that are present in the consumer population might not be reported by the consumers in the observed sample. Therefore, some important characteristics of the product, according to the consumers' opinions, could be missing in the observed sample. The main objective of this paper is to show how the number of characteristics missing in the observed sample can be easily estimated by using Bayesian estimators proposed by Mingoti & Meeden (1992) and Mingoti (1999). An example of application related to an automobile survey is presented.
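
    The Bayesian estimators of Mingoti & Meeden are not reproduced here; as a hedged illustration of the same "unseen characteristics" problem, the classical nonparametric Chao1 lower bound estimates the number of missing classes from how many characteristics were mentioned exactly once or twice.

      from collections import Counter

      def chao1_unseen(mentions):
          # Chao1 lower bound on classes present in the population but
          # absent from the sample: f1^2 / (2*f2), with a bias-corrected
          # form when no characteristic was mentioned exactly twice.
          counts = Counter(mentions)
          f1 = sum(1 for c in counts.values() if c == 1)  # singletons
          f2 = sum(1 for c in counts.values() if c == 2)  # doubletons
          return f1 * (f1 - 1) / 2.0 if f2 == 0 else f1 * f1 / (2.0 * f2)

      # toy survey: each entry is one consumer mentioning one characteristic
      print(chao1_unseen(["price", "price", "safety", "mileage", "design", "design", "comfort"]))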

  4. Comparison of the usefulness of selected formulas for GFR estimation in patients with diagnosed chronic kidney disease

    Directory of Open Access Journals (Sweden)

    Paweł Wróbel

    2018-03-01

    Conclusions: CKD-EPI and abbreviated MDRD formulas have a similar usefulness in GFR value estimation in patients with diagnosed chronic kidney disease. Lower eGFR values achieved using abbreviated MDRD formula and CKD-EPI equation in comparison with Bjornsson’s formula may result in an increased number of patients diagnosed with CKD.

  5. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
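
    The estimator in this line of work takes the form of an eigenvalue truncation of the sample covariance. The sketch below caps the condition number with the crude rule u = lambda_max / kappa and clips the spectrum from below; the paper instead solves a one-dimensional problem for the optimal truncation level, so treat this as an illustration only.

      import numpy as np

      def cond_regularize(S, kappa):
          # Eigen-decompose the sample covariance and raise the small
          # eigenvalues so that cond(S_reg) <= kappa (crude cutoff choice).
          w, V = np.linalg.eigh(S)          # S = V diag(w) V.T
          u = w.max() / kappa               # lower truncation level
          return (V * np.clip(w, u, None)) @ V.T

      rng = np.random.default_rng(0)
      X = rng.normal(size=(10, 8))          # "large p, small n": 10 samples, 8 dims
      S = np.cov(X, rowvar=False)
      print(np.linalg.cond(S), "->", np.linalg.cond(cond_regularize(S, kappa=30.0)))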

  6. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    Science.gov (United States)

    2014-06-17

    [Figure residue removed; panels showed the Wigner distribution, the L-Wigner distribution, and their auto-correlation functions.] Although bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with

  7. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    Directory of Open Access Journals (Sweden)

    Manan Gupta

    Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates.
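
    The POPAN and Robust Design estimators evaluated here are elaborate, but the independence-of-captures assumption they inherit is already visible in the simplest two-occasion estimator, sketched below with Chapman's bias-corrected form (illustrative values).

      def chapman_estimate(n1, n2, m2):
          # Chapman's bias-corrected Lincoln-Petersen estimator: n1 animals
          # marked on occasion 1, n2 captured on occasion 2, m2 recaptured.
          # Assumes captures are independent across individuals - precisely
          # the assumption violated by fission-fusion social species.
          return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

      print(chapman_estimate(n1=120, n2=100, m2=25))  # ~469 individuals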

  8. Estimating implied rates of discount in healthcare decision-making.

    Science.gov (United States)

    West, R R; McNabb, R; Thompson, A G H; Sheldon, T A; Grimley Evans, J

    2003-01-01

    To consider whether implied rates of discounting from the perspectives of individual and society differ, and whether implied rates of discounting in health differ from those implied in choices involving finance or "goods". The study comprised first a review of economics, health economics and social science literature and then an empirical estimate of implied rates of discounting in four fields: personal financial, personal health, public financial and public health, in representative samples of the public and of healthcare professionals. Samples were drawn in the former county and health authority district of South Glamorgan, Wales. The public sample was a representative random sample of men and women, aged over 18 years and drawn from electoral registers. The health professional sample was drawn at random with the cooperation of professional leads to include doctors, nurses, professions allied to medicine, public health, planners and administrators. The literature review revealed few empirical studies in representative samples of the population, few direct comparisons of public with private decision-making and few direct comparisons of health with financial discounting. Implied rates of discounting varied widely and studies suggested that discount rates are higher the smaller the value of the outcome and the shorter the period considered. The relationship between implied discount rates and personal attributes was mixed, possibly reflecting the limited nature of the samples. Although there were few direct comparisons, some studies found that individuals apply different rates of discount to social compared with private comparisons and health compared with financial. The present study also found a wide range of implied discount rates, with little systematic effect of age, gender, educational level or long-term illness. There was evidence, in both samples, that people chose a lower rate of discount in comparisons made on behalf of society than in comparisons made for

  9. Intra- and inter-basin mercury comparisons: Importance of basin scale and time-weighted methylmercury estimates

    International Nuclear Information System (INIS)

    Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Burns, Douglas A.; Button, Daniel T.; Riva-Murray, Karen

    2013-01-01

    To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered. - Highlights: ► National scale mercury assessments integrate small scale study results. ► Basin scale differences and representativeness of fluvial mercury samples are concerns. ► Wetland area, not basin size, predicts inter-basin methylmercury variability. ► Time-weighted methylmercury estimates improve the prediction of mercury in basin fish. - Fluvial methylmercury concentration correlates with wetland area not basin scale and time-weighted estimates better predict basin top predator mercury than discrete sample estimates.

  10. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    Science.gov (United States)

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods but different geologic models), between results from structural and stratigraphic assessment units in the North Sea used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered), (2) the population of fields being estimated; that is, the entire parent distribution or the undiscovered resource distribution, (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth, (5) deterministic or probabilistic models, (6) data requirements, and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods

  11. Comparison of active and passive sampling strategies for the monitoring of pesticide contamination in streams

    Science.gov (United States)

    Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina

    2014-05-01

    The monitoring of water bodies regarding organic contaminants, and the determination of reliable estimates of concentrations, are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast, easy, and requires little logistical and analytical needs in case of low-frequency sampling campaigns. However, this technique lacks representativeness for streams with high variations of contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution, as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic sampling can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and the maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain, at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace level detection and smoothed integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field applications. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate.
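
    In the linear-uptake regime the calibration reduces to a one-line calculation: the time-weighted average concentration is the accumulated mass divided by the product of the calibrated sampling rate and the deployment time. The values below are purely illustrative.

      def twa_concentration(mass_ng, sampling_rate_l_per_day, days):
          # C_TWA = m / (Rs * t): mass accumulated in the sampler over the
          # deployment, divided by the lab-calibrated sampling rate times time.
          return mass_ng / (sampling_rate_l_per_day * days)

      print(twa_concentration(mass_ng=42.0, sampling_rate_l_per_day=0.12, days=14))  # 25.0 ng/L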

  12. Weighted Moments Estimators of the Parameters for the Extreme Value Distribution Based on the Multiply Type II Censored Sample

    Directory of Open Access Journals (Sweden)

    Jong-Wuu Wu

    2013-01-01

    We propose weighted moments estimators (WMEs) of the location and scale parameters for the extreme value distribution based on the multiply type II censored sample. Simulated mean squared errors (MSEs) of the best linear unbiased estimator (BLUE) and exact MSEs of the WMEs are compared to study the behavior of the different estimation methods. The results show the best estimator among the WMEs and the BLUE under different combinations of censoring schemes.
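
    For orientation, the uncensored baseline that weighted-moment schemes refine is the ordinary method-of-moments fit of the Gumbel (extreme value) distribution, where the scale follows from the sample standard deviation and the location from the mean via the Euler–Mascheroni constant; the sketch below uses synthetic data.

      import math
      import numpy as np

      def gumbel_moments(sample):
          # Gumbel mean = loc + gamma*scale and sd = pi*scale/sqrt(6),
          # so invert those two relations from the sample moments.
          xbar, s = np.mean(sample), np.std(sample, ddof=1)
          scale = s * math.sqrt(6) / math.pi
          loc = xbar - 0.5772156649 * scale   # Euler-Mascheroni constant
          return loc, scale

      rng = np.random.default_rng(7)
      print(gumbel_moments(rng.gumbel(loc=3.0, scale=2.0, size=5000)))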

  13. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
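
    Of the regression models the site offers, Deming regression (with error-variance ratio lambda) has a convenient closed form, sketched below; this is an independent illustration, not the site's own implementation.

      import numpy as np

      def deming(x, y, lam=1.0):
          # Closed-form Deming regression: lam is the ratio of the y- to
          # x-measurement error variances (lam = 1 gives the orthogonal fit).
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          slope = ((syy - lam * sxx) + np.sqrt((syy - lam * sxx) ** 2
                   + 4 * lam * sxy ** 2)) / (2 * sxy)
          return slope, y.mean() - slope * x.mean()

      # two hypothetical analyzers measuring the same six specimens
      print(deming([1.0, 2.1, 3.0, 4.2, 5.1, 6.0], [1.2, 2.0, 3.3, 4.1, 5.4, 6.2]))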

  14. Comparisons of methods for generating conditional Poisson samples and Sampford samples

    OpenAIRE

    Grafström, Anton

    2005-01-01

    Methods for conditional Poisson sampling (CP-sampling) and Sampford sampling are compared and the focus is on the efficiency of the methods. The efficiency is investigated by simulation in different sampling situations. It was of interest to compare methods since new methods for both CP-sampling and Sampford sampling were introduced by Bondesson, Traat & Lundqvist in 2004. The new methods are acceptance rejection methods that use the efficient Pareto sampling method. They are found to be ...

  15. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom)]; Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (when, in expectation, each comparison set of a given cardinality occurs the same number of times), for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
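
    The Bradley–Terry special case mentioned above admits a compact maximum-likelihood fit via Zermelo's minorization–maximization updates; the win matrix below is toy data.

      import numpy as np

      def bradley_terry(wins, iters=200):
          # Zermelo/MM updates for Bradley-Terry strengths, where
          # wins[i, j] counts how often item i beat item j.
          games = wins + wins.T
          p = np.ones(wins.shape[0])
          for _ in range(iters):
              denom = games / (p[:, None] + p[None, :])
              p = wins.sum(axis=1) / denom.sum(axis=1)
              p /= p.sum()              # fix the arbitrary scale
          return p

      wins = np.array([[0., 7., 9.],
                       [3., 0., 6.],
                       [1., 4., 0.]])
      print(bradley_terry(wins))        # estimated strengths, normalized to sum 1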

  16. Comparison of four sampling methods for the detection of Salmonella in broiler litter.

    Science.gov (United States)

    Buhr, R J; Richardson, L J; Cason, J A; Cox, N A; Fairchild, B D

    2007-01-01

    Experiments were conducted to compare litter sampling methods for the detection of Salmonella. In experiment 1, chicks were challenged orally with a suspension of naladixic acid-resistant Salmonella and wing banded, and additional nonchallenged chicks were placed into each of 2 challenge pens. Nonchallenged chicks were placed into each nonchallenge pen located adjacent to the challenge pens. At 7, 8, 10, and 11 wk of age the litter was sampled using 4 methods: fecal droppings, litter grab, drag swab, and sock. For the challenge pens, Salmonella-positive samples were detected in 3 of 16 fecal samples, 6 of 16 litter grab samples, 7 of 16 drag swabs samples, and 7 of 16 sock samples. Samples from the nonchallenge pens were Salmonella positive in 2 of 16 litter grab samples, 9 of 16 drag swab samples, and 9 of 16 sock samples. In experiment 2, chicks were challenged with Salmonella, and the litter in the challenge and adjacent nonchallenge pens were sampled at 4, 6, and 8 wk of age with broilers remaining in all pens. For the challenge pens, Salmonella was detected in 10 of 36 fecal samples, 20 of 36 litter grab samples, 14 of 36 drag swab samples, and 26 of 36 sock samples. Samples from the adjacent nonchallenge pens were positive for Salmonella in 6 of 36 fecal droppings samples, 4 of 36 litter grab samples, 7 of 36 drag swab samples, and 19 of 36 sock samples. Sock samples had the highest rates of Salmonella detection. In experiment 3, the litter from a Salmonella-challenged flock was sampled at 7, 8, and 9 wk by socks and drag swabs. In addition, comparisons with drag swabs that were stepped on during sampling were made. Both socks (24 of 36, 67%) and drag swabs that were stepped on (25 of 36, 69%) showed significantly more Salmonella-positive samples than the traditional drag swab method (16 of 36, 44%). Drag swabs that were stepped on had comparable Salmonella detection level to that for socks. Litter sampling methods that incorporate stepping on the sample

  17. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    Directory of Open Access Journals (Sweden)

    Githure John I

    2009-09-01

    Background: Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods: Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction

  18. REIMEP-22 inter-laboratory comparison: "U Age Dating - Determination of the production date of a uranium certified test sample"

    OpenAIRE

    VENCHIARUTTI CELIA; VARGA ZSOLT; RICHTER Stephan; JAKOPIC Rozle; MAYER Klaus; AREGBE Yetunde

    2015-01-01

    The REIMEP-22 inter-laboratory comparison aimed at determining the production date of a uranium certified test sample (i.e. the last chemical separation date of the material). Participants in REIMEP-22 on "U Age Dating - Determination of the production date of a uranium certified test sample" received one low-enriched 20 mg uranium sample for mass spectrometry measurements and/or one 50 mg uranium sample for α-spectrometry measurements, with an undisclosed value for the production date. They ...

  19. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    Science.gov (United States)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
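
    The decomposition used here (total sampling bias split into a temporal part, from the orbit/swath, and an instrumental part, from failed retrievals) can be mimicked on any reference field. The toy reanalysis below is synthetic: an idealized diurnal temperature cycle sampled at two fixed overpass hours, with retrievals dropped preferentially in cold, cloudy scenes.

      import numpy as np

      rng = np.random.default_rng(3)
      hours = np.arange(24 * 365)                            # one year, hourly
      truth = 288 + 5 * np.sin(2 * np.pi * hours / 24)       # diurnal cycle (K)
      truth = truth + rng.normal(0, 0.5, truth.size)         # weather noise

      overpass = (hours % 24 == 1) | (hours % 24 == 13)      # sun-synchronous sampling
      fail_prob = np.where(truth < truth.mean(), 0.5, 0.1)   # clouds kill cold scenes
      retrieved = overpass & (rng.random(truth.size) >= fail_prob)

      temporal_bias = truth[overpass].mean() - truth.mean()
      instrumental_bias = truth[retrieved].mean() - truth[overpass].mean()
      print(f"temporal {temporal_bias:+.2f} K, instrumental {instrumental_bias:+.2f} K")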

  20. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

    In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by such LUT, and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated from the measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

  1. Methods of protective measures analysis in agriculture on contaminated lands: estimation of effectiveness, intervention levels and comparison of different countermeasures

    International Nuclear Information System (INIS)

    Yatsalo, B.I.; Aleksakhin, R.M.

    1997-01-01

    Methodological aspects of the analysis of protective measures in agriculture in the long-term period of liquidation of the consequences of a nuclear accident are considered. Examples of the estimation of countermeasure effectiveness with the use of cost-benefit analysis, as well as methods for the estimation of intervention levels and examples of a comparison of protective measures using several criteria of effectiveness, are discussed.

  2. Seasonal and temporal variation in release of antibiotics in hospital wastewater: estimation using continuous and grab sampling.

    Science.gov (United States)

    Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J

    2013-01-01

    The presence of antibiotics in the environment and their subsequent impact on resistance development has raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions was divided into three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter, followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed lower concentrations of antibiotics in wastewater than grab sampling. It is suggested that continuous sampling should be the method of choice, as grab sampling gives erroneous results, it being indicative of the quantities of antibiotics present in wastewater only at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluoroquinolones, metronidazole and sulfamethoxazole respectively, might be getting released into the

  3. Sampling, feasibility, and priors in Bayesian estimation

    OpenAIRE

    Chorin, Alexandre J.; Lu, Fei; Miller, Robert N.; Morzfeld, Matthias; Tu, Xuemin

    2015-01-01

    Importance sampling algorithms are discussed in detail, with an emphasis on implicit sampling, and applied to data assimilation via particle filters. Implicit sampling makes it possible to use the data to find high-probability samples at relatively low cost, making the assimilation more efficient. A new analysis of the feasibility of data assimilation is presented, showing in detail why feasibility depends on the Frobenius norm of the covariance matrix of the noise and not on the number of variables.

  4. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  6. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCFs) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)

  7. Tourism sector, Travel agencies, and Transport Suppliers: Comparison of Different Estimators in the Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Kovačić Nataša

    2015-11-01

    The paper addresses the effect of external integration (EI) with transport suppliers on the efficiency of travel agencies in tourism sector supply chains. The main aim is the comparison of different estimation methods used in structural equation modeling (SEM), applied to discover possible relationships between EI and efficiency. The latter is calculated by means of data envelopment analysis (DEA). While designing the structural equation model, exploratory and confirmatory factor analyses are also used as preliminary statistical procedures. For the estimation of the parameters of the SEM model, three different methods are explained, analyzed, and compared: the maximum likelihood (ML) method, the Bayesian Markov chain Monte Carlo (BMCMC) method, and the unweighted least squares (ULS) method. The study reveals that all estimation methods yield comparable parameter estimates. The results also give evidence of good model fit performance. Besides, the research confirms that stronger external integration with transport providers leads to increased efficiency of travel agencies, which might be a very interesting finding for operational management.

  8. Effect of sampling schedule on pharmacokinetic parameter estimates of promethazine in astronauts

    Science.gov (United States)

    Boyd, Jason L.; Wang, Zuwei; Putcha, Lakshmi

    2005-08-01

    Six astronauts on the Shuttle Transport System (STS) participated in an investigation of the pharmacokinetics of promethazine (PMZ), a medication used for the treatment of space motion sickness (SMS) during flight. Each crewmember completed the protocol once during flight and repeated it thirty days after return to Earth. Saliva samples were collected at scheduled times for 72 h after PMZ administration; more frequent samples were collected on the ground than during flight owing to schedule constraints in flight. PMZ concentrations in saliva were determined by a liquid chromatographic/mass spectrometric (LC-MS) assay, and pharmacokinetic parameters (PKPs) were calculated using the actual flight and ground-based data sets and using a time-matched sampling schedule on the ground to that during flight. Volume of distribution (Vc) and clearance (Cls) decreased during flight compared to the time-matched ground data set; however, Cls and Vc estimates were higher for all subjects when partial ground data sets were used for analysis. Area under the curve (AUC) normalized to the administered dose was similar for flight and partial ground data; however, AUC was significantly lower using time-matched sampling compared with the full data set on the ground. Half-life (t1/2) was longest during flight, shorter with the matched sampling schedule on the ground, and shortest when the complete ground data set was used. Maximum concentration (Cmax) and time to Cmax (tmax), parameters of drug absorption, showed a similar trend: lowest and longest, respectively, during flight; lower with time-matched ground data; and highest and shortest with the full ground data set.
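
    The absorption parameters named above fall out of a basic non-compartmental analysis of a concentration-time profile; the sketch below computes Cmax, tmax, and a trapezoidal AUC from toy saliva data (values are illustrative, not from the study).

      import numpy as np

      t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0, 48.0, 72.0])  # h
      c = np.array([0.0, 4.1, 6.8, 5.9, 4.2, 2.6, 0.9, 0.3, 0.1])     # ng/mL

      cmax = c.max()
      tmax = t[c.argmax()]
      auc = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()   # linear trapezoidal AUC(0-72 h)
      print(f"Cmax={cmax} ng/mL at tmax={tmax} h, AUC={auc:.1f} ng*h/mL")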

   9. Comparison of internal radiation doses estimated by MIRD and voxel techniques for a "family" of phantoms

    International Nuclear Information System (INIS)

    Smith, T.

    2000-01-01

    The aim of this study was to use a new system of realistic voxel phantoms, based on computed tomography scanning of humans, to assess its ability to specify the internal dosimetry of selected human examples in comparison with the well-established MIRD system of mathematical anthropomorphic phantoms. Differences in specific absorbed fractions between the two systems were inferred by using organ dose estimates as the end point for comparison. A "family" of voxel phantoms, comprising an 8-week-old baby, a 7-year-old child and a 38-year-old adult, was used and a close match to these was made by interpolating between organ doses estimated for pairs of the series of six MIRD phantoms. Using both systems, doses were calculated for up to 22 organs for four radiopharmaceuticals with widely differing biodistribution and emission characteristics (technetium-99m pertechnetate, administered without thyroid blocking; iodine-123 iodide; indium-111 antimyosin; oxygen-15 water). Organ dose estimates under the MIRD system were derived using the software MIRDOSE 3, which incorporates specific absorbed fraction (SAF) values for the MIRD phantom series. The voxel system uses software based on the same dose calculation formula in conjunction with SAF values determined by Monte Carlo analysis at the GSF of the three voxel phantoms. Effective doses were also compared. Substantial differences in organ weights were observed between the two systems, 18% differing by more than a factor of 2. Out of a total of 238 organ dose comparisons, 5% differed by more than a factor of 2 between the systems; these included some doses to walls of the GI tract, a significant result in relation to their high tissue weighting factors. Some of the largest differences in dose were associated with organs of lower significance in terms of radiosensitivity (e.g. thymus). In this small series, voxel organ doses tended to exceed MIRD values, on average, and a 10% difference was significant when all 238 organ doses

  10. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Contents: Exposure to Sampling; Concepts of Population, Sample, and Sampling; Initial Ramifications; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem; More Intricacies: Unequal Probability Sampling Strategies, PPS Sampling; Exploring Improved Ways: Stratified Sampling, Cluster Sampling, Multi-Stage Sampling, Multi-Phase Sampling (Ratio and Regression Estimation), Controlled Sampling; Modeling: Super-Population Modeling, Prediction Approach, Model-Assisted Approach, Bayesian Methods, Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  11. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Classical Respondent-Driven Sampling (RDS) estimators are based on a Markov process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for the bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
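
    The with-replacement random-walk model underpins the standard RDS point estimators, such as the Volz–Heckathorn (RDS-II) estimator sketched below, which weights each respondent by the inverse of their reported network degree; the data are toy values.

      import numpy as np

      def rds_ii(trait, degree):
          # RDS-II proportion estimate: inverse-degree weighting derived
          # from the with-replacement random-walk sampling model.
          trait = np.asarray(trait, bool)
          w = 1.0 / np.asarray(degree, float)
          return w[trait].sum() / w.sum()

      # trait indicator and reported degrees for eight respondents
      print(rds_ii([1, 0, 1, 1, 0, 0, 1, 0], [10, 2, 8, 12, 3, 4, 9, 2]))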

  12. MULTI-LEVEL SAMPLING APPROACH FOR CONTINOUS LOSS DETECTION USING ITERATIVE WINDOW AND STATISTICAL MODEL

    OpenAIRE

    Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani

    2010-01-01

    This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on a Second-Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate in comparison with other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...

  13. Better Fire Emissions Estimates for Tricky Species Illustrated with a Simple Empirical Burn-to-Sample Plume Model

    Science.gov (United States)

    Chatfield, R. B.; Andreae, M. O.; Lareau, N.

    2017-12-01

    Methodologies for estimating emission factors (EFs) and broader emission relationships (ERs) (e.g., for O3 production or aerosol absorption) have been difficult to make accurate and convincing; this is largely due to non-fire effects on both CO2 and also fire-emitted trace species. We present a new view of these multiple effects as they affect downwind tracer samples observed by aircraft in NASA's ARCTAS and SEAC4RS airborne missions. This view leads to our method for estimates of ERs and EFs that allows spatially detailed views focusing on individual samples, a Mixed Effects Emission Ratio Technique (MERET). We concentrate on presenting a generalized viewpoint: a simple idealized model of a fire plume entraining air from near-flames upward and then outward to a sampling point, a view based on observations of typical situations. Actual evolution of a plume can depend intricately on the full history of entrainment, entraining concentration levels of CO2 and tracer species, and mixing. Observations suggest that our simple plume model with just two (analyzed) values for entrained CO2 and one or potentially two values for environmental concentrations for each tracer can serve surprisingly well for mixed-effects regression estimates. Such detail appears imperative for long-lived gases like CH4, CO, and N2O. In particular, it is difficult to distinguish fire-sourced emissions from air entrained near the flames, entrained in a way proportional to fire intensity. These entraining concentrations may vary significantly from those later in plume evolution. In addition, such detail also highlights behavior of emissions that react on the path to sampling, e.g. fire-sourced or entrained urban NOx. Some caveats regarding poor sampling situations, and some warning signs, based on this empirical plume description and on MERET analyses, are demonstrated. Some information is available when multiple tracers are analyzed. MERET estimates for ERs of short- and long-lived species are

  14. Determination of Aldehyde Dehydrogenase (ALDH) Isozymes in Human Cancer Samples - Comparison of Kinetic and Immunochemical Assays

    Directory of Open Access Journals (Sweden)

    Dorota Borecka

    2002-12-01

    A fluorimetric assay of aldehyde dehydrogenase isozymes, based on naphthaldehyde oxidation, is compared with Western blotting analysis on several clinical samples obtained from surgery. The comparison reveals qualitatively good agreement between the two methods for ALDH1A1 isozyme detection and somewhat worse agreement for the ALDH3A1 assay.

  15. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available Respondent-driven sampling (RDS is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview.Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods, and also of presentation for interview if offered a coupon by age and socioeconomic status group.Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared-errors by 19-29%, but had little effect for sexual activity or HIV status.Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.

  16. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  17. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    Science.gov (United States)

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
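
    A majority-rule consensus can be computed directly from clade frequencies in the posterior sample. The toy sketch below assumes each sampled tree has already been decomposed into a set of clades (frozensets of taxon labels); real analyses would use a phylogenetics library rather than this hand-rolled version.

        from collections import Counter

        def majority_rule_clades(trees, threshold=0.5):
            """Return clades appearing in more than `threshold` of the tree sample."""
            counts = Counter()
            for clades in trees:
                counts.update(clades)
            n = len(trees)
            return {c: counts[c] / n for c in counts if counts[c] / n > threshold}

        # Toy posterior sample of three 4-taxon trees (one conflicting sample).
        t1 = {frozenset("AB"), frozenset("CD")}
        t2 = {frozenset("AB"), frozenset("BC")}
        t3 = {frozenset("AB"), frozenset("CD")}
        print(majority_rule_clades([t1, t2, t3]))   # AB at 1.00, CD at 0.67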

  18. Can a sample of Landsat sensor scenes reliably estimate the global extent of tropical deforestation?

    Science.gov (United States)

    R. L. Czaplewski

    2003-01-01

    Tucker and Townshend (2000) conclude that wall-to-wall coverage is needed 'to avoid gross errors in estimations of deforestation rates' because tropical deforestation is concentrated along roads and rivers. They specifically question the reliability of the 10% sample of Landsat sensor scenes used in the global remote sensing survey conducted by the Food and...

  19. A Comparison of Software Schedule Estimators

    Science.gov (United States)

    1990-09-01

    Front-matter and table-of-contents fragments from the original report are omitted here. The recoverable content: the study compared software schedule estimates from the REVIC, PRICE-S, System-4, SPQR/20, and SEER models against the actual schedules experienced on the projects.

  20. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. For that matter, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by observers at sea from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust length and juvenile-proportion estimates. Though the application of this work corresponds to the anchovy fishery, the procedure can be applied to any fishery, for either on-board or inland biometric measurements.
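
    In the spirit of the study, one way to locate an optimum sample size is to resample observed lengths at increasing sample sizes and watch the precision of the juvenile-proportion estimate stabilize. The sketch below uses simulated lengths and a 12 cm juvenile cutoff purely for illustration; neither the data nor the procedure reproduces the authors' analysis.

        import numpy as np

        rng = np.random.default_rng(1)
        lengths = rng.normal(13.0, 1.5, size=2000)   # hypothetical lengths (cm) in one set
        cutoff = 12.0                                # assumed juvenile length threshold

        for n in (10, 30, 60, 120, 240):
            boot = [np.mean(rng.choice(lengths, n, replace=True) < cutoff)
                    for _ in range(1000)]
            print(f"n={n:4d}  sd of juvenile-proportion estimate = {np.std(boot):.3f}")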

  1. Comprehensive analysis and evaluation of the results of a wide-scope comparison of environmental-level radioactive samples with γ spectrometers

    International Nuclear Information System (INIS)

    Su Qiong; Cheng Jianping; Wang Xuewu; Fan Jiajin; Chen Boxian

    2001-01-01

    A wide-scope comparison of environmental-level radioactive samples by γ spectrometry, carried out in 1998-1999, is introduced, and some original data from the comparison are presented. The comparison results are comprehensively analyzed and evaluated. A new method for determining comparison reference values, the Real Time Weight Average model, is adopted; the method is detailed and compared with other models. Practice shows that the adopted Real Time Weight Average model is feasible and successful.
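
    The abstract does not spell out the Real Time Weight Average model, so the sketch below shows only the generic building block that reference-value methods of this kind share: an uncertainty-weighted mean of the participating laboratories' results. All values are hypothetical.

        import numpy as np

        # Hypothetical lab results (Bq/kg) and reported uncertainties for one nuclide.
        results = np.array([102.0, 98.5, 105.3, 99.8])
        sigmas  = np.array([3.0, 2.5, 4.0, 2.0])

        w = 1.0 / sigmas**2                       # inverse-variance weights
        ref = np.sum(w * results) / np.sum(w)     # weighted-mean reference value
        ref_sigma = np.sqrt(1.0 / np.sum(w))
        print(f"reference value = {ref:.1f} +/- {ref_sigma:.1f} Bq/kg")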

  2. Comparison of methods for evaluation of wood smoke and estimation of UK ambient concentrations

    Directory of Open Access Journals (Sweden)

    R. M. Harrison

    2012-09-01

    Full Text Available Airborne concentrations of the wood smoke tracers levoglucosan and fine potassium have been measured at urban and rural sites in the United Kingdom, alongside measurements with a multi-wavelength aethalometer. The UK sites, and especially those in cities, show low ratios of levoglucosan to potassium in comparison to the majority of published data. It is concluded that there may be two distinct source types: one from wood stoves and fireplaces with a high organic carbon content, best represented by levoglucosan; the other from larger, modern appliances with a very high burn-out efficiency, best represented by potassium. Based upon levoglucosan concentrations and a conversion factor of 11.2 from levoglucosan to wood smoke mass, average concentrations of wood smoke across winter and summer sampling periods are 0.23 μg m−3 in Birmingham and 0.33 μg m−3 in London, well below concentrations typical of other northern European urban areas. There may be a further contribution from sources of potassium-rich emissions amounting to an estimated 0.08 μg m−3 in Birmingham and 0.30 μg m−3 in London. Concentrations were highly correlated between two London sites separated by 4 km, suggesting that a regional source is responsible. Data from the aethalometer are either supportive of these conclusions or suggest higher concentrations, depending upon the way in which the data are analysed.

  3. Perceived Speech Quality Estimation Using DTW Algorithm

    Directory of Open Access Journals (Sweden)

    S. Arsenovski

    2009-06-01

    Full Text Available In this paper, a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm to compare the test and received speech. Several tests were made on a speech sample from a single speaker with simulated packet (frame) loss effects on the perceived speech. The achieved results were compared with measured PESQ values on the transmission channel used, and their correlation was observed.
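
    For readers unfamiliar with the comparison step, a textbook DTW distance between two 1-D feature sequences looks like the sketch below. This is a generic implementation with absolute difference as the local cost, not the paper's exact configuration; real speech comparison would operate on frame-level features such as MFCCs.

        import numpy as np

        def dtw_distance(a, b):
            """O(len(a)*len(b)) dynamic time warping distance between 1-D sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        ref = np.sin(np.linspace(0, 3, 50))         # "test" speech feature track
        deg = np.sin(np.linspace(0, 3, 50) + 0.2)   # received track with distortion
        deg[20:24] = 0.0                            # crude stand-in for a lost frame
        print(f"DTW distance: {dtw_distance(ref, deg):.3f}")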

  4. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
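
    The variance inflation is easy to demonstrate by simulation. The sketch below estimates the mean of a fixed function on [0, 1) by one-dimensional systematic sampling with Gaussian jitter added to the sample points; the integrand and jitter levels are arbitrary illustrative choices, not the error models analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def f(x):
            return np.sin(2 * np.pi * 3 * x) + x    # arbitrary integrand, true mean 0.5

        n, reps = 20, 5000

        def systematic_estimate(jitter_sd):
            u = rng.uniform(0, 1 / n)               # random start of the periodic grid
            pts = (u + np.arange(n) / n + rng.normal(0, jitter_sd, n)) % 1.0
            return f(pts).mean()

        for sd in (0.0, 0.005, 0.02):
            est = np.array([systematic_estimate(sd) for _ in range(reps)])
            print(f"jitter sd = {sd:.3f}   empirical estimator variance = {est.var():.2e}")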

  5. Frequency offset estimation in OFDM systems using Bayesian filtering

    Science.gov (United States)

    Yu, Yihua

    2011-10-01

    Orthogonal frequency division multiplexing (OFDM) is sensitive to carrier frequency offset (CFO), which causes inter-carrier interference (ICI). In this paper, we present two schemes for CFO estimation, based on rejection sampling (RS) and on a form of particle filtering (PF) called the kernel smoothing technique, respectively. The first scheme is offline estimation, where the observations contained in the OFDM training symbol are treated in batch mode. The second scheme is online estimation, where the observations in the OFDM training symbol are treated sequentially. Simulations are provided to illustrate the performance of the schemes, and performance comparisons between the two schemes and with other Bayesian methods are provided. Simulation results show that the two schemes are effective when estimating the CFO and can effectively combat the effect of ICI in OFDM systems.
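
    As a reminder of the first scheme's building block, the sketch below draws from a univariate posterior by rejection sampling. The Gaussian-shaped stand-in posterior and the uniform prior over normalized CFO values are assumptions for illustration; the paper's actual likelihood comes from the OFDM training-symbol observations.

        import numpy as np

        rng = np.random.default_rng(42)

        def post(eps):
            # Stand-in unnormalized posterior over the normalized CFO in [-0.5, 0.5].
            return np.exp(-0.5 * ((eps - 0.12) / 0.03) ** 2)

        M = 1.0                                   # upper bound on post(eps)
        samples = []
        while len(samples) < 5000:
            eps = rng.uniform(-0.5, 0.5)          # proposal from the uniform prior
            if rng.uniform(0.0, M) < post(eps):   # accept with probability post(eps)/M
                samples.append(eps)
        print(f"posterior-mean CFO estimate: {np.mean(samples):.4f}")   # near 0.12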

  6. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  7. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  8. A guide to the use of distance sampling to estimate abundance of Karner blue butterflies

    Science.gov (United States)

    Grundel, Ralph

    2015-01-01

    This guide is intended to describe the use of distance sampling as a method for evaluating the abundance of Karner blue butterflies at a location. Other methods for evaluating abundance exist, including mark-release-recapture and index counts derived from Pollard-Yates surveys, for example. Although this guide is not intended to be a detailed comparison of the pros and cons of each type of method, there are important preliminary considerations to think about before selecting any method for evaluating the abundance of Karner blue butterflies.
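
    To make the method concrete: with a half-normal detection function, line-transect distance sampling reduces to estimating an effective strip width from perpendicular detection distances and dividing the count by the area effectively surveyed. The distances and transect length below are hypothetical, and dedicated packages implement the full machinery with covariates and model selection.

        import numpy as np

        # Hypothetical perpendicular detection distances (m) along line transects.
        x = np.array([1.2, 0.4, 3.1, 2.2, 0.9, 1.7, 4.0, 0.3, 2.8, 1.1])
        L = 500.0                                   # total transect length (m)

        sigma2 = np.mean(x ** 2)                    # half-normal MLE of sigma^2
        esw = np.sqrt(np.pi * sigma2 / 2)           # effective strip (half-)width, m
        D = len(x) / (2 * L * esw)                  # density per square metre
        print(f"ESW = {esw:.2f} m, density = {D * 1e4:.2f} per hectare")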

  9. Does equilibrium passive sampling reflect actual in situ bioaccumulation of PAHs and petroleum hydrocarbon mixtures in aquatic worms?

    NARCIS (Netherlands)

    Muijs, B.|info:eu-repo/dai/nl/194995526; Jonker, M.T.O.|info:eu-repo/dai/nl/175518793

    2012-01-01

    Over the past couple of years, several analytical methods have been developed for assessing the bioavailability of environmental contaminants in sediments and soils. Comparison studies suggest that equilibrium passive sampling methods generally provide the better estimates of internal concentrations

  10. Conditional estimation of local pooled dispersion parameter in small-sample RNA-Seq data improves differential expression test.

    Science.gov (United States)

    Gim, Jungsoo; Won, Sungho; Park, Taesung

    2016-10-01

    High-throughput sequencing technology in transcriptomics studies contributes to the understanding of gene regulation mechanisms and their cellular functions, but it also increases the need for accurate statistical methods to assess quantitative differences between experiments. Many methods have been developed to account for the specifics of count data: non-normality, a dependence of the variance on the mean, and small sample sizes. Among these issues, the small number of samples in typical experiments remains a challenge. Here we present a method for differential analysis of count data using conditional estimation of local pooled dispersion parameters. A comprehensive evaluation of the proposed method on differential gene expression analysis, using both simulated and real data sets, shows that it is more powerful than existing methods while controlling the false discovery rate. By introducing conditional estimation of local pooled dispersion parameters, we overcome the limitation of low power and enable powerful differential expression testing with small numbers of samples.
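
    A rough sketch of the local-pooling idea, under stated assumptions: compute a method-of-moments negative binomial dispersion per gene, then replace it with a summary over the genes closest in mean expression. This is an illustrative stand-in for the paper's conditional estimator, not its implementation.

        import numpy as np

        def local_pooled_dispersion(counts, window=50):
            """counts: genes x samples. Pool MoM NB dispersions over genes with
            similar mean expression (NB variance model: v = m + phi * m**2)."""
            m = counts.mean(axis=1)
            v = counts.var(axis=1, ddof=1)
            phi = np.maximum((v - m) / np.maximum(m, 1e-8) ** 2, 0.0)
            order = np.argsort(m)
            pooled = np.empty_like(phi)
            for rank, g in enumerate(order):
                lo = max(0, rank - window // 2)
                neighbours = order[lo:lo + window]   # nearby genes in mean-expression rank
                pooled[g] = np.median(phi[neighbours])
            return pooled

        rng = np.random.default_rng(3)
        toy = rng.poisson(20, size=(200, 4))         # 200 genes, 4 samples (hypothetical)
        print(local_pooled_dispersion(toy)[:5])      # near zero for Poisson-like data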

  11. Perfluoroalkyl substances in aquatic environment-comparison of fish and passive sampling approaches.

    Science.gov (United States)

    Cerveny, Daniel; Grabic, Roman; Fedorova, Ganna; Grabicova, Katerina; Turek, Jan; Kodes, Vit; Golovko, Oksana; Zlabek, Vladimir; Randak, Tomas

    2016-01-01

    The concentrations of seven perfluoroalkyl substances (PFASs) were investigated in 36 European chub (Squalius cephalus) individuals from six localities in the Czech Republic. Chub muscle and liver tissue were analysed at all sampling sites. In addition, analyses of 16 target PFASs were performed in Polar Organic Chemical Integrative Samplers (POCISs) deployed in the water at the same sampling sites. We evaluated the possibility of using passive samplers as a standardized method for monitoring PFAS contamination in aquatic environments and the mutual relationships between determined concentrations. Only perfluorooctane sulphonate was above the LOQ in fish muscle samples and 52% of the analysed fish individuals exceeded the Environmental Quality Standard for water biota. Fish muscle concentration is also particularly important for risk assessment of fish consumers. The comparison of fish tissue results with published data showed the similarity of the Czech results with those found in Germany and France. However, fish liver analysis and the passive sampling approach resulted in different fish exposure scenarios. The total concentration of PFASs in fish liver tissue was strongly correlated with POCIS data, but pollutant patterns differed between these two matrices. The differences could be attributed to the metabolic activity of the living organism. In addition to providing a different view regarding the real PFAS cocktail to which the fish are exposed, POCISs fulfil the Three Rs strategy (replacement, reduction, and refinement) in animal testing.

  12. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons.

    Directory of Open Access Journals (Sweden)

    Margaret E Hunter

    Full Text Available Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors.

  13. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons.

    Science.gov (United States)

    Hunter, Margaret E; Oyler-McCance, Sara J; Dorazio, Robert M; Fike, Jennifer A; Smith, Brian J; Hunter, Charles T; Reed, Robert N; Hart, Kristen M

    2015-01-01

    Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors.
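
    The detection estimates described here come from occupancy modeling; a standard single-season model can be fitted by maximum likelihood from a sites-by-replicates detection matrix, as in the sketch below. The detection history is hypothetical, and this implements the basic textbook model rather than the authors' exact hierarchical formulation.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical detection histories: rows = sites, cols = replicate samples.
        Y = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0], [0, 0, 0], [0, 0, 1]])

        def nll(theta):
            psi, p = 1 / (1 + np.exp(-theta))        # logit scale -> probabilities
            K = Y.shape[1]
            d = Y.sum(axis=1)
            det = psi * p**d * (1 - p)**(K - d)      # sites with >= 1 detection
            nodet = psi * (1 - p)**K + (1 - psi)     # all-zero: occupied-but-missed or empty
            return -np.sum(np.where(d > 0, np.log(det), np.log(nodet)))

        res = minimize(nll, x0=np.zeros(2))
        psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
        print(f"occupancy = {psi_hat:.2f}, per-sample detection = {p_hat:.2f}")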

  14. Precipitation estimates and comparison of satellite rainfall data to in situ rain gauge observations to further develop the watershed-modeling capabilities for the Lower Mekong River Basin

    Science.gov (United States)

    Dandridge, C.; Lakshmi, V.; Sutton, J. R. P.; Bolten, J. D.

    2017-12-01

    This study focuses on the lower region of the Mekong River Basin (MRB), an area including Burma, Cambodia, Vietnam, Laos, and Thailand. This region is home to expansive agriculture that relies heavily on annual precipitation over the basin for its prosperity. Annual precipitation amounts are regulated by the global monsoon system and therefore vary throughout the year. This research will lead to improved prediction of floods and management of floodwaters for the MRB. We compare different satellite estimates of precipitation to each other and to in-situ precipitation estimates for the Mekong River Basin. These comparisons will help us determine which satellite precipitation estimates are better at predicting precipitation in the MRB and will further our understanding of watershed-modeling capabilities for the basin. In this study we use: 1) NOAA's PERSIANN daily 0.25° precipitation estimate Climate Data Record (CDR); 2) NASA's Tropical Rainfall Measuring Mission (TRMM) daily 0.25° estimate; 3) NASA's Global Precipitation Measurement (GPM) daily 0.1° estimate; and 4) daily precipitation estimates from 488 in-situ stations located in the lower MRB. The PERSIANN CDR provides the longest data record, being available from 1983 to present. The TRMM precipitation estimate is available from 2000 to present, and the GPM precipitation estimate is available from 2015 to present. It is for this reason that we provide several comparisons between our precipitation estimates. Comparisons were made between each satellite product and the in-situ precipitation estimates by geographical location and date, using the entire available data record for each satellite product, for daily, monthly, and yearly precipitation estimates. We found that monthly PERSIANN precipitation estimates were able to explain up to 90% of the variability in station precipitation, depending on station location.
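
    Comparisons of this kind typically reduce to a few summary statistics on collocated series. A minimal sketch, with hypothetical values standing in for satellite pixels matched to one gauge:

        import numpy as np

        # Hypothetical collocated daily precipitation (mm/day) at one station.
        gauge = np.array([0.0, 12.3, 5.1, 0.0, 33.0, 8.2, 0.5, 19.9])
        sat   = np.array([0.2, 10.1, 6.0, 0.0, 28.4, 9.5, 0.0, 22.3])

        r = np.corrcoef(gauge, sat)[0, 1]
        bias = np.mean(sat - gauge)
        rmse = np.sqrt(np.mean((sat - gauge) ** 2))
        print(f"r^2 = {r**2:.2f}, bias = {bias:+.2f} mm/day, RMSE = {rmse:.2f} mm/day")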

  15. A-Train Aerosol Observations: Preliminary Comparisons with AeroCom Models and Pathways to Observationally Based All-Sky Estimates

    Science.gov (United States)

    Redemann, J.; Livingston, J.; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; LeBlanc, S.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.; et al.

    2014-01-01

    We have developed a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. We compare the spatio-temporal distribution of our multi-sensor aerosol retrievals and calculations of seasonal clear-sky aerosol radiative forcing based on the aerosol retrievals to values derived from four models that participated in the latest AeroCom model intercomparison initiative. We find significant inter-model differences, in particular for the aerosol single scattering albedo, which can be evaluated using the multi-sensor A-Train retrievals. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.

  16. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Peterson mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
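
    Both estimators under validation have simple closed forms, shown below with hypothetical catch numbers. The Chapman-corrected form of the Lincoln-Peterson estimator is used to reduce small-sample bias; the two-pass removal formula is the Seber-LeCren special case of the removal model.

        def lincoln_petersen(M, C, R):
            """Chapman-corrected mark-recapture estimate: M marked on pass 1,
            C caught on pass 2, R of those recaptures carrying marks."""
            return (M + 1) * (C + 1) / (R + 1) - 1

        def two_pass_removal(c1, c2):
            """Two-pass removal estimate; requires c1 > c2 (declining catches)."""
            return c1 ** 2 / (c1 - c2)

        print(lincoln_petersen(M=45, C=50, R=18))   # ~122 trout
        print(two_pass_removal(c1=60, c2=25))       # ~103 trout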

  17. Off-road sampling reveals a different grassland bird community than roadside sampling: implications for survey design and estimates to guide conservation

    Directory of Open Access Journals (Sweden)

    Troy I. Wellicome

    2014-06-01

    concern. Our results highlight the need to develop appropriate corrections for bias in estimates derived from roadside sampling, and the need to design surveys that sample bird communities across a more representative cross-section of the landscape, both near and far from roads.

  18. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    2011-03-01

    Full Text Available Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: 1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); 2) a survey-derived model to infer age-specific incidence between two sequential NPS; 3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; 4) community cohorts in Uganda; 5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI: 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI: 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI: 0, 1.3] for Kenya (2007). Incidence trends were similar for all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR to guide whether incidence assays can produce reliable estimates of national HIV incidence.

  19. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    Science.gov (United States)

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated; a common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- and square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.

  20. Comparison of pure and 'Latinized' centroidal Voronoi tessellation against various other statistical sampling methods

    International Nuclear Information System (INIS)

    Romero, Vicente J.; Burkardt, John V.; Gunzburger, Max D.; Peterson, Janet S.

    2006-01-01

    A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), CVT has been found in initial investigations to provide superior point sets when compared against Latin hypercube and simple random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) are further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities.
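
    CVT point sets can be approximated with Lloyd's algorithm, as sketched below: repeatedly assign a dense Monte Carlo cloud to its nearest generator and move each generator to the centroid of its cell. The parameters are illustrative, and production CVT codes use more careful convergence control than this fixed iteration count.

        import numpy as np

        def cvt(n_points, dim=2, n_mc=20000, iters=50, seed=0):
            """Approximate CVT generators in the unit hypercube via Lloyd iterations."""
            rng = np.random.default_rng(seed)
            gens = rng.uniform(size=(n_points, dim))
            for _ in range(iters):
                cloud = rng.uniform(size=(n_mc, dim))
                d2 = ((cloud[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2)
                nearest = d2.argmin(axis=1)
                for k in range(n_points):
                    cell = cloud[nearest == k]
                    if len(cell):
                        gens[k] = cell.mean(axis=0)   # move generator to cell centroid
            return gens

        print(cvt(16)[:4])   # first few of 16 near-uniform points in [0, 1]^2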