WorldWideScience

Sample records for sample size planning

  1. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to implement the methods with easy-to-use, freely available software. ©2011 The British Psychological Society.

  2. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary out...

  3. Special nuclear material inventory sampling plans

    International Nuclear Information System (INIS)

    Vaccaro, H.; Goldman, A.

    1987-01-01

Since their introduction in 1942, sampling inspection procedures have been common quality assurance practice. The U.S. Department of Energy (DOE) supports such sampling of special nuclear materials inventories. DOE Order 5630.7 states that "Operations Offices may develop and use statistically valid sampling plans appropriate for their site-specific needs." The benefits for nuclear facilities operations include reduced worker exposure and reduced work load. Improved procedures have been developed for obtaining statistically valid sampling plans that maximize these benefits. The double sampling concept is described and the resulting sample sizes for double sampling plans are compared with those of other plans. An algorithm is given for finding optimal double sampling plans; it assists in choosing the appropriate detection and false alarm probabilities for various sampling plans.

  4. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
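
    The per-group calculations the article describes for a two-arm randomized trial have standard closed forms; the sketch below uses the usual normal-approximation formulas (function names and defaults are illustrative, not taken from the article):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_means(sigma, delta, alpha=0.05, power=0.80):
    """Per-group n for a two-arm trial with a continuous primary outcome:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * (sigma * (za + zb) / delta) ** 2)

def n_per_group_props(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-arm trial with a binary primary outcome
    (unpooled-variance normal approximation)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)
```

    For example, detecting a 5-unit difference when the outcome SD is 10 (80% power, two-sided alpha 0.05) needs 63 subjects per arm.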

  5. A comparison of attribute sampling plans

    International Nuclear Information System (INIS)

    Lanning, B.M.

    1997-05-01

This report describes, compares, and provides sample size selection criteria for the most common sampling plans for attribute data (i.e., data that are qualitative in nature, such as Pass-Fail, Yes-No, or Defect-Nondefect data). This report is being issued as a guide for prudently choosing the correct sampling plan to meet statistical plan objectives. The report discusses three types of sampling plans: AQL (Acceptable Quality Level, expressed as a percent), RQL (Rejectable Quality Level, as a percent), and the AQL/RQL plan, which emphasizes both risks simultaneously. These plans are illustrated with six examples, one of which is an inventory of UF₆ cans whose weights must agree within 100 grams of their listed weights to be acceptable.
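
    An AQL/RQL single-sampling plan of the kind compared in the report can be found by a direct search over the binomial acceptance probability; this is a generic sketch, not the report's own selection criteria:

```python
from math import comb

def accept_prob(n, c, p):
    """P(accept) = probability of at most c defects in a sample of n,
    when the true defect fraction is p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def single_sampling_plan(aql, rql, alpha=0.05, beta=0.10, n_max=2000):
    """Smallest (n, c) giving producer's risk <= alpha at the AQL and
    consumer's risk <= beta at the RQL."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if accept_prob(n, c, aql) >= 1 - alpha:
                # smallest c protecting the producer; now check the consumer
                if accept_prob(n, c, rql) <= beta:
                    return n, c
                break  # larger c only raises the consumer's risk
    return None
```

    With AQL = 1% and RQL = 5%, the search returns a plan in the neighborhood of n ≈ 130, c = 3, which satisfies both risk conditions by construction.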

  6. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of the ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrap method have superior performance to Sobel's method; the distribution of the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
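
    Sobel's test, the simplest of the three mediation tests compared above, has a closed form: z = ab / sqrt(b²·se_a² + a²·se_b²). A minimal sketch (the coefficient names are generic, not the article's notation):

```python
from math import sqrt
from statistics import NormalDist

def sobel_test(a, se_a, b, se_b):
    """Sobel z statistic and two-sided p-value for the mediated effect a*b,
    where a and b are the two path coefficients with standard errors se_a, se_b."""
    se_ab = sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = (a * b) / se_ab
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p
```

    For instance, paths a = 0.5 (SE 0.1) and b = 0.4 (SE 0.1) give z ≈ 3.12, p ≈ 0.002.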

  7. Group Acceptance Sampling Plan for Lifetime Data Using Generalized Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2010-02-01

    Full Text Available In this paper, a group acceptance sampling plan (GASP is introduced for the situations when lifetime of the items follows the generalized Pareto distribution. The design parameters such as minimum group size and acceptance number are determined when the consumer’s risk and the test termination time are specified. The proposed sampling plan is compared with the existing sampling plan. It is concluded that the proposed sampling plan performs better than the existing plan in terms of minimum sample size required to reach the same decision.

  8. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary steps of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will waste time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.

  9. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
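
    A one-sample calculation of this kind can be approximated with normal quantiles: n ≈ ((z_{1-α/2} + z_power)·COV / E)², where E is the allowable relative error. This stdlib-only sketch slightly understates the authors' t-based answers (it yields n = 22 and 35 where they report 24 and 37, because the t quantile exceeds the normal quantile at these sample sizes):

```python
from math import ceil
from statistics import NormalDist

def n_for_relative_error(cov, rel_error, alpha=0.05, power=0.80):
    """Normal-approximation sample size so that the mean (e.g. the ED50)
    is estimated within +/- rel_error, both expressed relative to the mean."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) * cov / rel_error) ** 2
    return ceil(n)
```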

  10. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    Science.gov (United States)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
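
    The two nonconformity models the TRASP design switches between can be compared directly: the exact hypergeometric acceptance probability for small lots versus the Poisson approximation for large lots. A generic sketch, not the authors' optimization code:

```python
from math import comb, exp, factorial

def accept_hypergeom(N, D, n, c):
    """P(accept) = P(at most c nonconforming items in a sample of n drawn
    without replacement from a lot of N containing D nonconforming items)."""
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

def accept_poisson(n, p, c):
    """Poisson approximation for large lots; expected defect count is n*p."""
    lam = n * p
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(c + 1))
```

    For a lot of 1000 sheets with a 2% nonconformity rate, a plan of n = 50, c = 2, the two models agree to within a few percent, which is why the large-lot case can use the simpler Poisson form.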

  11. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  12. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  13. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
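
    The classical cost-optimal allocation rule underlying formulas of this kind is n₁/n₂ = (σ₁/σ₂)·√(c₂/c₁). The sketch below applies it to the plain normal-approximation two-sample test, not to Yuen's trimmed-mean test itself, so treat it as an illustration of the allocation idea only:

```python
from math import sqrt, ceil
from statistics import NormalDist

def optimal_allocation(sd1, sd2, cost1, cost2, delta, alpha=0.05, power=0.80):
    """Group sizes minimizing total cost for a two-sample z-test of means
    with unequal variances and unequal per-subject sampling costs."""
    r = (sd1 / sd2) * sqrt(cost2 / cost1)  # optimal ratio n1/n2
    z = NormalDist().inv_cdf
    zsum = z(1 - alpha / 2) + z(power)
    # solve sd1^2/n1 + sd2^2/n2 = (delta/zsum)^2 with n1 = r * n2
    n2 = (sd1**2 / r + sd2**2) * (zsum / delta) ** 2
    return ceil(r * n2), ceil(n2)
```

    With equal SDs and equal costs this reduces to the familiar balanced design; when group 1 is twice as variable and group 2 is four times as expensive, the rule allocates four subjects to group 1 for every one in group 2.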

  14. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning of and during data collection for a qualitative study is discussed. © The Author(s) 2015.

  15. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  16. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
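
    The "14 per cent more clusters" result can be illustrated with a published approximation for the relative efficiency of unequal versus equal cluster sizes, RE ≈ 1 − CV²·λ(1−λ) with λ = m̄ρ/(m̄ρ + 1 − ρ), where m̄ is the mean cluster size, CV the coefficient of variation of cluster sizes, and ρ the ICC. The exact form of this approximation is an assumption here (it is attributed to the authors' earlier asymptotic work, not quoted from this abstract):

```python
def efficiency_loss(mean_size, cv, icc):
    """Approximate relative efficiency (<= 1) of unequal vs equal cluster
    sizes, and the corresponding inflation factor for the number of clusters.
    Formula assumed: RE = 1 - cv^2 * lam * (1 - lam)."""
    lam = mean_size * icc / (mean_size * icc + 1 - icc)
    re = 1 - cv**2 * lam * (1 - lam)
    return re, 1 / re  # sample 1/re times as many clusters
```

    The worst case is λ = 0.5; even then, with a fairly large CV of 0.7 the inflation factor is about 1.14, matching the 14 per cent figure above.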

  17. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed a Microrarray PowerAtlas 1. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO. The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC. Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  18. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, with an initial sample size of 10 plants and subsequent sizes obtained by adding five plants at a time. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval width was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation; accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant are estimated with less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato with a 95% confidence interval width of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse and 200 plants in a 200 m² greenhouse.
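
    The bootstrap scheme described above is easy to reproduce for a pilot data set: resample pairs with replacement, compute r each time, and take the width of the 95% percentile interval. A minimal stdlib-only sketch (the study used 3000 re-samples; fewer are used in the example for speed):

```python
import random
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation using population moments."""
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

def bootstrap_ci_width(xs, ys, n_boot=3000, seed=1):
    """Width of the 95% percentile-bootstrap confidence interval for r,
    from n_boot re-samplings of (x, y) pairs with replacement."""
    rng = random.Random(seed)
    pairs = list(zip(xs, ys))
    rs = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        rs.append(pearson_r([p[0] for p in sample], [p[1] for p in sample]))
    rs.sort()
    return rs[int(0.975 * n_boot) - 1] - rs[int(0.025 * n_boot)]
```

    Planning then amounts to increasing the pilot n until the returned width drops to the target (0.4 in the study above).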

  19. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can then be generalized to the target population.
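
    For a categorical outcome, the estimation sample size described above has the familiar closed form n = z²·p(1−p)/d², where d is the margin of error; a sketch with an optional finite population correction for small study populations:

```python
from math import ceil
from statistics import NormalDist

def sample_size_proportion(p, margin, confidence=0.95, population=None):
    """n to estimate a proportion p within +/- margin at the given
    confidence level; applies a finite population correction if the
    study population size is supplied."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    n = z**2 * p * (1 - p) / margin**2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite population correction
    return ceil(n)
```

    The conservative choice p = 0.5 with a 5% margin at 95% confidence gives the familiar n = 385; a known study population of 1000 reduces this to 278.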

  20. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first stage sample sizes. The false discovery rate (FDR) is controlled despite of the data dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  1. Sampling plans in attribute mode with multiple levels of precision

    International Nuclear Information System (INIS)

    Franklin, M.

    1986-01-01

    This paper describes a method for deriving sampling plans for nuclear material inventory verification. The method presented is different from the classical approach which envisages two levels of measurement precision corresponding to NDA and DA. In the classical approach the precisions of the two measurement methods are taken as fixed parameters. The new approach is based on multiple levels of measurement precision. The design of the sampling plan consists of choosing the number of measurement levels, the measurement precision to be used at each level and the sample size to be used at each level

  2. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method for approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
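
    The contrast between the two estimators can be illustrated on a toy limit state g = R − S with normal resistance R and load S: CMCS draws both at random, while DS takes deterministic mid-point quantiles for each variable and pairs them after independent random permutations. A sketch, not the paper's procedure:

```python
import random
from statistics import NormalDist

def crude_mc(n, seed=0):
    """Crude Monte Carlo estimate of P(R - S < 0), R ~ N(6,1), S ~ N(4,1)."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(6, 1) - rng.gauss(4, 1) < 0 for _ in range(n))
    return fails / n

def descriptive_sampling(n, seed=0):
    """Descriptive sampling: deterministic mid-point quantiles for each
    input variable, independently shuffled, then paired."""
    rng = random.Random(seed)
    q = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    r = [6 + x for x in q]  # R values on its own quantile grid
    s = [4 + x for x in q]  # S values on its own quantile grid
    rng.shuffle(r)
    rng.shuffle(s)
    return sum(ri - si < 0 for ri, si in zip(r, s)) / n
```

    Here the true failure probability is Φ(−2/√2) ≈ 0.0786; both estimators approach it, with DS typically showing less run-to-run variability because the marginals are reproduced exactly.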

  3. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  4. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, because of the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence be considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing this against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
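
    The point estimate N̂ = M/P̂ and a delta-method interval on the log scale (treating M as fixed and inflating the binomial variance of P̂ by a design effect) can be sketched as follows; the default design effect is purely illustrative, not a value from the paper:

```python
from math import sqrt, exp
from statistics import NormalDist

def multiplier_estimate(M, p_hat, n, deff=2.0, confidence=0.95):
    """Population size N = M / P with a delta-method CI on the log scale.
    deff inflates the binomial variance of p_hat for the RDS design."""
    var_p = deff * p_hat * (1 - p_hat) / n
    se_log_n = sqrt(var_p) / p_hat  # delta method: Var(log N) = Var(P) / P^2
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    n_hat = M / p_hat
    return n_hat, n_hat * exp(-z * se_log_n), n_hat * exp(z * se_log_n)
```

    For example, 500 unique objects distributed and 25% reported receipt among 400 respondents gives N̂ = 2000 with a CI of roughly 1570 to 2540; the interval widens sharply as p_hat shrinks, which is the precision problem the abstract describes.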

  6. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

Full Text Available An acceptance sampling plan problem based on truncated life tests is considered in this paper for the case where the lifetime follows a Sushila distribution. For various acceptance numbers, confidence levels and values of the ratio between the fixed experiment time and the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer’s risk are presented. Some tables are provided and the results are illustrated with a real data set.
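The minimum-sample-size search behind such plans can be illustrated generically. The sketch below uses the standard binomial acceptance-probability formulation for truncated life tests; in the paper's setting the failure probability p would come from the Sushila CDF evaluated at the test termination time, but here it is passed in directly as an assumption:

```python
from math import comb

def min_sample_size(p, c, confidence=0.95):
    """Smallest n such that P(at most c failures among n) <= 1 - confidence.

    p : probability an item fails before the truncated test ends when the
        true mean life equals the minimum acceptable mean (assumed given;
        in the paper it would come from the Sushila distribution's CDF).
    c : acceptance number (max failures tolerated while still accepting).
    """
    def accept_prob(n):
        return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
                   for i in range(c + 1))
    n = c + 1
    while accept_prob(n) > 1 - confidence:
        n += 1
    return n
```

For example, with c = 0 the search reduces to the smallest n with (1 - p)**n below the risk level, and larger acceptance numbers require larger samples.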

  7. Spatial distribution and sequential sampling plans for Tuta absoluta (Lepidoptera: Gelechiidae) in greenhouse tomato crops.

    Science.gov (United States)

    Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino

    2015-09-01

    The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.

  8. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
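The fallacy is easy to demonstrate numerically: the same trivial standardized effect is nonsignificant at a modest sample size and extremely significant at a large one. A minimal one-sample z-test sketch (unit variance assumed; the numbers are illustrative, not from the article):

```python
import math

def one_sample_z_p(d, n):
    """Two-sided p-value of a one-sample z-test for standardized effect
    size d (Cohen's d) with n observations and known unit variance."""
    z = d * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

p_small_n = one_sample_z_p(0.05, 100)     # trivial effect, modest n
p_large_n = one_sample_z_p(0.05, 10_000)  # same trivial effect, huge n
```

The effect size (d = 0.05) is identical in both calls; only the p-value changes, which is why the abstract argues effect sizes must be reported alongside p-values.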

  9. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....

  10. Special nuclear material inventory sampling plans

    International Nuclear Information System (INIS)

    Vaccaro, H.S.; Goldman, A.S.

    1987-01-01

This paper presents improved procedures for obtaining statistically valid sampling plans for nuclear facilities. The double sampling concept and methods for developing optimal double sampling plans are described. An algorithm is described that is satisfactory for finding optimal double sampling plans and choosing appropriate detection and false alarm probabilities.

  11. IAEA Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    Geist, William H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-15

    The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.

  12. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  13. Evaluation of sampling plans for in-service inspection of steam generator tubes

    International Nuclear Information System (INIS)

    Kurtz, R.J.; Heasler, P.G.; Baird, D.B.

    1994-02-01

    This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions

  14. Food safety assurance systems: Microbiological testing, sampling plans, and microbiological criteria

    NARCIS (Netherlands)

    Zwietering, M.H.; Ross, T.; Gorris, L.G.M.

    2014-01-01

    Microbiological criteria give information about the quality or safety of foods. A key component of a microbiological criterion is the sampling plan. Considering: (1) the generally low level of pathogens that are deemed tolerable in foods, (2) large batch sizes, and (3) potentially substantial

  15. Effect of beamlet step-size on IMRT plan quality

    International Nuclear Information System (INIS)

    Zhang Guowei; Jiang Ziping; Shepard, David; Earl, Matt; Yu, Cedric

    2005-01-01

    We have studied the degree to which beamlet step-size impacts the quality of intensity modulated radiation therapy (IMRT) treatment plans. Treatment planning for IMRT begins with the application of a grid that divides each beam's-eye-view of the target into a number of smaller beamlets (pencil beams) of radiation. The total dose is computed as a weighted sum of the dose delivered by the individual beamlets. The width of each beamlet is set to match the width of the corresponding leaf of the multileaf collimator (MLC). The length of each beamlet (beamlet step-size) is parallel to the direction of leaf travel. The beamlet step-size represents the minimum stepping distance of the leaves of the MLC and is typically predetermined by the treatment planning system. This selection imposes an artificial constraint because the leaves of the MLC and the jaws can both move continuously. Removing the constraint can potentially improve the IMRT plan quality. In this study, the optimized results were achieved using an aperture-based inverse planning technique called direct aperture optimization (DAO). We have tested the relationship between pencil beam step-size and plan quality using the American College of Radiology's IMRT test case. For this case, a series of IMRT treatment plans were produced using beamlet step-sizes of 1, 2, 5, and 10 mm. Continuous improvements were seen with each reduction in beamlet step size. The maximum dose to the planning target volume (PTV) was reduced from 134.7% to 121.5% and the mean dose to the organ at risk (OAR) was reduced from 38.5% to 28.2% as the beamlet step-size was reduced from 10 to 1 mm. The smaller pencil beam sizes also led to steeper dose gradients at the junction between the target and the critical structure with gradients of 6.0, 7.6, 8.7, and 9.1 dose%/mm achieved for beamlet step sizes of 10, 5, 2, and 1 mm, respectively

  16. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)
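As a generic illustration of the inferential-statistics idea (not the paper's specific tool), the classic rule n = (z·s/E)² gives the number of samples needed to estimate a mean within a margin of error E at a given confidence level:

```python
import math

def samples_needed(sd, margin, confidence_z=1.96):
    """Classic n = (z * sd / E)**2 rule for estimating a mean to within
    +/- margin at the confidence level implied by z (1.96 -> ~95%).
    A textbook sketch under a normality assumption."""
    return math.ceil((confidence_z * sd / margin) ** 2)
```

Halving the tolerated margin quadruples the required number of samples, which is why statistically rigorous sampling requirements are worth computing in advance rather than guessed.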

  17. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  18. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

The objective of this research is the development of a sample size allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in allocating samples to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and the correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) approximate sample allocation with the correctly applied standard binomial approximation, (2) approximate sample allocation with the improved binomial approximation, and (3) sample allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
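The difference between the two distributions is easy to see in a minimal detection-probability calculation. The sketch below finds the smallest sample size giving at least 95% probability of catching at least one of D defective items in a population of N, first without replacement (hypergeometric) and then with the binomial approximation; parameter values are illustrative:

```python
from math import comb

def n_hypergeom(N, D, beta=0.05):
    """Smallest n with P(no defect in the sample) <= beta when sampling
    without replacement: P(miss) = C(N-D, n) / C(N, n)."""
    n = 1
    while comb(N - D, n) / comb(N, n) > beta:
        n += 1
    return n

def n_binom(N, D, beta=0.05):
    """Binomial (with-replacement) approximation: P(miss) = (1 - D/N)**n."""
    n = 1
    while (1 - D / N) ** n > beta:
        n += 1
    return n
```

Sampling without replacement always detects at least as well, so the hypergeometric sample size is never larger than the binomial one; the binomial shortcut is therefore conservative but can be needlessly expensive.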

  19. 40 CFR 141.802 - Coliform sampling plan.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Coliform sampling plan. 141.802... sampling plan. (a) Each air carrier under this subpart must develop a coliform sampling plan covering each... required actions, including repeat and follow-up sampling, corrective action, and notification of...

  20. Sample size calculations for case-control studies

    Science.gov (United States)

This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  1. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as

  2. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
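The article defines its relative efficiency via noncentrality parameters. For orientation only, a commonly used approximation from the cluster-trial literature (not the paper's own measure) inflates the individual-level sample size by a design effect that depends on the mean cluster size, its coefficient of variation, and the intracluster correlation:

```python
import math

def design_effect(mean_m, cv, icc):
    """Approximate design effect for unequal cluster sizes:
    DEFF = 1 + ((cv**2 + 1) * mean_m - 1) * icc.
    A standard approximation from the cluster-trial literature; the
    article itself uses a noncentrality-parameter-based definition."""
    return 1 + ((cv ** 2 + 1) * mean_m - 1) * icc

def adjusted_n(n_srs, mean_m, cv, icc):
    """Simple-random-sampling sample size inflated by the design effect."""
    return math.ceil(n_srs * design_effect(mean_m, cv, icc))
```

With equal cluster sizes (cv = 0) the formula reduces to the familiar 1 + (m - 1)·ICC; any variation in cluster size (cv > 0) increases the design effect, and hence the required sample size.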

  3. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

Full Text Available One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
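The "3 to 5 users" claim rests on the cumulative discovery model P(n) = 1 − (1 − p)ⁿ, where p is the per-user probability of finding a given problem. The sketch below shows why the rule holds only for large p (the often-cited p ≈ 0.31) and fails for harder-to-find problems; the values are illustrative:

```python
import math

def discovery_rate(p, n):
    """Expected share of problems found by n users when each user
    independently finds a given problem with probability p."""
    return 1 - (1 - p) ** n

def users_needed(p, target=0.80):
    """Smallest n whose discovery rate reaches the target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

five_user_ok = users_needed(0.31)   # the classic assumption: 5 users suffice
five_user_bad = users_needed(0.10)  # rarer problems need far more users
```

With p = 0.31, five users do exceed 80% discovery; with p = 0.10 the same target needs 16 users, consistent with the study's conclusion that the 5-user rule underestimates the required sample size.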

  5. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  6. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
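The cutpoint method mentioned above replaces a full sequential scan of the cumulative distribution with a table lookup followed by a short local scan, which is where the order-of-magnitude speedup comes from. A minimal sketch (an illustration of the general technique, not the authors' code):

```python
import random

def make_cutpoints(probs, m=None):
    """Precompute the CDF and a cutpoint table for a discrete distribution:
    cut[k] = first index i with cdf[i] > k/m, for m equal subintervals."""
    m = m or len(probs)
    cdf, total = [], 0.0
    for p in probs:
        total += p
        cdf.append(total)
    cdf[-1] = 1.0  # guard against floating-point undershoot
    cut, i = [], 0
    for k in range(m):
        while cdf[i] <= k / m:
            i += 1
        cut.append(i)
    return cdf, cut

def draw(cdf, cut, rng):
    """One sample: jump via the cutpoint table, finish with a short scan."""
    u = rng.random()
    i = cut[int(u * len(cut))]
    while cdf[i] < u:
        i += 1
    return i
```

Because the table jump lands at (or just before) the correct CDF segment, the trailing scan touches only a few entries on average, versus a scan proportional to the number of outcomes for naive sequential sampling.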

  7. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods and are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign the measurement methods to be used to the appropriate statistical testers (viz. testers for gross, partial and small defects), and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations of the above-mentioned procedures.

  8. Short-Run Contexts and Imperfect Testing for Continuous Sampling Plans

    Directory of Open Access Journals (Sweden)

    Mirella Rodriguez

    2018-04-01

    Full Text Available Continuous sampling plans are used to ensure a high level of quality for items produced in long-run contexts. The basic idea of these plans is to alternate between 100% inspection and a reduced rate of inspection frequency. Any inspected item that is found to be defective is replaced with a non-defective item. Because not all items are inspected, some defective items will escape to the customer. Analytical formulas have been developed that measure both the customer perceived quality and also the level of inspection effort. The analysis of continuous sampling plans does not apply to short-run contexts, where only a finite-size batch of items is to be produced. In this paper, a simulation algorithm is designed and implemented to analyze the customer perceived quality and the level of inspection effort for short-run contexts. A parameter representing the effectiveness of the test used during inspection is introduced to the analysis, and an analytical approximation is discussed. An application of the simulation algorithm that helped answer questions for the U.S. Navy is discussed.
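A stripped-down version of such a simulation, for Dodge's CSP-1 plan with perfect testing on a finite batch, might look like the following; the clearance number i, sampling fraction f, and defect rate p are illustrative assumptions (the paper additionally models imperfect tests):

```python
import random

def csp1_simulate(batch, p, i=10, f=0.1, rng=None):
    """Simulate a CSP-1 continuous sampling plan on a finite batch.
    Returns (fraction of items inspected, defect rate reaching customer).
    Inspected defectives are replaced with good items; the test itself
    is assumed perfect in this sketch."""
    rng = rng or random.Random(0)
    inspecting_all, run = True, 0
    inspected = escaped = 0
    for _ in range(batch):
        defective = rng.random() < p
        if inspecting_all or rng.random() < f:
            inspected += 1
            if defective:
                inspecting_all, run = True, 0   # defect found: back to 100%
            else:
                run += 1
                if inspecting_all and run >= i:
                    inspecting_all = False      # clearance met: sample at rate f
        elif defective:
            escaped += 1                        # uninspected defect escapes
    return inspected / batch, escaped / batch
```

Running the simulation over many finite batches lets one estimate both customer perceived quality and inspection effort for short-run contexts, where the classical long-run analytical formulas do not apply.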

  9. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
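The tendency of small-sample SDs to fall below the population SD is easy to reproduce. The sketch below echoes the article's simulation idea with illustrative numbers (population SD 44, samples of size 5); it is a sanity check, not the authors' simulation code:

```python
import random
import statistics

def underestimate_rate(pop_sd=44.0, n=5, reps=2000, seed=1):
    """Share of simulated samples whose sample SD falls below the true
    population SD, for samples of size n from a normal population."""
    rng = random.Random(seed)
    below = 0
    for _ in range(reps):
        sample = [rng.gauss(0.0, pop_sd) for _ in range(n)]
        if statistics.stdev(sample) < pop_sd:
            below += 1
    return below / reps
```

Well over half of small samples understate the population SD (for n = 5 the theoretical rate is about 59%), so a trial sized from a single pilot sample's SD has a better-than-even chance of being underpowered, which motivates the upper-confidence-limit strategies the article recommends.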

  10. Waste classification sampling plan

    International Nuclear Information System (INIS)

    Landsman, S.D.

    1998-01-01

The purpose of this sampling plan is to explain the method used to collect and analyze data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998

  11. Appendix F - Sample Contingency Plan

    Science.gov (United States)

    This sample Contingency Plan in Appendix F is intended to provide examples of contingency planning as a reference when a facility determines that the required secondary containment is impracticable, pursuant to 40 CFR §112.7(d).

  12. Visual Sample Plan Version 7.0 User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Matzke, Brett D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Newburn, Lisa LN [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hathaway, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bramer, Lisa M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wilson, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dowson, Scott T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sego, Landon H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Pulsipher, Brent A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-03-01

This user's guide describes Visual Sample Plan (VSP) Version 7.0 and provides instructions for using the software. VSP selects the appropriate number and location of environmental samples to ensure that the results of statistical tests performed to provide input to risk decisions have the required confidence and performance. VSP Version 7.0 provides sample-size equations or algorithms needed by specific statistical tests appropriate for specific environmental sampling objectives. It also provides data quality assessment and statistical analysis functions to support evaluation of the data and determine whether the data support decisions regarding sites suspected of contamination. The easy-to-use program is highly visual and graphic. VSP runs on personal computers with Microsoft Windows operating systems (XP, Vista, Windows 7, and Windows 8). Designed primarily for project managers and users without expertise in statistics, VSP is applicable to two- and three-dimensional populations to be sampled (e.g., rooms and buildings, surface soil, a defined layer of subsurface soil, water bodies, and other similar applications) for studies of environmental quality. VSP is also applicable for designing sampling plans for assessing chem/rad/bio threat and hazard identification within rooms and buildings, and for designing geophysical surveys for unexploded ordnance (UXO) identification.

  13. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  14. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    40 CFR 80.127, Protection of Environment (2010-07-01): ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), REGULATION OF FUELS AND FUEL ADDITIVES, Attest Engagements, § 80.127 Sample size guidelines. In performing the...

  15. Statistical sampling plans

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    In auditing and in inspection, one selects a number of items by some set of procedures and performs measurements which are compared with the operator's values. This session considers the problem of how to select the samples to be measured, and what kinds of measurements to make. In the inspection situation, the ultimate aim is to independently verify the operator's material balance. The effectiveness of the sample plan in achieving this objective is briefly considered. The discussion focuses on the model plant

  16. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    Science.gov (United States)

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
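Under random sampling, the detection sample sizes quoted above follow from the binomial relation P(detect) = 1 - (1 - p)^n. A minimal sketch (my own illustration, not the authors' simulation code):

```python
import math

def detection_sample_size(freq, prob=0.95):
    """Smallest random-sample size n such that the probability of
    capturing at least one resistant individual, 1 - (1 - freq)**n,
    is at least `prob`, for resistance frequency `freq`."""
    return math.ceil(math.log(1 - prob) / math.log(1 - freq))

for f in (0.01, 0.10, 0.20):
    print(f, detection_sample_size(f))  # 299, 29 and 14, respectively
```

These exact minima sit slightly below the figures reported in the abstract (e.g., 300 at a 1% frequency), which appear to be rounded up to convenient sampling units.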

  17. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Fixed-Precision Sequential Sampling Plans for Estimating Alfalfa Caterpillar, Colias lesbia, Egg Density in Alfalfa, Medicago sativa, Fields in Córdoba, Argentina

    Science.gov (United States)

    Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma

    2013-01-01

    The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using the Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plans at precision levels of 0.10 and 0.25 (SE/mean). For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
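Green's fixed-precision plan referenced above draws a stop line for the cumulative count from Taylor's power law (s² = a·mᵇ). A hedged sketch; the coefficients a, b and precision D below are placeholders, not values estimated in the study:

```python
def green_stop_line(n, a, b, D):
    """Green's (1970) stop line for fixed-precision sequential sampling:
    T_n = (D**2 / a)**(1 / (b - 2)) * n**((b - 1) / (b - 2)),
    where s^2 = a * m**b (Taylor's power law) and D is the target
    precision (SE/mean). Sampling stops once the running total of
    individuals counted across n sample units crosses T_n."""
    return (D ** 2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

# hypothetical Taylor coefficients a = 2.0, b = 1.5 at precision D = 0.25
for n in (10, 20, 40):
    print(n, round(green_stop_line(n, 2.0, 1.5, 0.25), 1))  # 102.4, 51.2, 25.6
```

In the field, the sampler keeps adding sample units until the cumulative count meets the stop line for the current n, at which point the density estimate has roughly the desired precision.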

  19. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
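The diagnostic logic can be reproduced in a toy simulation (my own illustration, not the authors' code): if only significant results are published, small studies must show large observed effects to clear the p < .05 bar, which induces a negative correlation between effect size and sample size even when the true effect is constant.

```python
import numpy as np

rng = np.random.default_rng(1)
true_d = 0.3                # constant true effect (Cohen's d) in every study
published_d, published_n = [], []

for _ in range(20000):
    n = int(rng.integers(10, 200))        # per-group sample size
    g1 = rng.normal(0.0, 1.0, n)
    g2 = rng.normal(true_d, 1.0, n)
    d = (g2.mean() - g1.mean()) / np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
    z = d * np.sqrt(n / 2)                # approximate two-sample test statistic
    if abs(z) > 1.96:                     # "published" only when significant
        published_d.append(abs(d))
        published_n.append(n)

r = np.corrcoef(published_d, published_n)[0, 1]
print(round(r, 2))  # negative correlation induced purely by selective publication
```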

  20. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us judge whether the results published in medical papers rest on a suitable design and whether the conclusions are proper given the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which formula to use, we must define the type of study: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic statistical concepts and describe four simple examples of sample size estimation.
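For the comparative case mentioned above, the standard normal-approximation formula for two means is n = 2(z₁₋α/₂ + z₁₋β)²σ²/δ² per group. A quick sketch using only the standard library:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-sample z-test of means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})**2 * sigma**2 / delta**2."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return math.ceil(2 * (za + zb) ** 2 * (sigma / delta) ** 2)

print(n_per_group(delta=5, sigma=10))  # detect a 5-unit difference, SD 10 -> 63
```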

  1. Hanford site transuranic waste sampling plan

    International Nuclear Information System (INIS)

    GREAGER, T.M.

    1999-01-01

    This sampling plan (SP) describes the selection of containers for sampling of homogeneous solids and soil/gravel and for visual examination of transuranic and mixed transuranic (collectively referred to as TRU) waste generated at the U.S. Department of Energy (DOE) Hanford Site. The activities described in this SP will be conducted under the Hanford Site TRU Waste Certification Program. This SP is designed to meet the requirements of the Transuranic Waste Characterization Quality Assurance Program Plan (CAO-94-1010) (DOE 1996a) (QAPP), site-specific implementation of which is described in the Hanford Site Transuranic Waste Characterization Program Quality Assurance Project Plan (HNF-2599) (Hanford 1998b) (QAPP). The QAPP defines the quality assurance (QA) requirements and protocols for TRU waste characterization activities at the Hanford Site. In addition, the QAPP identifies responsible organizations, describes required program activities, outlines sampling and analysis strategies, and identifies procedures for characterization activities. The QAPP identifies specific requirements for TRU waste sampling plans. Table 1-1 presents these requirements and indicates sections in this SP where these requirements are addressed

  2. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by Kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for Kernel and log-normal estimates, and in GNU Octave for log-normal estimates (Kernel density estimates are not robust enough in GNU octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
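The Benjamini-Yekutieli step mentioned above is simple enough to sketch directly (a minimal implementation of my own, not the DSD toolbox code):

```python
def benjamini_yekutieli(pvals, alpha=0.05):
    """Benjamini-Yekutieli FDR control under arbitrary dependence:
    reject the k smallest p-values, where k is the largest rank i with
    p_(i) <= i * alpha / (m * c(m)) and c(m) = sum_{j=1}^{m} 1/j."""
    m = len(pvals)
    c_m = sum(1.0 / j for j in range(1, m + 1))
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / (m * c_m):
            k = rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

print(benjamini_yekutieli([0.001, 0.02, 0.04, 0.30]))  # [True, False, False, False]
```

Counting the rejections for simulated data sets of increasing size is then the basis for the "at least one" (biomarker discovery) and "maximum" (metabolic exploration) sample size criteria the review describes.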

  3. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  4. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  5. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
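The kind of numerical simulation gridsampler performs can be sketched as a small Monte Carlo experiment; the category probabilities below are hypothetical, chosen only for illustration:

```python
import random

def expected_coverage(category_probs, n_people, attrs_per_person=7,
                      sims=2000, seed=42):
    """Monte Carlo estimate of the expected fraction of construct
    categories discovered when `n_people` participants each contribute
    `attrs_per_person` attributes drawn from `category_probs`."""
    rng = random.Random(seed)
    cats = list(range(len(category_probs)))
    draws = n_people * attrs_per_person
    total = 0.0
    for _ in range(sims):
        seen = set(rng.choices(cats, weights=category_probs, k=draws))
        total += len(seen) / len(cats)
    return total / sims

# hypothetical field with 8 common and 4 rare attribute categories
probs = [0.10] * 8 + [0.05] * 4
for n in (5, 10, 20):
    print(n, round(expected_coverage(probs, n), 2))
```

Plotting this curve against n, and stopping where it exceeds a target (say 95% of categories discovered), answers question (a); counting attributes per category in the same simulation answers (b).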

  6. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p …). Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
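A weighted inverse power law fit can be sketched with `scipy.optimize.curve_fit`. The pilot accuracies below are made-up numbers, and the model form y = a - b * x^(-c) is one common learning-curve parameterization, not necessarily the authors' exact specification:

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_power(x, a, b, c):
    """Learning-curve model: performance approaches plateau `a`
    as y = a - b * x**(-c) for training-set size x."""
    return a - b * x ** (-c)

# hypothetical pilot learning curve: accuracy at small training sizes
sizes = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
acc = np.array([0.71, 0.76, 0.80, 0.83, 0.85])
weights = np.sqrt(sizes)  # trust points estimated from larger samples more

params, _ = curve_fit(inv_power, sizes, acc, p0=[0.9, 1.0, 0.5],
                      sigma=1.0 / weights)
a, b, c = params
print(round(inv_power(5000.0, a, b, c), 3))  # predicted accuracy at n = 5000
```

Extrapolating the fitted curve tells you how many more annotated samples would be needed to reach a performance target, which is exactly the planning question the abstract addresses.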

  7. Appendix E - Sample Production Facility Plan

    Science.gov (United States)

    This sample Spill Prevention, Control and Countermeasure (SPCC) Plan in Appendix E is intended to provide examples and illustrations of how a production facility could address a variety of scenarios in its SPCC Plan.

  8. Visual Sample Plan (VSP) - FIELDS Integration

    Energy Technology Data Exchange (ETDEWEB)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Hassig, Nancy L.; Carlson, Deborah K.; Bing-Canar, John; Cooper, Brian; Roth, Chuck

    2003-04-19

    Two software packages, VSP 2.1 and FIELDS 3.5, are being used by environmental scientists to plan the number and type of samples required to meet project objectives, display those samples on maps, query a database of past sample results, produce spatial models of the data, and analyze the data in order to arrive at defensible decisions. VSP 2.0 is an interactive tool to calculate optimal sample size and optimal sample location based on user goals, risk tolerance, and variability in the environment and in lab methods. FIELDS 3.0 is a set of tools to explore the sample results in a variety of ways to make defensible decisions with quantified levels of risk and uncertainty. However, FIELDS 3.0 has a small sample design module. VSP 2.0, on the other hand, has over 20 sampling goals, allowing the user to input site-specific assumptions such as non-normality of sample results, separate variability between field and laboratory measurements, make two-sample comparisons, perform confidence interval estimation, use sequential search sampling methods, and much more. Over 1,000 copies of VSP are in use today. FIELDS is used in nine of the ten U.S. EPA regions, by state regulatory agencies, and most recently by several international countries. Both software packages have been peer-reviewed, enjoy broad usage, and have been accepted by regulatory agencies as well as site project managers as key tools to help collect data and make environmental cleanup decisions. Recently, the two software packages were integrated, allowing the user to take advantage of the many design options of VSP, and the analysis and modeling options of FIELDS. The transition between the two is simple for the user – VSP can be called from within FIELDS, automatically passing a map to VSP and automatically retrieving sample locations and design information when the user returns to FIELDS. This paper will describe the integration, give a demonstration of the integrated package, and give users download

  9. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference test for data with the design of one factor with two levels, including sample size estimation formulas and realization based on the formulas and the POWER procedure of SAS software for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will play a leading role for researchers to implement the repetition principle during the research design phase.

  10. Customer acquisition plan for a small-size entrepreneur

    OpenAIRE

    Puotiniemi, Tiia

    2014-01-01

    The aim of this thesis was to draw a customer acquisition plan and improve the marketing planning in the company. Therefore, the final result was the market research summary. Development of the marketing strategy planning was based on internal and external situation analyses. This thesis is concentrated on a small-size operator’s business and therefore the internet marketing was brought up with its profitable benefits and advantages. The thesis was written as an auxiliary guide to build u...

  11. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints and when the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size for the naive method without and with correlation adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
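When the endpoints are independent and all must be rejected, the overall power is the product of the per-endpoint powers, so each endpoint must be powered at power^(1/k). A sketch for the simpler z-test case (a formula-level illustration only; the paper's exact power function additionally adjusts for correlation and uses the TOST joint distribution):

```python
import math
from statistics import NormalDist

def n_coprimary_independent(delta, sigma, k, alpha=0.05, power=0.80):
    """Per-group n so that ALL k independent co-primary endpoints are
    rejected with overall probability `power` (two-sample z-test per
    endpoint). Each endpoint then needs individual power power**(1/k)."""
    z = NormalDist()
    per_endpoint = power ** (1 / k)
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(per_endpoint)
    return math.ceil(2 * (za + zb) ** 2 * (sigma / delta) ** 2)

print(n_coprimary_independent(5, 10, 1))  # single endpoint: 63
print(n_coprimary_independent(5, 10, 2))  # two endpoints: 83
```

Positive correlation between endpoints moves the required n back down toward the single-endpoint value, which is why ignoring it (the naive method) overstates the sample size.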

  12. A Fixed-Precision Sequential Sampling Plan for the Potato Tuberworm Moth, Phthorimaea operculella Zeller (Lepidoptera: Gelechiidae), on Potato Cultivars.

    Science.gov (United States)

    Shahbi, M; Rajabpour, A

    2017-08-01

    Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels of 0.25 and 0.1. The optimum sample size on Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 5083 to 1054 and 5100 to 1050 leaves for Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, with different resistance levels, were not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at the precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using the Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of the pest density with minimal effort.
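The Taylor's power law fit underlying such plans is an ordinary least-squares regression on the log scale: log s² = log a + b·log m. A self-contained sketch with made-up count data (the coefficients below are illustrative, not those estimated in the study):

```python
import math

def taylor_power_law(means, variances):
    """Fit Taylor's power law  s^2 = a * m**b  by least squares on
    log(variance) = log(a) + b * log(mean). Returns (a, b); b near 1
    indicates random (Poisson-like) dispersion, b > 1 aggregation."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# hypothetical per-date mean and variance of egg + larva counts per leaf
means = [0.2, 0.5, 1.0, 2.0, 4.0]
variances = [0.25, 0.55, 1.0, 1.9, 3.7]
a, b = taylor_power_law(means, variances)
print(round(b, 2))  # slope near 1, consistent with random dispersion
```

The fitted (a, b) then feed directly into Green's stop-line formula to give the cumulative-count thresholds the abstract reports.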

  13. Long-Term Ecological Monitoring Field Sampling Plan for 2007

    International Nuclear Information System (INIS)

    T. Haney; R. VanHorn

    2007-01-01

    This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for the subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality

  14. Long-Term Ecological Monitoring Field Sampling Plan for 2007

    Energy Technology Data Exchange (ETDEWEB)

    T. Haney

    2007-07-31

    This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for the subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality.

  15. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirement to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers, so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size.
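A widely cited benchmark for flaw-sample counts in NDE reliability work is the zero-failure binomial demonstration: the smallest n with pod**n <= 1 - confidence, which yields the familiar "29 of 29" rule for demonstrating 90% POD at 95% confidence. A quick background sketch (standard result, not a finding of this study):

```python
def demo_size(pod=0.90, confidence=0.95):
    """Smallest n such that n detections out of n attempts demonstrates
    POD >= pod at the given confidence level, i.e. the smallest n with
    pod**n <= 1 - confidence (zero-failure binomial demonstration)."""
    n = 1
    while pod ** n > 1 - confidence:
        n += 1
    return n

print(demo_size())          # 29: the classic "29 of 29" rule (90/95)
print(demo_size(pod=0.95))  # 59 flawless detections for a 95/95 demonstration
```

Fitting a full POD-versus-flaw-size curve, as discussed in the abstract, generally requires more specimens than this single-point demonstration, which is precisely why the sample size question matters.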

  16. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    The objectives of this study were to determine the sample size (i.e., the number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m², and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
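Sample sizes of this precision-based kind come from n = (z₁₋α/₂ · CV / E)², where CV is the coefficient of variation and E the acceptable half-width of the CI as a percentage of the mean. A sketch with a hypothetical CV (the pigeonpea CVs themselves are not given in the abstract):

```python
import math
from statistics import NormalDist

def n_for_precision(cv, error_pct, alpha=0.05):
    """n so that the 100*(1-alpha)% CI half-width equals `error_pct`
    percent of the mean, given a coefficient of variation `cv`
    (sd/mean, in percent): n = (z_{1-alpha/2} * cv / error_pct)**2."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return math.ceil((z * cv / error_pct) ** 2)

print(n_for_precision(cv=40, error_pct=6))  # hypothetical CV of 40% -> 171 plants
```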

  17. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  18. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not conditional on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, the probability of the trial detecting a difference when a real difference between treatments exists, depends strongly on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  19. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small-size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small-size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small-size PuO₂ powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  20. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of the abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of the median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900, depending on the body region and the dose descriptor considered. For sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
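The CI95/med idea above lends itself to resampling: draw surveys of a given size from a dose population and measure how wide the 95% interval of the sample mean is relative to the population median. A sketch on a synthetic lognormal "DLP" population — illustrative only, not the authors' data — showing how the precision tightens as sample size grows:

```python
import random
import statistics

def ci95_over_median(dlp_values, sample_size, reps=2000, rng=None):
    """95% interval width of the sample mean, as a % of the population
    median (the CI95/med descriptor), estimated by resampling surveys
    of `sample_size` acquisitions with replacement."""
    rng = rng or random.Random(0)
    med = statistics.median(dlp_values)
    means = sorted(
        statistics.fmean(rng.choices(dlp_values, k=sample_size))
        for _ in range(reps)
    )
    lo, hi = means[int(0.025 * reps)], means[int(0.975 * reps)]
    return 100.0 * (hi - lo) / med

# Hypothetical right-skewed DLP population (mGy·cm):
rng = random.Random(42)
pop = [rng.lognormvariate(6.0, 0.5) for _ in range(20000)]
for n in (10, 100, 900):
    print(n, round(ci95_over_median(pop, n), 1))  # width shrinks with n
```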

  1. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of the abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of the median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900, depending on the body region and the dose descriptor considered. For sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  2. Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Marutzky, Sam; Farnham, Irene

    2014-10-01

    The purpose of the Nevada National Security Site (NNSS) Integrated Sampling Plan (referred to herein as the Plan) is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office (NNSA/NFO) Underground Test Area (UGTA) Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing the extent of groundwater contamination from underground nuclear testing. This Plan identifies locations to be sampled by corrective action unit (CAU) and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well-purging requirements, detection levels, and accuracy requirements; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling of interest to UGTA. This Plan does not address compliance with requirements for wells that supply the NNSS public water system or wells involved in a permitted activity.

  3. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high-diversity than in low-diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N₀ to N∞ ...

  4. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
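The abstract gives the general shape of such formulas without reproducing them. One common variation of this kind of formula — with the variance of the log rate ratio evaluated under the alternative; the paper derives and compares several variations, and this sketch is not claimed to be any specific one of them — can be written as:

```python
import math
from statistics import NormalDist

def nb_sample_size(r0, r1, t, k, alpha=0.05, power=0.9):
    """Per-group sample size for testing rate ratio r1/r0 under a
    negative binomial model with dispersion k and exposure time t.

    Alternative-variance form (a common variation, shown as a sketch):
        n = (z_{1-a/2} + z_{1-b})^2 * (1/(t*r0) + 1/(t*r1) + 2k)
            / ln(r1/r0)^2
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    var = 1 / (t * r0) + 1 / (t * r1) + 2 * k
    return math.ceil((za + zb) ** 2 * var / math.log(r1 / r0) ** 2)

# Hypothetical trial: control rate 0.8 events/yr, a 30% reduction,
# one year of exposure, dispersion k = 0.4:
print(nb_sample_size(0.8, 0.56, 1.0, 0.4))   # → 317 per group
```

Note how the dispersion parameter k and exposure time t enter explicitly, which is the characteristic the abstract highlights.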

  5. Final Sampling and Analysis Plan for Background Sampling, Fort Sheridan, Illinois

    National Research Council Canada - National Science Library

    1995-01-01

    .... This Background Sampling and Analysis Plan (BSAP) is designed to address this issue through the collection of additional background samples at Fort Sheridan to support the statistical analysis and the Baseline Risk Assessment (BRA...

  6. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design, or the crossover design. Specifically, it introduced the formulas for sample size and testing power estimation under these three designs, their realization based on the formulas and on the POWER procedure of SAS software, and elaborated with examples, which will benefit researchers in implementing the repetition principle.
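For the simplest of the designs mentioned, the familiar normal-approximation formula for a two-sample comparison of means illustrates the pattern such formulas follow (a generic sketch, not the SAS POWER procedure itself):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided
    two-sample comparison of means:
        n = 2 * sigma^2 * (z_{1-a/2} + z_{1-b})^2 / delta^2
    """
    z = NormalDist().inv_cdf
    return math.ceil(2 * sigma**2 * (z(1 - alpha / 2) + z(power))**2
                     / delta**2)

# Detecting a half-SD difference with 80% power at alpha = 0.05:
print(n_per_group(delta=0.5, sigma=1.0))   # → 63 per group
```

The same (z_alpha + z_beta)² kernel reappears in the paired and crossover versions, with the variance term adjusted for the design.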

  7. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    The frictional behaviour of rocks, from the initial stage of loading to final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention, mainly because of limitations in rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane was captured for analysis, with particular interest in sample-size dependency. Several important phenomena were observed: a) the rate of transition from brittleness to ductility is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as the result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research into rock failure mechanisms.

  8. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    … the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  9. Integrated sampling and analysis plan for samples measuring >10 mrem/hour

    International Nuclear Information System (INIS)

    Haller, C.S.

    1992-03-01

    This integrated sampling and analysis plan was prepared to assist in the planning and scheduling of Hanford Site sampling and analytical activities for all waste characterization samples that measure greater than 10 mrem/hour. This report also satisfies the requirements of the renegotiated Interim Milestone M-10-05 of the Hanford Federal Facility Agreement and Consent Order (the Tri-Party Agreement). For purposes of comparing the various analytical needs with the Hanford Site laboratory capabilities, the analytical requirements of the various programs were normalized by converting the required laboratory effort for each type of sample to a common unit of work, the standard analytical equivalency unit (AEU). The AEU approximates the amount of laboratory resources required to perform an extensive suite of analyses on five core segments individually, plus one additional suite of analyses on a composite sample derived from a mixture of the five core segments, and to prepare a validated RCRA-type data package.

  10. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and, inherently, a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size, at a prespecified power (say, 0.80), for an increasing number of constraints. To obtain the sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
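The simplest special case of this gain is easy to verify without simulation: a single sign constraint (beta1 > 0) turns a two-sided test into a one-sided one, and the required sample size shrinks by the ratio of the (z_alpha + z_beta)² factors. The full order-constrained tables need Monte Carlo runs as in the article; this is only the one-constraint sanity check:

```python
from statistics import NormalDist

def n_factor(alpha_sided, power=0.80):
    """(z_alpha + z_beta)^2 — proportional to the required sample size
    for a fixed effect size in a normal-approximation test."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha_sided) + z(power)) ** 2

# Unconstrained two-sided test at alpha = 0.05 vs. one sign constraint:
two_sided = n_factor(0.025)
one_sided = n_factor(0.05)
print(round(100 * (1 - one_sided / two_sided)))   # → 21 (% fewer subjects)
```

Even one directional constraint buys roughly a fifth of the sample; complete orderings of several parameters, as the tables show, buy considerably more.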

  11. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    OpenAIRE

    Heckmann, Mark; Burk, Lukas

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...

  12. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre- and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra-conservative' and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced by at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
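Under compound symmetry the quoted worst-case savings can be reproduced in closed form. Assuming the standard result for ANCOVA on the mean of k follow-ups with baseline adjustment — a sketch of the compound-symmetry case the abstract describes, not the paper's full derivation — the variance factor f(rho) = (1 + (k−1)rho)/k − rho² is maximised at rho = (k−1)/(2k):

```python
def conservative_factor(k):
    """Worst-case (over rho) variance factor, relative to a two-sample
    t-test on a single follow-up, for ANCOVA on the mean of k
    follow-ups with baseline adjustment under compound symmetry:
        f(rho) = (1 + (k - 1) * rho) / k - rho**2
    maximised at rho = (k - 1) / (2 * k)."""
    rho = (k - 1) / (2 * k)
    return (1 + (k - 1) * rho) / k - rho**2

for k in (2, 3, 4):
    saving = round(100 * (1 - conservative_factor(k)))
    print(k, saving)   # → 2 44, 3 56, 4 61 — matching the abstract
```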

  13. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

    Highlights: • A metering cost minimisation model incorporating lamp population decay is built to optimise the sampling plan of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size over the crediting period. • The required 90/10 sampling accuracy criterion is satisfied for each CDM monitoring report. - Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under sampling accuracy constraints for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Small-scale (SSC) CDM EE lighting projects usually expect a crediting period of 10 years, over which the lighting population decays. The SSC CDM sampling guideline requires that the monitored key parameters for quantifying carbon emission reductions satisfy a sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are decided either by professional judgment or by rule of thumb, without any optimisation. Lighting samples are randomly selected and their energy consumption is monitored continuously by power meters. In this study, the sample size determination problem is formulated as a metering cost minimisation model that incorporates a linear lighting decay model, as given by the CDM guideline AMS-II.J. The 90/10 criterion is formulated as a set of constraints on the metering cost minimisation problem. Optimal solutions minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well.
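The 90/10 criterion itself reduces to a standard survey-sampling calculation with a finite-population correction; the paper's contribution is to minimise metering cost across reporting periods as the lamp population decays. A minimal sketch of the per-period constraint only, with hypothetical project numbers (the CV and population below are assumptions for illustration):

```python
import math

def sample_size_90_10(cv, population):
    """Sample size meeting the CDM 90/10 criterion (90% confidence,
    10% relative precision) for estimating a mean, with
    finite-population correction. `cv` is the coefficient of
    variation of the metered parameter (e.g., lamp energy use)."""
    z_90 = 1.645                       # two-sided 90% confidence
    n0 = (z_90 * cv / 0.10) ** 2       # infinite-population size
    return math.ceil(n0 / (1 + n0 / population))

# One hypothetical reporting year: 20,000 surviving lamps, CV = 0.5.
print(sample_size_90_10(0.5, 20000))   # → 68
# As the population decays, the correction lowers the required n:
print(sample_size_90_10(0.5, 100))     # → 41
```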

  14. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.

  15. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures, where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) of sample sizing for the scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to the sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  16. Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Farnham, Irene

    2018-03-01

    The purpose is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the DOE/EM Nevada Program’s UGTA Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP) (NNSA/NFO, 2015); Federal Facility Agreement and Consent Order (FFACO) (1996, as amended); and DOE Order 458.1, Radiation Protection of the Public and the Environment (DOE, 2013). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing both the extent of groundwater contamination from underground nuclear testing and impact of testing on water quality in downgradient communities. This Plan identifies locations to be sampled by CAU and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well purging, detection levels, and accuracy requirements/recommendations; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling analytes of interest to UGTA. Information used in the Plan development—including the rationale for selection of wells, sampling frequency, and the analytical suite—is discussed under separate cover (N-I, 2014) and is not reproduced herein. This Plan does not address compliance for those wells involved in a permitted activity. Sampling and analysis requirements associated with these wells are described in their respective permits and are discussed in NNSS environmental reports (see Section 5.2). In addition, sampling for UGTA CAUs that are in the Closure Report (CR) stage are not included in this Plan. Sampling requirements for these CAUs are described in the CR

  17. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
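The two ingredients described above — a pseudo-F partition of a distance matrix and simulation-based power estimation — can be sketched end to end in toy form. This uses Euclidean distances on simulated multivariate points with a mean-shift alternative; the package's distance-matrix simulator is more general (it works directly on within-group distance parameters and ω²), so treat this purely as an illustration of the mechanics:

```python
import math
import random

def pseudo_f(dist, labels):
    """PERMANOVA pseudo-F from a pairwise distance matrix: total vs
    within-group sums of squared distances (Anderson-style partition)."""
    n = len(labels)
    total = sum(dist[i][j] ** 2 for i in range(n) for j in range(i + 1, n)) / n
    within = 0.0
    groups = set(labels)
    for g in groups:
        idx = [i for i in range(n) if labels[i] == g]
        within += sum(dist[i][j] ** 2
                      for i in idx for j in idx if i < j) / len(idx)
    a = len(groups)
    return ((total - within) / (a - 1)) / (within / (n - a))

def permanova_power(shift, n_per_group=10, dims=5, alpha=0.05,
                    n_sims=100, n_perms=99, seed=1):
    """Monte Carlo power of the PERMANOVA permutation test for a mean
    shift between two groups of multivariate-normal samples."""
    rng = random.Random(seed)
    labels = [0] * n_per_group + [1] * n_per_group
    hits = 0
    for _ in range(n_sims):
        pts = [[rng.gauss(shift if lab else 0.0, 1.0) for _ in range(dims)]
               for lab in labels]
        dist = [[math.dist(p, q) for q in pts] for p in pts]
        f_obs = pseudo_f(dist, labels)
        perm = labels[:]
        worse = 0
        for _ in range(n_perms):
            rng.shuffle(perm)           # permute labels, reuse distances
            if pseudo_f(dist, perm) >= f_obs:
                worse += 1
        if (worse + 1) / (n_perms + 1) <= alpha:
            hits += 1
    return hits / n_sims

print(permanova_power(0.0))   # ~alpha under the null
print(permanova_power(1.0))   # high power for a strong group effect
```

To size a study, one would sweep `n_per_group` until the estimated power crosses the target, which is exactly the workflow the package automates.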

  18. Use of sales and operations planning in small and medium-sized enterprises

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2013-03-01

    Background: Increasing competitiveness in the market and customer expectations related to shorter deadlines and lower prices of products and services force companies to improve the efficiency of their internal processes. The integration of planning processes is one possible way to achieve this aim, and the SOP model (Sales and Operations Planning) is a method to implement it. The study made it possible to identify ways of implementing the sales and operations planning process in small and medium-sized enterprises. Material and methods: The study was conducted in companies from different industries. The research method was in-depth interviews with managers of the companies or with persons occupying management positions in the process of implementing sales and operations planning. Results: During the survey, 10 companies were asked about their use of sales and operations planning, its elements, and the organizational aspects of its development. Conclusions: The use of a sales and operations plan is closely dependent on the size of the company and its position in the supply chain. Small enterprises are not interested in the integration of the planning process due to the small scale of their operations and the centralization of decision-making. Medium-sized enterprises, due to the increased complexity of their planning processes, see the benefits of integrating them in the SOP model.

  19. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some, but inconclusive, evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  20. 42 CFR 431.814 - Sampling plan and procedures.

    Science.gov (United States)

    2010-10-01

42 Public Health 4 2010-10-01 false Sampling plan and procedures. 431.814 Section 431... reliability of the reduced sample. (4) The sample selection procedure. Systematic random sampling is... sampling, and yield estimates with the same or better precision than achieved in systematic random sampling...

  1. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

Full Text Available A technique was developed for constructing adequate mathematical models from small-size passive samples, under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be obtained.

  2. UMTRA project water sampling and analysis plan, Gunnison, Colorado

    International Nuclear Information System (INIS)

    1994-06-01

    This water sampling and analysis plan summarizes the results of previous water sampling activities and the plan for water sampling activities for calendar year 1994. A buffer zone monitoring plan is included as an appendix. The buffer zone monitoring plan is designed to protect the public from residual contamination that entered the ground water as a result of former milling operations. Surface remedial action at the Gunnison Uranium Mill Tailings Remedial Action Project site began in 1992; completion is expected in 1995. Ground water and surface water will be sampled semiannually in 1994 at the Gunnison processing site (GUN-01) and disposal site (GUN-08). Results of previous water sampling at the Gunnison processing site indicate that ground water in the alluvium is contaminated by the former uranium processing activities. Background ground water conditions have been established in the uppermost aquifer (Tertiary gravels) at the Gunnison disposal site. The monitor well locations provide a representative distribution of sampling points to characterize ground water quality and ground water flow conditions in the vicinity of the sites. The list of analytes has been modified with time to reflect constituents that are related to uranium processing activities and the parameters needed for geochemical evaluation. Water sampling will be conducted at least semiannually during and one year following the period of construction activities, to comply with the ground water protection strategy discussed in the remedial action plan (DOE, 1992a)

  3. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
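The closed-form variance formula the article alludes to can be turned into a back-of-the-envelope sample size calculation for etiological research focused on a single slope. The sketch below is illustrative, not the article's own code: the function name `n_for_slope` and its defaults are assumptions, and it uses the simple-regression result Var(b1) = σ²/(n·Var(x)).

```python
from math import ceil
from statistics import NormalDist

def n_for_slope(beta1, sigma_resid, sd_x, alpha=0.05, power=0.80):
    """Approximate n to detect a slope beta1 in simple linear regression,
    using the closed-form variance Var(b1) = sigma^2 / (n * Var(x))."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = z.inv_cdf(power)           # power quantile
    n = (z_a + z_b) ** 2 * sigma_resid ** 2 / (beta1 ** 2 * sd_x ** 2)
    return ceil(n)

print(n_for_slope(beta1=0.3, sigma_resid=1.0, sd_x=1.0))  # 88
```

The heuristic value is visible directly: halving the residual SD or doubling the spread of X cuts the required n by a factor of four.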

  4. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  5. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  6. Sample Lesson Plans. Management for Effective Teaching.

    Science.gov (United States)

    Fairfax County Public Schools, VA. Dept. of Instructional Services.

    This guide is part of the Management for Effective Teaching (MET) support kit, a pilot project developed by the Fairfax County (Virginia) Public Schools to assist elementary school teachers in planning, managaing, and implementing the county's curriculum, Program of Studies (POS). In this guide, a sample lesson plan of a teaching-learning activity…

  7. Sample management implementation plan: Salt Repository Project

    International Nuclear Information System (INIS)

    1987-01-01

    The purpose of the Sample Management Implementation Plan is to define management controls and building requirements for handling materials collected during the site characterization of the Deaf Smith County, Texas, site. This work will be conducted for the US Department of Energy Salt Repository Project Office (SRPO). The plan provides for controls mandated by the US Nuclear Regulatory Commission and the US Environmental Protection Agency. Salt Repository Project (SRP) Sample Management will interface with program participants who request, collect, and test samples. SRP Sample Management will be responsible for the following: (1) preparing samples; (2) ensuring documentation control; (3) providing for uniform forms, labels, data formats, and transportation and storage requirements; and (4) identifying sample specifications to ensure sample quality. The SRP Sample Management Facility will be operated under a set of procedures that will impact numerous program participants. Requesters of samples will be responsible for definition of requirements in advance of collection. Sample requests for field activities will be approved by the SRPO, aided by an advisory group, the SRP Sample Allocation Committee. This document details the staffing, building, storage, and transportation requirements for establishing an SRP Sample Management Facility. Materials to be managed in the facility include rock core and rock discontinuities, soils, fluids, biota, air particulates, cultural artifacts, and crop and food stuffs. 39 refs., 3 figs., 11 tabs

  8. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  9. Tumor Size on Abdominal MRI Versus Pathologic Specimen in Resected Pancreatic Adenocarcinoma: Implications for Radiation Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Hall, William A., E-mail: whall4@emory.edu [Department of Radiation Oncology, Emory University, Atlanta, Georgia (United States); Winship Cancer Institute, Emory University, Atlanta, Georgia (United States); Mikell, John L. [Department of Radiation Oncology, Emory University, Atlanta, Georgia (United States); Winship Cancer Institute, Emory University, Atlanta, Georgia (United States); Mittal, Pardeep [Department of Radiology, Emory University, Atlanta, Georgia (United States); Colbert, Lauren [Department of Radiation Oncology, Emory University, Atlanta, Georgia (United States); Prabhu, Roshan S. [Department of Radiation Oncology, Emory University, Atlanta, Georgia (United States); Winship Cancer Institute, Emory University, Atlanta, Georgia (United States); Kooby, David A. [Department of Surgery, Emory University, Atlanta, Georgia (United States); Winship Cancer Institute, Emory University, Atlanta, Georgia (United States); Nickleach, Dana [Biostatistics and Bioinformatics Shared Resource, Emory University, Atlanta, Georgia (United States); Winship Cancer Institute, Emory University, Atlanta, Georgia (United States); Hanley, Krisztina [Department of Pathology, Emory University, Atlanta, Georgia (United States); Sarmiento, Juan M. [Department of Surgery, Emory University, Atlanta, Georgia (United States); Ali, Arif N.; Landry, Jerome C. [Department of Radiation Oncology, Emory University, Atlanta, Georgia (United States); Winship Cancer Institute, Emory University, Atlanta, Georgia (United States)

    2013-05-01

    Purpose: We assessed the accuracy of abdominal magnetic resonance imaging (MRI) for determining tumor size by comparing the preoperative contrast-enhanced T1-weighted gradient echo (3-dimensional [3D] volumetric interpolated breath-hold [VIBE]) MRI tumor size with pathologic specimen size. Methods and Materials: The records of 92 patients who had both preoperative contrast-enhanced 3D VIBE MRI images and detailed pathologic specimen measurements were available for review. Primary tumor size from the MRI was independently measured by a single diagnostic radiologist (P.M.) who was blinded to the pathology reports. Pathologic tumor measurements from gross specimens were obtained from the pathology reports. The maximum dimensions of tumor measured in any plane on the MRI and the gross specimen were compared. The median difference between the pathology sample and the MRI measurements was calculated. A paired t test was conducted to test for differences between the MRI and pathology measurements. The Pearson correlation coefficient was used to measure the association of disparity between the MRI and pathology sizes with the pathology size. Disparities relative to pathology size were also examined and tested for significance using a 1-sample t test. Results: The median patient age was 64.5 years. The primary site was pancreatic head in 81 patients, body in 4, and tail in 7. Three patients were American Joint Commission on Cancer stage IA, 7 stage IB, 21 stage IIA, 58 stage IIB, and 3 stage III. The 3D VIBE MRI underestimated tumor size by a median difference of 4 mm (range, −34-22 mm). The median largest tumor dimensions on MRI and pathology specimen were 2.65 cm (range, 1.5-9.5 cm) and 3.2 cm (range, 1.3-10 cm), respectively. Conclusions: Contrast-enhanced 3D VIBE MRI underestimates tumor size by 4 mm when compared with pathologic specimen. Advanced abdominal MRI sequences warrant further investigation for radiation therapy planning in pancreatic adenocarcinoma before

  10. Tumor Size on Abdominal MRI Versus Pathologic Specimen in Resected Pancreatic Adenocarcinoma: Implications for Radiation Treatment Planning

    International Nuclear Information System (INIS)

    Hall, William A.; Mikell, John L.; Mittal, Pardeep; Colbert, Lauren; Prabhu, Roshan S.; Kooby, David A.; Nickleach, Dana; Hanley, Krisztina; Sarmiento, Juan M.; Ali, Arif N.; Landry, Jerome C.

    2013-01-01

    Purpose: We assessed the accuracy of abdominal magnetic resonance imaging (MRI) for determining tumor size by comparing the preoperative contrast-enhanced T1-weighted gradient echo (3-dimensional [3D] volumetric interpolated breath-hold [VIBE]) MRI tumor size with pathologic specimen size. Methods and Materials: The records of 92 patients who had both preoperative contrast-enhanced 3D VIBE MRI images and detailed pathologic specimen measurements were available for review. Primary tumor size from the MRI was independently measured by a single diagnostic radiologist (P.M.) who was blinded to the pathology reports. Pathologic tumor measurements from gross specimens were obtained from the pathology reports. The maximum dimensions of tumor measured in any plane on the MRI and the gross specimen were compared. The median difference between the pathology sample and the MRI measurements was calculated. A paired t test was conducted to test for differences between the MRI and pathology measurements. The Pearson correlation coefficient was used to measure the association of disparity between the MRI and pathology sizes with the pathology size. Disparities relative to pathology size were also examined and tested for significance using a 1-sample t test. Results: The median patient age was 64.5 years. The primary site was pancreatic head in 81 patients, body in 4, and tail in 7. Three patients were American Joint Commission on Cancer stage IA, 7 stage IB, 21 stage IIA, 58 stage IIB, and 3 stage III. The 3D VIBE MRI underestimated tumor size by a median difference of 4 mm (range, −34-22 mm). The median largest tumor dimensions on MRI and pathology specimen were 2.65 cm (range, 1.5-9.5 cm) and 3.2 cm (range, 1.3-10 cm), respectively. Conclusions: Contrast-enhanced 3D VIBE MRI underestimates tumor size by 4 mm when compared with pathologic specimen. Advanced abdominal MRI sequences warrant further investigation for radiation therapy planning in pancreatic adenocarcinoma before

  11. Optimal Multi-Level Lot Sizing for Requirements Planning Systems

    OpenAIRE

    Earle Steinberg; H. Albert Napier

    1980-01-01

The widespread use of advanced information systems such as Material Requirements Planning (MRP) has significantly altered the practice of dependent demand inventory management. Recent research has focused on the development of multi-level lot sizing heuristics for such systems. In this paper, we develop an optimal procedure for the multi-period, multi-product, multi-level lot sizing problem by modeling the system as a constrained generalized network with fixed charge arcs and side constraints. T...

  12. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is however proven that when observations follow a normal distribution and the interim result show promise, meaning that the conditional power exceeds 50%, type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
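The notion of a "promising" interim result can be illustrated with the standard conditional-power formula for normally distributed observations, evaluated under the observed trend. This is a generic sketch, not the specific criterion derived in the article; the function name `conditional_power` and its signature are assumptions.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z1, t, z_alpha=1.959964):
    """Conditional power given interim z-statistic z1 at information
    fraction t, assuming the observed trend (drift z1 / sqrt(t)) holds
    for the remainder of the trial."""
    drift = z1 / sqrt(t)
    mean_final = sqrt(t) * z1 + (1 - t) * drift   # E[Z_final | z1]
    return norm_cdf((mean_final - z_alpha) / sqrt(1 - t))

# An interim z at the critical value is already "promising" (CP > 50%):
print(conditional_power(z1=1.96, t=0.5))
```

Under this convention, raising the sample size only when conditional power exceeds 50% is the protected region discussed in the abstract; the article's contribution is characterizing how far below 50% one may safely go.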

  13. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated and, some say, little understood. A sample of PhD studies using qualitative approaches, with qualitative interviews as the method of data collection, was taken from theses.com and their contents analysed for sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  14. Specified assurance level sampling procedure

    International Nuclear Information System (INIS)

    Willner, O.

    1980-11-01

    In the nuclear industry design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus, making available to the user a wide choice of plans all designed to comply with a stated assurance level
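The idea of a minimum sample size at a stated assurance level can be sketched for attribute sampling: for a given acceptance number c, find the smallest n such that a lot at the limiting defect rate would be accepted with probability at most 1 − SAL. The function names below are illustrative assumptions; the actual SAL tables may differ in detail.

```python
from math import comb

def accept_prob(n, c, p):
    """P(accept) = P(at most c defectives in a sample of n), defect rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def min_sample_size(c, p_limit, assurance):
    """Smallest n for acceptance number c such that a lot at quality
    p_limit is accepted with probability <= 1 - assurance."""
    n = c + 1
    while accept_prob(n, c, p_limit) > 1 - assurance:
        n += 1
    return n

# c = 0 gives the minimum-sample plan; larger c trades sample size
# for tolerance of isolated defectives.
print(min_sample_size(c=0, p_limit=0.10, assurance=0.90))  # 22
```

This mirrors the procedure's structure: plans indexed by acceptance numbers 0 through 10, each yielding the minimum n that demonstrates the stated assurance level.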

  15. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs could be accepted by regulatory authorities across regions and countries as the primary sources of evidence to support global marketing drug approval simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes the effective therapies available to patients all over the world simultaneously. However, there are many challenges both operationally and scientifically in conducting a drug development globally. One of many important questions to answer for the design of a multiregional study is how to partition sample size into each individual region. In this paper, two systematic approaches are proposed for the sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  17. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera: Noctuidae).

    Science.gov (United States)

    Burkness, Eric C; Hutchison, W D

    2009-10-01

Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
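Wald's sequential probability ratio test underlying such binomial plans reduces to a pair of parallel decision lines in the (n, d) plane, where d is the count of infested plants among n sampled. A minimal sketch using the parameters reported above (lower boundary 0.05, upper boundary 0.15, alpha = beta = 0.1); the function name `sprt_boundaries` is an illustrative assumption.

```python
from math import log

def sprt_boundaries(p0, p1, alpha, beta):
    """Slope and intercepts of Wald's SPRT decision lines for binomial
    presence/absence sampling. Sampling continues while
    s*n + h0 < d < s*n + h1; cross the upper line -> treat,
    cross the lower line -> do not treat."""
    g = log(p1 / p0) + log((1 - p0) / (1 - p1))   # per-infested-plant log-LR spread
    s = log((1 - p0) / (1 - p1)) / g              # common slope of both lines
    h0 = log(beta / (1 - alpha)) / g              # lower (no-treat) intercept
    h1 = log((1 - beta) / alpha) / g              # upper (treat) intercept
    return s, h0, h1

s, h0, h1 = sprt_boundaries(p0=0.05, p1=0.15, alpha=0.10, beta=0.10)
print(s, h0, h1)
```

The slope s falls between the two boundary proportions, which is why the plan separates infestation levels clearly above 0.15 from those clearly below 0.05 with few samples.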

  18. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

ARL-RP-0528 ● AUG 2015 ● US Army Research Laboratory. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride.

  19. A Total Quality-Control Plan with Right-Sized Statistical Quality-Control.

    Science.gov (United States)

    Westgard, James O

    2017-03-01

    A new Clinical Laboratory Improvement Amendments option for risk-based quality-control (QC) plans became effective in January, 2016. Called an Individualized QC Plan, this option requires the laboratory to perform a risk assessment, develop a QC plan, and implement a QC program to monitor ongoing performance of the QC plan. Difficulties in performing a risk assessment may limit validity of an Individualized QC Plan. A better alternative is to develop a Total QC Plan including a right-sized statistical QC procedure to detect medically important errors. Westgard Sigma Rules provides a simple way to select the right control rules and the right number of control measurements. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log which may sometimes be difficult to obtain. In contrast, estimates of median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated where the change of interest is specified as difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test in a variety of scenarios and the method is applied to a real example in neurosurgery. Results The method attained a nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
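The calculation described can be sketched as follows: convert the medians to log-scale means via median = exp(mu), recover the log-scale variance sigma² from the untransformed variance v and median m (for a log-normal, v = (exp(sigma²) − 1)·exp(sigma²)·m²), then apply the usual two-sample normal-approximation formula. This is an illustrative reading of the approach, not the authors' code; the function and argument names are assumptions.

```python
from math import ceil, log, sqrt
from statistics import NormalDist

def n_per_group(median1, median2, variance, alpha=0.05, power=0.80):
    """Per-group n for a two-sample t-test on log-transformed data, with
    the effect specified as a difference in medians on the original scale
    and a common variance on the untransformed scale (log-normal model)."""
    delta = log(median2) - log(median1)           # log-scale mean difference
    # Solve v/m^2 = x^2 - x for x = exp(sigma^2):
    x = (1 + sqrt(1 + 4 * variance / median1 ** 2)) / 2
    sigma2 = log(x)                               # log-scale variance
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sigma2 / delta ** 2)

print(n_per_group(median1=1.0, median2=1.5, variance=1.0))  # 46
```

The practical advantage is exactly the one the abstract claims: the inputs (medians and an untransformed variance) are quantities clinicians can state directly, with no log-scale pre-specification needed.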

  1. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  2. The Impact of Desired Family Size Upon Family Planning Practices in Rural East Pakistan

    Science.gov (United States)

    Mosena, Patricia Wimberley

    1971-01-01

    Results indicated that women whose desired family size is equal to or less than their actual family size have significantly greater frequencies practicing family planning than women whose desired size exceeds their actual size. (Author)

  3. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  4. Feasible sampling plan for Bemisia tabaci control decision-making in watermelon fields.

    Science.gov (United States)

    Lima, Carlos Ho; Sarmento, Renato A; Pereira, Poliana S; Galdino, Tarcísio Vs; Santos, Fábio A; Silva, Joedna; Picanço, Marcelo C

    2017-11-01

The silverleaf whitefly Bemisia tabaci is one of the most important pests of watermelon fields worldwide. Conventional sampling plans are the starting point for the generation of decision-making systems of integrated pest management programs. The aim of this study was to determine a conventional sampling plan for B. tabaci in watermelon fields. The optimal leaf for B. tabaci adult sampling was the 6th most apical leaf. Direct counting was the best pest sampling technique. Crop pest densities fitted the negative binomial distribution and had a common aggregation parameter (K common). The sampling plan consisted of evaluating 103 samples per plot. This sampling plan took 56 min, cost US$ 2.22 per sampling, and had a 10% maximum evaluation error. The sampling plan determined in this study can be adopted by farmers because it enables the adequate evaluation of B. tabaci populations in watermelon fields (10% maximum evaluation error) and is a low-cost (US$ 2.22 per sampling), fast (56 min per sampling) and feasible (because it may be used in a standardized way throughout the crop cycle) technique. © 2017 Society of Chemical Industry.

  5. UMTRA project water sampling and analysis plan, Tuba City, Arizona

    International Nuclear Information System (INIS)

    1996-02-01

    Planned, routine ground water sampling activities at the U.S. Department of Energy (DOE) Uranium Mill Tailings Remedial Action (UMTRA) Project site in Tuba City, Arizona, are described in the following sections of this water sampling and analysis plan (WSAP). This plan identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the stations routinely monitored at the site. The ground water data are used for site characterization and risk assessment. The regulatory basis for routine ground water monitoring at UMTRA Project sites is derived from the U.S. Environmental Protection Agency (EPA) regulations in 40 CFR Part 192 (1994) and the final EPA standards of 1995 (60 FR 2854). Sampling procedures are guided by the UMTRA Project standard operating procedures (SOP) (JEG, n.d.), and the most effective technical approach for the site

  6. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  7. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as to analyze the scores related to foot health and overall health. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, the sizes of the feet and footwear were measured, and the scores were compared between the group wearing correctly sized shoes and the group wearing incorrectly sized shoes, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups, evaluated using a t-test for independent samples, were statistically significant (p < 0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  8. Impact of cone-beam computed tomography on implant planning and on prediction of implant size

    Energy Technology Data Exchange (ETDEWEB)

    Pedroso, Ludmila Assuncao de Mello; Silva, Maria Alves Garcia Santos, E-mail: ludmilapedroso@hotmail.com [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Fac. de Odontologia; Garcia, Robson Rodrigues [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Fac. de Odontologia. Dept. de Medicina Oral; Leles, Jose Luiz Rodrigues [Universidade Paulista (UNIP), Goiania, GO (Brazil). Fac. de Odontologia. Dept. de Cirurgia; Leles, Claudio Rodrigues [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Fac. de Odontologia. Dept. de Prevencao e Reabilitacao Oral

    2013-11-15

    The aim was to investigate the impact of cone-beam computed tomography (CBCT) on implant planning and on prediction of final implant size. Consecutive patients referred for implant treatment were submitted to clinical examination, panoramic (PAN) radiography and a CBCT exam. Initial planning of implant length and width was assessed based on clinical and PAN exams, and final planning, on CBCT exam to complement diagnosis. The actual dimensions of the implants placed during surgery were compared with those obtained during initial and final planning, using the McNemar test (p < 0.05). The final sample comprised 95 implants in 27 patients, distributed over the maxilla and mandible. Agreement in implant length was 50.5% between initial and final planning, and correct prediction of the actual implant length was 40.0% and 69.5%, using PAN and CBCT exams, respectively. Agreement in implant width assessment ranged from 69.5% to 73.7%. A paired comparison of the frequency of changes between initial or final planning and implant placement (McNemar test) showed a greater frequency of changes in initial planning for implant length (p < 0.001), but not for implant width (p = 0.850). The frequency of changes was not influenced by implant location at any stage of implant planning (chi-square test, p > 0.05). It was concluded that CBCT improves the ability to predict the actual implant length and reduces inaccuracy in surgical dental implant planning. (author)
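    A paired comparison of change frequencies, as in the McNemar tests reported above, can be sketched with an exact binomial implementation. This is an illustrative stand-alone version, not the authors' code, and the counts in the usage note are hypothetical.

    ```python
    from math import comb

    def mcnemar_exact(b, c):
        """Exact McNemar test for paired binary outcomes.

        b, c: the two discordant counts (e.g., cases where implant size
        changed at one planning stage but not the other).  Under H0 each
        discordant pair falls in either cell with p = 0.5, so the
        two-sided p-value is a Binomial(b + c, 0.5) tail probability.
        """
        n = b + c
        k = min(b, c)
        tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
        return min(1.0, 2 * tail)
    ```

    For example, with hypothetical discordant counts of 1 and 9, `mcnemar_exact(1, 9)` gives roughly 0.02, flagging an imbalance; with balanced counts such as `mcnemar_exact(5, 5)` the p-value is 1.0.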

  9. Hydrogeologic investigations sampling plan: Revision 0

    International Nuclear Information System (INIS)

    1988-11-01

    The goal of this sampling plan is to identify and develop specific plans for those investigative actions necessary to: (1) characterize the hydrologic regime; (2) define the extent and impact of contamination; and (3) predict future contaminant migration for the Weldon Spring Site (WSS) and vicinity. The plan is part of the Weldon Spring Site Remedial Action Project (WSSRAP) sponsored by the US Department of Energy (DOE) and has been developed in accordance with US EPA Remedial Investigation (RI) and Data Quality Objective (DQO) guidelines. The plan consists of a sequence of activities including the evaluation of data, development of a conceptual model, identification of data uses and needs, and the design and implementation of a data collection program. Data will be obtained to: (1) confirm the presence or absence of contaminants; (2) define contaminant sources and modes of transport; (3) delineate extent of contaminant migration and predict future migration; and (4) provide information to support the evaluation and selection of remedial actions. 81 refs., 62 figs., 26 tabs

  10. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
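    The parametric simulation of power described above can be sketched as follows. The fixed coefficient of variation and the one-sided normal-approximation test are simplifying assumptions for illustration, not the paper's fitted mean-variance model or bootstrap procedure.

    ```python
    import random

    def power_above_threshold(threshold, true_mean, n_fish,
                              cv=0.5, n_sim=2000, seed=1):
        """Monte Carlo power: probability that a sample of n_fish
        gamma-distributed Se concentrations with the given true mean is
        declared above `threshold` by a one-sided test at alpha = 0.05."""
        rng = random.Random(seed)
        shape = 1.0 / cv ** 2        # gamma parameterized by mean and CV
        scale = true_mean / shape    # so that shape * scale == true_mean
        z = 1.645                    # one-sided critical value, alpha = 0.05
        hits = 0
        for _ in range(n_sim):
            xs = [rng.gammavariate(shape, scale) for _ in range(n_fish)]
            m = sum(xs) / n_fish
            var = sum((x - m) ** 2 for x in xs) / (n_fish - 1)
            se = (var / n_fish) ** 0.5
            # Reject H0 (mean <= threshold) when the statistic exceeds z.
            if se > 0 and (m - threshold) / se > z:
                hits += 1
        return hits / n_sim
    ```

    As in the study, power grows with the size of the increase above the threshold and with the number of fish sampled.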

  11. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100%. Given that most previous
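    The non-robust method-of-moments (Matheron) estimator at the center of this comparison can be sketched in a few lines; the 2-D coordinates, values, and lag bins are illustrative choices, not the study's data.

    ```python
    from itertools import combinations
    from math import hypot

    def matheron_variogram(coords, values, bin_edges):
        """Method-of-moments (Matheron) empirical variogram:
        gamma(h) = (1 / (2 * N(h))) * sum of (z_i - z_j)**2 over all
        point pairs whose separation distance falls in the lag bin at h.
        Returns one estimate per bin, or None for empty bins."""
        sums = [0.0] * (len(bin_edges) - 1)
        counts = [0] * (len(bin_edges) - 1)
        for (p, zp), (q, zq) in combinations(zip(coords, values), 2):
            h = hypot(p[0] - q[0], p[1] - q[1])
            for b in range(len(bin_edges) - 1):
                if bin_edges[b] <= h < bin_edges[b + 1]:
                    sums[b] += (zp - zq) ** 2
                    counts[b] += 1
                    break
        return [s / (2 * c) if c else None for s, c in zip(sums, counts)]
    ```

    For three collinear points at x = 0, 1, 2 with values 0, 1, 2 and lag bins [0, 1.5) and [1.5, 3), this returns [0.5, 2.0]. Squared differences make the estimator sensitive to large outliers, which is why the study also considers robust and residual-maximum-likelihood alternatives.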

  12. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and a parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at the estimation of the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for finding the optimum sample size and an understanding of the expected precision level of the mean. Given the superior performance of the BLB relative to the formula, and its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to CI widths of 1.6 and 1 times the mean; and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. For host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
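    For counts this aggregated, a standard textbook sample-size formula sets n so that the standard error of the mean abundance is within a chosen fraction of the mean, using the negative binomial variance m + m²/k. This is shown for illustration under a normal-approximation assumption and is not necessarily the exact formula the authors applied.

    ```python
    from math import ceil

    def nb_sample_size(mean_abund, k, rel_precision, z=1.96):
        """Hosts needed so the standard error of the mean abundance is
        within rel_precision * mean (approx. 95% confidence for z = 1.96),
        for counts following a negative binomial with aggregation
        parameter k, i.e. variance = m + m**2 / k:

            n = (z / D)**2 * (1/m + 1/k)
        """
        return ceil((z / rel_precision) ** 2 * (1.0 / mean_abund + 1.0 / k))
    ```

    With an illustrative mean abundance of 10, k = 0.5, and 50% relative precision, this gives 33 hosts; the low k (strong aggregation) dominates the requirement, which is why highly aggregated parasites need large host samples.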

  13. UMTRA project water sampling and analysis plan, Maybell, Colorado

    International Nuclear Information System (INIS)

    1994-06-01

    This water sampling and analysis plan (WSAP) describes planned water sampling activities and provides the regulatory and technical basis for ground water sampling in 1994 at the US Department of Energy's (DOE) Uranium Mill Tailings Remedial Action (UMTRA) Project site in Maybell, Colorado. The WSAP identifies and justifies sampling locations, analytical parameters, and sampling frequencies at the site. The ground water data will be used for site characterization and risk assessment. The regulatory basis for the ground water and surface water monitoring activities is derived from the EPA regulations in 40 CFR Part 192 (1993) and the proposed EPA standards of 1987 (52 FR 36000). Sampling procedures are guided by the UMTRA Project standard operating procedures (SOP) (JEG, n.d.), the Technical Approach Document (TAD) (DOE, 1989), and the most effective technical approach for the site. This WSAP also includes a summary and the results of water sampling activities from 1989 through 1992 (no sampling was performed in 1993)

  14. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. The performance of the exact method is compared with its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design retains the advantages and rationale of the two-group design while generally requiring smaller sample sizes. 2010 John Wiley & Sons, Ltd.
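    The logic of an exact Poisson sample size calculation can be sketched with a simplified one-group version: test an observed event count against a known background rate, and search for the smallest cohort achieving the target power. The hybrid two-group design with historical controls described above needs a more elaborate calculation; the rates below are illustrative.

    ```python
    from math import exp, factorial

    def pois_sf(c, mu):
        """P(X >= c) for X ~ Poisson(mu)."""
        return 1.0 - sum(exp(-mu) * mu ** i / factorial(i) for i in range(c))

    def exact_sample_size(rate0, rate1, alpha=0.05, power=0.80, n_max=100000):
        """Smallest cohort size n such that an exact one-sided Poisson test
        of H0: rate = rate0 against H1: rate = rate1 (> rate0) achieves the
        requested power."""
        for n in range(1, n_max + 1):
            mu0, mu1 = n * rate0, n * rate1
            # Critical value: smallest c with P(X >= c | mu0) <= alpha.
            c = 0
            while pois_sf(c, mu0) > alpha:
                c += 1
            if pois_sf(c, mu1) >= power:
                return n
        return None
    ```

    For illustrative rates of 1 and 3 events per subject, the function returns 4; as the two rates move closer together, the required n grows, and for genuinely rare events both rates are small and n becomes large.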

  15. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    ...sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  16. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  17. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  18. Planning Considerations for a Mars Sample Receiving Facility: Summary and Interpretation of Three Design Studies

    Science.gov (United States)

    Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  19. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory; Amato, Nancy M.

    2012-01-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5

  20. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  1. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards questions). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness

  2. UMTRA Project water sampling and analysis plan, Gunnison, Colorado: Revision 1

    International Nuclear Information System (INIS)

    1994-11-01

    This water sampling and analysis plan summarizes the results of previous water sampling activities and the plan for future water sampling activities, in accordance with the Guidance Document for Preparing Sampling and Analysis Plans for UMTRA Sites. A buffer zone monitoring plan for the Dos Rios Subdivision is included as an appendix. The buffer zone monitoring plan was developed to ensure continued protection to the public from residual contamination. The buffer zone is beyond the area depicted as contaminated ground water due to former milling operations. Surface remedial action at the Gunnison Uranium Mill Tailings Remedial Action Project site began in 1992; completion is expected in 1995. Ground water and surface water will be sampled semiannually at the Gunnison processing site and disposal site. Results of previous water sampling at the Gunnison processing site indicate that ground water in the alluvium is contaminated by the former uranium processing activities. Background ground water conditions have been established in the uppermost aquifer at the Gunnison disposal site. The monitor well locations provide a representative distribution of sampling points to characterize ground water quality and ground water flow conditions in the vicinity of the sites. The list of analytes has been modified with time to reflect constituents that are related to uranium processing activities and the parameters needed for geochemical evaluation

  3. Sample collection and sample analysis plan in support of the 105-C/190-C concrete and soil sampling activities

    International Nuclear Information System (INIS)

    Marske, S.G.

    1996-07-01

    This sampling and analysis plan describes the sample collection and sample analysis in support of the 105-C water tunnels and 190-C main pumphouse concrete and soil sampling activities. These analytical data will be used to identify the radiological contamination and presence of hazardous materials to support the decontamination and disposal activities

  4. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units would decrease, which must be compensated by an increase in sample size to enhance the confidence in estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
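    The reported sample sizes (299, 59, 29) are reproduced by the success-run relation n = ln(1 − C) / ln(R) at 95% confidence, which can be checked directly:

    ```python
    from math import ceil, log

    def success_run_sample_size(reliability, confidence=0.95):
        """Success-run theorem: number of consecutive passing units needed
        to demonstrate `reliability` with the given confidence, assuming
        zero failures: n = ln(1 - C) / ln(R)."""
        return ceil(log(1 - confidence) / log(reliability))
    ```

    `success_run_sample_size(0.99)`, `success_run_sample_size(0.95)`, and `success_run_sample_size(0.90)` return 299, 59, and 29, matching the high-, medium-, and low-risk values quoted in the abstract.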

  5. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  6. UMTRA water sampling and analysis plan, Green River, Utah

    International Nuclear Information System (INIS)

    Papusch, R.

    1993-12-01

    The purpose of this water sampling and analysis plan (WSAP) is to provide a basis for groundwater and surface water sampling at the Green River Uranium Mill Tailing Remedial Action (UMTRA) Project site. This WSAP identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the monitoring locations

  7. Compatibility Grab Sampling and Analysis Plan for Fiscal Year 2001

    International Nuclear Information System (INIS)

    LAURICELLA, T.L.

    2000-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for grab samples obtained to address waste compatibility

  8. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  9. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    Science.gov (United States)

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate a sampling RSD in the same region as the analysis RSD for a typical quantitative method of analysis for the most common powdered illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit

  10. Nitrate Waste Treatment Sampling and Analysis Plan

    Energy Technology Data Exchange (ETDEWEB)

    Vigil-Holterman, Luciana R. [Los Alamos National Laboratory; Martinez, Patrick Thomas [Los Alamos National Laboratory; Garcia, Terrence Kerwin [Los Alamos National Laboratory

    2017-07-05

    This plan is designed to outline the collection and analysis of nitrate salt-bearing waste samples required by the New Mexico Environment Department - Hazardous Waste Bureau in the Los Alamos National Laboratory (LANL) Hazardous Waste Facility Permit (Permit).

  11. Sampling plans for pest mites on physic nut.

    Science.gov (United States)

    Rosado, Jander F; Sarmento, Renato A; Pedro-Neto, Marçal; Galdino, Tarcísio V S; Marques, Renata V; Erasmo, Eduardo A L; Picanço, Marcelo C

    2014-08-01

    The starting point for generating a pest control decision-making system is a conventional sampling plan. Because the mites Polyphagotarsonemus latus and Tetranychus bastosi are among the most important pests of the physic nut (Jatropha curcas), in the present study, we aimed to establish sampling plans for these mite species on physic nut. Mite densities were monitored in 12 physic nut crops. Based on the obtained results, sampling of P. latus and T. bastosi should be performed by assessing the number of mites per cm(2) in 160 samples using a handheld 20× magnifying glass. The optimal sampling region for T. bastosi is the abaxial surface of the 4th most apical leaf on the branch of the middle third of the canopy. On the abaxial surface, T. bastosi should then be observed on the side parts of the middle portion of the leaf, near its edge. As for P. latus, the optimal sampling region is the abaxial surface of the 4th most apical leaf on the branch of the apical third of the canopy. Polyphagotarsonemus latus should then be assessed on the side parts of the leaf's petiole insertion. Each sampling procedure requires 4 h and costs US$ 7.31.

  12. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  13. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  15. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  16. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Full Text Available Supply chain management, which is concerned with material and information flows between facilities and the final customers, has come to be considered the most popular operations strategy for improving organizational competitiveness. With the advanced development of computer technology, it has become easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both parties' quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal-failure, and postsale-failure costs also affect the optimal sampling plan.

  17. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g., hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to that of the observed data by means of a transfer function. Especially for precipitation, we expect the bias correction performance to depend strongly on sample size, i.e., the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30-year calibration period and a 10-year validation period. In the first step, transfer functions are set up cell-by-cell for each RCM, using the complete 30-year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output, and the mean absolute errors with reference to the observational dataset are calculated. These values are treated as the "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods of the 30-year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, considering only subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g., 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the intensity of the sample-size effect depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
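The quantile-matching idea described in this abstract can be sketched in a few lines. The function below is a generic empirical quantile-mapping transfer (in the spirit of the QUANT/eQM variant), not the authors' implementation; the exponential toy data and the 1.5× wet bias are illustrative assumptions.

```python
import bisect
import random

def empirical_quantile_map(model_cal, obs_cal):
    """Build an empirical quantile-mapping transfer function.

    Sorted calibration-period model values are paired with the
    corresponding sorted observed values; a new model value is
    corrected by linear interpolation between those quantile pairs.
    """
    xs = sorted(model_cal)
    ys = sorted(obs_cal)

    def transfer(v):
        if v <= xs[0]:
            return ys[0]
        if v >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_left(xs, v)
        # interpolate between the neighbouring quantile pairs
        frac = (v - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + frac * (ys[i] - ys[i - 1])

    return transfer

# toy calibration data: "model" precipitation is biased high by 1.5x
random.seed(1)
obs = [random.expovariate(1.0) for _ in range(1000)]
mod = [1.5 * v for v in obs]

correct = empirical_quantile_map(mod, obs)
corrected = [correct(v) for v in mod]
```

With a shorter calibration sample, the tails of `xs` and `ys` are estimated from fewer points, which is one concrete way the sample-size dependence studied above enters the corrected output.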

  18. Validation of the Simbionix PROcedure Rehearsal Studio sizing module : A comparison of software for endovascular aneurysm repair sizing and planning

    NARCIS (Netherlands)

    Velu, Juliëtte F.; Groot Jebbink, Erik; de Vries, Jean-Paul P.M.; Slump, Cornelis H.; Geelkerken, Robert H.

    2017-01-01

    An important determinant of successful endovascular aortic aneurysm repair is proper sizing of the dimensions of the aortic-iliac vessels. The goal of the present study was to determine the concurrent validity, a method for comparison of test scores, for EVAR sizing and planning of the recently

  19. UMTRA Project water sampling and analysis plan, Salt Lake City, Utah. Revision 1

    International Nuclear Information System (INIS)

    1995-06-01

    This water sampling and analysis plan describes planned, routine ground water sampling activities at the US Department of Energy Uranium Mill Tailings Remedial Action Project site in Salt Lake City, Utah. This plan identifies and justifies sampling locations, analytical parameters, detection limits, and sampling frequencies for routine monitoring of ground water, sediments, and surface waters at monitoring stations on the site

  20. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Full Text Available Abstract Background Patient-reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, from 96 to 71 (73) for fatigue, from 57 to 51 (48) for disease activity, from 59 to 44 (45) for self-care, and from 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
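The trade-off quantified empirically above can also be approximated analytically. The sketch below assumes compound symmetry (a constant correlation `rho` between repeated measurements, a value not reported in the abstract) and a normal-approximation two-sample formula; it is illustrative planning arithmetic, not the authors' calculation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, k=1, rho=0.5, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-arm trial when the
    outcome is the mean of k repeated measurements with pairwise
    correlation rho (compound symmetry assumed)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    # variance of the mean of k correlated measures, relative to k=1
    design_effect = (1 + (k - 1) * rho) / k
    return ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2 * design_effect)

# with rho = 0.5, five measures cut the required n by about 40%
n1 = n_per_group(delta=0.5, sd=1.0, k=1)   # single measurement
n5 = n_per_group(delta=0.5, sd=1.0, k=5)   # mean of five measurements
```

Under these assumed parameters the reduction is somewhat larger than the 27% average reported above; a higher between-measure correlation in the trial data would produce a smaller gain, which is consistent with the diminishing returns from two to five measures.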

  1. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = 0.5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described.
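A minimal version of such a Monte Carlo experiment can be sketched as follows. This sketch uses a binormal observer and the nonparametric (Mann-Whitney) area rather than the authors' 6-point rating-scale fit, so it illustrates how the variability of the AUC estimate grows as sample size shrinks rather than reproducing the reported bias; the separation d' = 1 and the run counts are arbitrary choices.

```python
import random
from statistics import mean, stdev

def empirical_auc(signal, noise):
    """Nonparametric (Mann-Whitney) estimate of the area under the ROC curve."""
    wins = sum(1.0 if s > n else 0.5 if s == n else 0.0
               for s in signal for n in noise)
    return wins / (len(signal) * len(noise))

def simulate_aucs(sample_size, d_prime=1.0, runs=500, seed=7):
    """AUC estimates over repeated simulated experiments, each with
    `sample_size` signal-present and `sample_size` signal-absent images."""
    rng = random.Random(seed)
    out = []
    for _ in range(runs):
        sn = [rng.gauss(d_prime, 1.0) for _ in range(sample_size)]
        n = [rng.gauss(0.0, 1.0) for _ in range(sample_size)]
        out.append(empirical_auc(sn, n))
    return out

small = simulate_aucs(15)   # smallest sample size in the study above
large = simulate_aucs(50)
# the spread of the AUC estimate shrinks as the sample size grows
```

The per-run spread at n = 15 versus n = 50 gives a feel for why a bias of "10% of the AUC standard deviation" is easily masked in a single small-sample study.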

  2. Tank 241-BY-105 rotary core sampling and analysis plan

    International Nuclear Information System (INIS)

    Sasaki, L.M.

    1995-01-01

    This Sampling and Analysis Plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for two rotary-mode core samples from tank 241-BY-105 (BY-105)

  3. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for ¹³⁷Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  4. 105-DR Large Sodium Fire Facility decontamination, sampling, and analysis plan

    International Nuclear Information System (INIS)

    Knaus, Z.C.

    1995-01-01

    This is the decontamination, sampling, and analysis plan for the closure activities at the 105-DR Large Sodium Fire Facility at Hanford Reservation. This document supports the 105-DR Large Sodium Fire Facility Closure Plan, DOE-RL-90-25. The 105-DR LSFF, which operated from about 1972 to 1986, was a research laboratory that occupied the former ventilation supply room on the southwest side of the 105-DR Reactor facility in the 100-D Area of the Hanford Site. The LSFF was established to investigate fire fighting and safety associated with alkali metal fires in the liquid metal fast breeder reactor facilities. The decontamination, sampling, and analysis plan identifies the decontamination procedures, sampling locations, any special handling requirements, quality control samples, required chemical analysis, and data validation needed to meet the requirements of the 105-DR Large Sodium Fire Facility Closure Plan in compliance with the Resource Conservation and Recovery Act

  5. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    KAUST Repository

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).

  6. UMTRA Project water sampling and analysis plan, Grand Junction, Colorado. Revision 1, Version 6

    International Nuclear Information System (INIS)

    1995-09-01

    This water sampling and analysis plan describes the planned, routine ground water sampling activities at the Grand Junction US DOE Uranium Mill Tailings Remedial Action (UMTRA) Project site (GRJ-01) in Grand Junction, Colorado, and at the Cheney Disposal Site (GRJ-03) near Grand Junction. The plan identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequencies for the routine monitoring stations at the sites. The regulatory basis is provided by the US EPA regulations in 40 CFR Part 192 (1994) and the EPA ground water quality standards of 1995 (60 FR 2854). This plan summarizes the results of past water sampling activities, details the water sampling activities planned for the next 2 years, and projects sampling activities for the next 5 years

  7. Sampling and Analysis Plan for PUREX canyon vessel flushing

    International Nuclear Information System (INIS)

    Villalobos, C.N.

    1995-01-01

    A sampling and analysis plan is necessary to provide direction for the sampling and analytical activities determined by the data quality objectives. This document defines the sampling and analysis necessary to support the deactivation of the Plutonium-Uranium Extraction (PUREX) facility vessels that are regulated pursuant to Washington Administrative Code 173-303

  8. UMTRA water sampling and analysis plan, Tuba City, Arizona

    International Nuclear Information System (INIS)

    1993-09-01

    The purpose of this document is to provide background, guidance, and justification for fiscal year (FY) 1994 water sampling activities for the uranium mil tailings site at Tuba City, Arizona. This sampling and analysis plan will form the basis for groundwater sampling and analysis work orders to be implemented in FY94

  9. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be while still producing an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial.

  10. Visual Sample Plan (VSP) Software: Designs and Data Analyses for Sampling Contaminated Buildings

    International Nuclear Information System (INIS)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Nuffer, Lisa L.; Hassig, Nancy L.

    2005-01-01

    A new module of the Visual Sample Plan (VSP) software has been developed to provide sampling designs and data analyses for potentially contaminated buildings. An important application is assessing levels of contamination in buildings after a terrorist attack. This new module, funded by DHS through the Combating Terrorism Technology Support Office, Technical Support Working Group, was developed to provide a tailored, user-friendly and visually-oriented buildings module within the existing VSP software toolkit, the latest version of which can be downloaded from http://dqo.pnl.gov/vsp. In case of, or when planning against, a chemical, biological, or radionuclide release within a building, the VSP module can be used to quickly and easily develop and visualize technically defensible sampling schemes for walls, floors, ceilings, and other surfaces to statistically determine if contamination is present, its magnitude and extent throughout the building and if decontamination has been effective. This paper demonstrates the features of this new VSP buildings module, which include: the ability to import building floor plans or to easily draw, manipulate, and view rooms in several ways; being able to insert doors, windows and annotations into a room; 3-D graphic room views with surfaces labeled and floor plans that show building zones that have separate air handling units. The paper will also discuss the statistical design and data analysis options available in the buildings module. Design objectives supported include comparing an average to a threshold when the data distribution is normal or unknown, and comparing measurements to a threshold to detect hotspots or to ensure most of the area is uncontaminated when the data distribution is normal or unknown.

  11. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) is the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, comparisons between the estimated distribution function for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size is increased from 5 km to 100 km.
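The Kolmogorov–Smirnov comparison used above is straightforward to sketch. Below is a generic one-sample KS statistic against a known reference CDF; the normal reference distribution and the sample sizes are illustrative assumptions, not the study's perimeter-based samples.

```python
import random
from statistics import NormalDist

def ks_statistic(sample, population_cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest absolute
    gap between the sample's empirical CDF and a reference CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = population_cdf(x)
        # the empirical CDF jumps from i/n to (i+1)/n at x
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

rng = random.Random(42)
pop = NormalDist(mu=0.0, sigma=1.0)
small = [rng.gauss(0, 1) for _ in range(50)]
large = [rng.gauss(0, 1) for _ in range(5000)]

d_small = ks_statistic(small, pop.cdf)
d_large = ks_statistic(large, pop.cdf)
```

As in the study's finding, the KS distance of the estimated distribution from the population distribution shrinks markedly as the sample grows.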

  12. Comparison and Field Validation of Binomial Sampling Plans for Oligonychus perseae (Acari: Tetranychidae) on Hass Avocado in Southern California.

    Science.gov (United States)

    Lara, Jesus R; Hoddle, Mark S

    2015-08-01

    Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making on O. perseae in California. An initial set of sequential binomial sampling models was developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two leaf-infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite-count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for the sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model, with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America.
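The operating-characteristic calculations behind a fixed-size binomial plan follow directly from the binomial distribution. In the sketch below, the 30-leaf sampling cost comes from the abstract, but the classification count (treat if 12 or more of 30 sampled leaves are scored infested) is a hypothetical plan parameter, not the validated model's value.

```python
from math import comb

def oc_curve(n, c, p):
    """Operating characteristic of a fixed-size binomial plan: the
    probability that fewer than c of n sampled leaves are scored
    infested when the true infested proportion is p, i.e. the chance
    the block is classified as BELOW the action threshold."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c))

# hypothetical plan: sample 30 leaves, treat if 12 or more are infested
below_light = oc_curve(30, 12, 0.20)   # light infestation: almost always "below"
below_heavy = oc_curve(30, 12, 0.60)   # heavy infestation: almost never "below"
```

Evaluating `oc_curve` over a grid of `p` values traces the OC curve used (together with average sample number) to compare candidate plans; a mean-proportion model such as Taylor's power law is what links `p` back to mean mite density per leaf.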

  13. Nonradioactive Dangerous Waste Landfill sampling and analysis plan and data quality objectives process summary report

    International Nuclear Information System (INIS)

    Smith, R.C.

    1997-08-01

    This sampling and analysis plan defines the sampling and analytical activities and associated procedures that will be used to support the Nonradioactive Dangerous Waste Landfill soil-gas investigation. This SAP consists of three sections: this introduction, the field sampling plan, and the quality assurance project plan. The field sampling plan defines the sampling and analytical methodologies to be performed

  14. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.
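The second approach (controlling confidence-interval width) reduces, in the simplest fully validated case, to a textbook Wald-interval calculation. The sketch below ignores misclassification and partial validation, which are precisely what the authors' procedures account for, so it serves only as a baseline; the prevalence guess and target width are illustrative.

```python
from math import ceil
from statistics import NormalDist

def n_for_ci_width(p_guess, width, conf=0.95):
    """Wald-interval sample size so that a conf-level confidence
    interval for a prevalence near p_guess has total width at most
    `width` (fully validated, error-free classification assumed)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # CI half-width is z * sqrt(p(1-p)/n); solve for n with half-width = width/2
    return ceil(4 * z**2 * p_guess * (1 - p_guess) / width**2)

n = n_for_ci_width(0.10, 0.04)   # prevalence near 10%, CI no wider than +/-2%
```

Because `p_guess * (1 - p_guess)` is maximized at 0.5, planning with `p_guess=0.5` gives a conservative sample size when the prevalence is unknown.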

  15. A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes

    Science.gov (United States)

    Bundy, Brian; Krischer, Jeffrey P.

    2016-01-01

    The area under the curve (AUC) of C-peptide following a 2-hour mixed-meal tolerance test, measured from baseline to 12 months after enrollment in 481 individuals from 5 prior TrialNet studies of recent-onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed-vs.-expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
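The planning implication can be sketched with a standard normal-approximation formula: covariate adjustment that explains a fraction R² of the outcome variance shrinks the required sample size by that same fraction. The effect size, SD, power, and R² below are illustrative assumptions, not the paper's estimates; a roughly 50% reduction corresponds to R² ≈ 0.5.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, r_squared=0.0, alpha=0.05, power=0.90):
    """Per-arm sample size for a two-arm comparison of means, with
    covariate adjustment (e.g. ANCOVA on age at diagnosis and baseline
    C-peptide) shrinking the residual variance by (1 - R^2)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2 * (1 - r_squared))

n_unadjusted = n_per_arm(delta=0.5, sd=1.0)
n_adjusted = n_per_arm(delta=0.5, sd=1.0, r_squared=0.5)  # ~half the size
```

This is the mechanism by which stronger baseline predictors translate directly into smaller trials.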

  16. Background Information for the Nevada National Security Site Integrated Sampling Plan, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Farnham, Irene; Marutzky, Sam

    2014-12-01

    This document describes the process followed to develop the Nevada National Security Site (NNSS) Integrated Sampling Plan (referred to herein as the Plan). It provides the Plan’s purpose and objectives, and briefly describes the Underground Test Area (UGTA) Activity, including the conceptual model and regulatory requirements as they pertain to groundwater sampling. Background information on other NNSS groundwater monitoring programs—the Routine Radiological Environmental Monitoring Plan (RREMP) and Community Environmental Monitoring Program (CEMP)—and their integration with the Plan are presented. Descriptions of the evaluations, comments, and responses of two Sampling Plan topical committees are also included.

  17. 105-N Basin sediment disposition phase-one sampling and analysis plan

    International Nuclear Information System (INIS)

    1997-01-01

    The sampling and analysis plan (SAP) for Phase 1 of the 105-N Basin sediment disposition project defines the sampling and analytical activities that will be performed for the engineering assessment phase (Phase 1) of the project. A separate SAP defines the sampling and analytical activities that will be performed for the characterization phase (Phase 2) of the 105-N sediment disposition project. The Phase-1 SAP comprises this introduction (Section 1.0), the field sampling plan (FSP) (Section 2.0), and the quality assurance project plan (QAPjP) (Section 3.0). The FSP defines the sampling and analytical methodologies to be performed. The QAPjP provides information on the quality assurance/quality control (QA/QC) parameters related to the sampling and analytical methodologies. This SAP defines the strategy and the methods that will be used to sample and analyze the sediment on the floor of the 105-N Basin. The resulting data will be used to develop and evaluate engineering designs for collecting and removing sediment from the basin

  18. UMTRA Project water sampling and analysis plan, Durango, Colorado. Revision 1

    International Nuclear Information System (INIS)

    1995-09-01

    Planned, routine ground water sampling activities at the US Department of Energy (DOE) Uranium Mill Tailings Remedial Action (UMTRA) Project site in Durango, Colorado, are described in this water sampling and analysis plan. The plan identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the routine monitoring stations at the site. The ground water data are used to characterize the site ground water compliance strategies and to monitor contaminants of potential concern identified in the baseline risk assessment (DOE, 1995a). Regulatory basis for routine ground water monitoring at UMTRA Project sites is derived from the US EPA regulations in 40 CFR Part 192 (1994) and EPA standards of 1995 (60 FR 2854). Sampling procedures are guided by the UMTRA Project standard operating procedures (SOP) (JEG, n.d.), the Technical Approach Document (TAD) (DOE, 1989), and the most effective technical approach for the site

  19. Hanford Sampling Quality Management Plan (HSQMP)

    International Nuclear Information System (INIS)

    Hyatt, J.E.

    1995-01-01

    This document provides a management tool for evaluating and designing the appropriate elements of a field sampling program. This document provides discussion of the elements of a program and is to be used as a guidance document during the preparation of project and/or function specific documentation. This document does not specify how a sampling program shall be organized. The HSQMP is to be used as a companion document to the Hanford Analytical Services Quality Assurance Plan (HASQAP) DOE/RL-94-55. The generation of this document was enhanced by conducting baseline evaluations of current sampling organizations. Valuable input was received from members of field and Quality Assurance organizations. The HSQMP is expected to be a living document. Revisions will be made as regulations and/or Hanford Site conditions warrant changes in the best management practices. Appendices include a summary of the sampling and analysis work flow process, a user's guide to the Data Quality Objective process, and a self-assessment checklist

  20. Sampling and Analysis Plan for the 233-S Plutonium Concentration Facility

    International Nuclear Information System (INIS)

    Mihalic, M.A.

    1998-02-01

    This Sampling and Analysis Plan (SAP) provides the information and instructions to be used for sampling and analysis activities in the 233-S Plutonium Concentration Facility. The information and instructions herein are separated into three parts and address the Data Quality Objective (DQO) Summary Report, Quality Assurance Project Plan (QAP), and SAP

  1. Tank 241-AP-104 Grab Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    TEMPLETON, A.M.

    2000-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AP-104. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AP-104 required to provide sample material to the Waste Treatment Contractor. Grab samples will be obtained from riser 001 to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives and ICD-23. The 222-S Laboratory will receive samples; composite the samples; perform chemical analyses on composite samples; and provide samples to the Waste Treatment Contractor and the Process Chemistry Laboratory. The Process Chemistry Laboratory at the 222-S Laboratory Complex will perform process tests to evaluate the behavior of the 241-AP-104 waste undergoing the retrieval and treatment scenarios defined in the applicable DQOs. The Waste Treatment Contractor will perform process verification and waste form qualification tests. Requirements for analyses of samples originating in the L and H DQO process tests will be documented in the corresponding test plan (Person 2000) and are not within the scope of this SAP. This report provides the general methodology and procedures to be used in the preparation, retrieval, transport, analysis, and reporting of results from grab samples retrieved from tank 241-AP-104

  2. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)

  3. Evaluation of sampling schemes for in-service inspection of steam generator tubing

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1990-03-01

    This report is a follow-on of work initially sponsored by the US Nuclear Regulatory Commission (Bowen et al. 1989). The work presented here is funded by EPRI and is jointly sponsored by the Electric Power Research Institute (EPRI) and the US Nuclear Regulatory Commission (NRC). The goal of this research was to evaluate fourteen sampling schemes or plans. The main criterion used for evaluating plan performance was the effectiveness for sampling, detecting and plugging defective tubes. The performance criterion was evaluated across several choices of distributions of degraded/defective tubes, probability of detection (POD) curves and eddy-current sizing models. Conclusions from this study are dependent upon the tube defect distributions, sample size, and expansion rules considered. As degraded/defective tubes form ''clusters'' (i.e., maps 6A, 8A and 13A), the smaller sample sizes provide a capability of detecting and sizing defective tubes that approaches 100% inspection. When there is little or no clustering (i.e., maps 1A, 20 and 21), sample efficiency is approximately equal to the initial sample size taken. There is an indication (though not statistically significant) that the systematic sampling plans are better than the random sampling plans for equivalent initial sample size. There was no indication of an effect due to modifying the threshold value for the second stage expansion. The lack of an indication is likely due to the specific tube flaw sizes considered for the six tube maps. 1 ref., 11 figs., 19 tabs

  4. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
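
    The abstract above concerns a priori sample size for a two-sample t test. As a rough illustration (this is not the paper's method, which addresses the extra uncertainty introduced by using a pilot variance estimate), the standard normal-approximation calculation can be sketched as follows; the numeric inputs are hypothetical:

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sample test of means, normal approximation.
    (The exact t-based answer, which the paper considers, is slightly larger.)"""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = z.inv_cdf(power)           # power requirement
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# e.g. detect a mean difference of 5 with common SD 10 at 80% power
print(two_sample_n(5, 10))  # 63 per group
```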

  5. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
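
    The finding that larger samples contain more species is the familiar species-accumulation effect. A minimal simulation sketch (with hypothetical community abundances, not the authors' Sphagnum data) illustrates why richness keeps rising with sample size when the community has a tail of rare species:

```python
import random

def expected_richness(n, abundances, reps=2000, seed=0):
    """Mean number of species seen in a random sample of n individuals
    drawn (with replacement) from a community with the given abundances."""
    rng = random.Random(seed)
    pool = [sp for sp, count in enumerate(abundances) for _ in range(count)]
    total = 0
    for _ in range(reps):
        total += len({pool[rng.randrange(len(pool))] for _ in range(n)})
    return total / reps

# hypothetical community: a few common species and a tail of rare ones
community = [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
print([round(expected_richness(n, community), 1) for n in (25, 100, 400)])
```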

  6. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and public pay more attention to post-marketing research on Chinese medicine, a number of traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In the design of a post-marketing evaluation, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the evaluation, but also assures that the intended trial will have the desired power to correctly detect a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists engaged in Chinese medicine research.

  7. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  8. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
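
    The Jensen's Inequality argument in this abstract can be made concrete with a two-stage Leslie matrix, whose dominant eigenvalue lambda = sqrt(fertility x survival) is concave in the survival rate. The sketch below (hypothetical vital rates, not the authors' data or code) computes the exact downward bias of the estimated lambda as a function of the number of individuals used to estimate survival:

```python
import math
from math import comb

def lambda_hat(fert, surv):
    # dominant eigenvalue of the two-stage Leslie matrix [[0, fert], [surv, 0]]
    return math.sqrt(fert * surv)

def expected_lambda_hat(n, surv=0.5, fert=2.2):
    """Exact E[lambda_hat] when survival is estimated from n individuals
    (binomial sampling). Because sqrt is concave, Jensen's Inequality puts
    this below the true lambda; the bias shrinks as n grows."""
    return sum(comb(n, k) * surv**k * (1 - surv)**(n - k)
               * lambda_hat(fert, k / n) for k in range(n + 1))

true_lam = lambda_hat(2.2, 0.5)
# bias (true lambda minus expected estimate) at small vs large sample sizes
print(true_lam - expected_lambda_hat(10), true_lam - expected_lambda_hat(200))
```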

  9. Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR

    DEFF Research Database (Denmark)

    Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio

    2014-01-01

    Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture based reference method ISO 6579:2004. In the current study we have compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella contaminated egg-pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10(0)-10(1), 10(1)-10(2), 10(2)-10(3) CFU/eggshell). Two hundred and seventy samples corresponding to 15 replicates per pool size and inoculum level were tested. At the lowest contamination level real-time PCR detected Salmonella in 40...

  10. Energy-Aware Path Planning for UAS Persistent Sampling and Surveillance

    Science.gov (United States)

    Shaw-Cortez, Wenceslao

    The focus of this work is to develop an energy-aware path planning algorithm that maximizes UAS endurance, while performing sampling and surveillance missions in a known, stationary wind environment. The energy-aware aspect is specifically tailored to extract energy from the wind to reduce thrust use, thereby increasing aircraft endurance. Wind energy extraction is performed by static soaring and dynamic soaring. Static soaring involves using upward wind currents to increase altitude and potential energy. Dynamic soaring involves taking advantage of wind gradients to exchange potential and kinetic energy. The path planning algorithm developed in this work uses optimization to combine these soaring trajectories with the overarching sampling and surveillance mission. The path planning algorithm uses a simplified aircraft model to tractably optimize soaring trajectories. This aircraft model is presented along with the derivation of the equations of motion. A nonlinear program is used to create the soaring trajectories based on a given optimization problem. This optimization problem is defined using a heuristic decision tree, which defines appropriate problems given a sampling and surveillance mission and a wind model. Simulations are performed to assess the path planning algorithm. The results are used to identify properties of soaring trajectories as well as to determine what wind conditions support minimal thrust soaring. Additional results show how the path planning algorithm can be tuned between maximizing aircraft endurance and performing the sampling and surveillance mission. A means of trajectory stitching is demonstrated to show how the periodic soaring segments can be combined to provide a full solution to an infinite/long horizon problem.

  11. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  12. Test plan for core sampling drill bit temperature monitor

    International Nuclear Information System (INIS)

    Francis, P.M.

    1994-01-01

    At WHC, one of the functions of the Tank Waste Remediation System division is sampling waste tanks to characterize their contents. The push-mode core sampling truck is currently used to take samples of liquid and sludge. Sampling of tanks containing hard salt cake is to be performed with the rotary-mode core sampling system, consisting of the core sample truck, mobile exhauster unit, and ancillary subsystems. When drilling through the salt cake material, friction and heat can be generated in the drill bit. Based upon tank safety reviews, it has been determined that the drill bit temperature must not exceed 180 C, due to the potential reactivity of tank contents at this temperature. Consequently, a drill bit temperature limit of 150 C was established for operation of the core sample truck to provide an adequate margin of safety; the margin is large because of unpredictable factors such as localized heating. The most desirable safeguard against exceeding this threshold is bit temperature monitoring. This document describes the recommended plan for testing the prototype of a drill bit temperature monitor developed for core sampling by Sandia National Labs. The device will be tested at their facilities. This test plan documents the tests that Westinghouse Hanford Company considers necessary for effective testing of the system

  13. Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.

    Science.gov (United States)

    Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S

    2004-01-01

    StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
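
    Operating characteristic curves of the kind described can be sketched directly from the Normal-distribution finding the authors report. The following is an illustrative calculation with a hypothetical acceptance limit and testing coefficient of variation, not the paper's fitted variance functions:

```python
from statistics import NormalDist

def oc_curve_point(true_conc, accept_limit, cv, n):
    """P(lot accepted) = P(mean of n test results < accept_limit), assuming
    normally distributed test results whose SD is cv * true concentration."""
    sd_mean = cv * true_conc / n ** 0.5
    return NormalDist(true_conc, sd_mean).cdf(accept_limit)

# a larger sample size steepens the OC curve, lowering both the seller's
# risk (rejecting a good lot) and the buyer's risk (accepting a bad one)
for n in (1, 4):
    print([round(oc_curve_point(c, 1.0, 0.4, n), 3) for c in (0.6, 1.0, 1.4)])
```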

  14. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
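
    The beta-binomial idea can be sketched as an acceptance-probability calculation. This is an illustrative version using a mean/intra-cluster-correlation parameterization and hypothetical numbers; the paper's actual design procedure and code differ in detail:

```python
from math import comb, exp, lgamma

def lbeta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, p, rho):
    """Beta-binomial pmf parameterized by mean p and intra-cluster
    correlation rho (rho -> 0 recovers the plain binomial)."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return comb(n, k) * exp(lbeta(k + a, n - k + b) - lbeta(a, b))

def accept_prob(n, d, p, rho):
    """P(at most d 'failures' among n sampled units): the chance the
    lot/program is classified acceptable under decision rule (n, d)."""
    return sum(betabinom_pmf(k, n, p, rho) for k in range(d + 1))

# clustering inflates the tails, so a rule sized for simple random
# sampling misclassifies more often when rho > 0
print(accept_prob(19, 3, 0.30, 1e-6), accept_prob(19, 3, 0.30, 0.2))
```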

  15. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
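
    Point (2) above, the variation of effect size estimates with sample size, is easy to reproduce with generic resampling. The sketch below uses a synthetic correlated "population" (hypothetical parameters, not the authors' stroke data set) and measures how widely the estimated correlation scatters at two sample sizes:

```python
import math
import random

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def effect_spread(n, true_r=0.2, reps=1000, pop=10000, seed=0):
    """SD of the sample correlation across resamples of size n from a
    synthetic 'population' built to have correlation ~true_r."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(pop)]
    ys = [true_r * x + math.sqrt(1 - true_r ** 2) * rng.gauss(0, 1) for x in xs]
    ests = []
    for _ in range(reps):
        idx = [rng.randrange(pop) for _ in range(n)]
        ests.append(pearson([xs[i] for i in idx], [ys[i] for i in idx]))
    mean = sum(ests) / reps
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / (reps - 1))

# small samples scatter the effect size estimate far more widely
print(effect_spread(30), effect_spread(120))
```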

  16. Genesis Contingency Planning and Mishap Recovery: The Sample Curation View

    Science.gov (United States)

    Stansbery, E. K.; Allton, J. H.; Allen, C. C.; McNamara, K. M.; Calaway, M.; Rodriques, M. C.

    2007-01-01

    Planning for sample preservation and curation was part of mission design from the beginning. One of the scientific objectives for Genesis included collecting samples of three regimes of the solar wind in addition to collecting bulk solar wind during the mission. Collectors were fabricated in different thicknesses for each regime of the solar wind and attached to separate frames exposed to the solar wind during specific periods of solar activity associated with each regime. The original plan to determine the solar regime sampled for specific collectors was to identify to which frame the collector was attached. However, the collectors were dislodged during the hard landing making identification by frame attachment impossible. Because regimes were also identified by thickness of the collector, the regime sampled is identified by measuring fragment thickness. A variety of collector materials and thin films applied to substrates were selected and qualified for flight. This diversity provided elemental measurement in more than one material, mitigating effects of diffusion rates and/or radiation damage. It also mitigated against different material and substrate strengths resulting in differing effects of the hard landing. For example, silicon crystal substrates broke into smaller fragments than sapphire-based substrates and diamond surfaces were more resilient to flying debris damage than gold. The primary responsibility of the curation team for recovery was process documentation. Contingency planning for the recovery phase expanded this responsibility to include not only equipment to document, but also gather, contain and identify samples from the landing area and the recovered spacecraft. The team developed contingency plans for various scenarios as part of mission planning that included topographic maps to aid in site recovery and identification of different modes of transport and purge capability depending on damage. A clean tent, set-up at Utah Test & Training Range

  17. DNA-based hair sampling to identify road crossings and estimate population size of black bears in Great Dismal Swamp National Wildlife Refuge, Virginia

    OpenAIRE

    Wills, Johnny

    2008-01-01

    The planned widening of U.S. Highway 17 along the east boundary of Great Dismal Swamp National Wildlife Refuge (GDSNWR) and a lack of knowledge about the refuge's bear population created the need to identify potential sites for wildlife crossings and estimate the size of the refuge's bear population. I collected black bear hair in order to collect DNA samples to estimate population size, density, and sex ratio, and determine road crossing locations for black bears (Ursus americanus) in G...

  18. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
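
    A rate ratio with a 95% CI, as reported above (1.45, 95% CI 1.07 to 1.97), is conventionally obtained with a Wald interval on the log scale. A sketch with hypothetical event counts (the study's raw counts are not given here):

```python
import math
from statistics import NormalDist

def rate_ratio_ci(events_a, time_a, events_b, time_b, level=0.95):
    """Poisson rate ratio with a Wald confidence interval on the log scale."""
    rr = (events_a / time_a) / (events_b / time_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    z = NormalDist().inv_cdf((1 + level) / 2)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# hypothetical counts over equal patient-day totals
rr, lo, hi = rate_ratio_ci(118, 3000, 82, 3000)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```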

  19. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered as indicators of the importance of an article and post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  20. Tank 241-U-105 push mode core sampling and analysis plan

    International Nuclear Information System (INIS)

    Bell, K.E.

    1995-01-01

    This Sampling and Analysis Plan (SAP) will identify characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for vapor samples and two push mode core samples from tank 241-U-105 (U-105)

  1. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
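
    The bootstrap-based validation idea can be sketched generically: compute the AUC on resamples of a candidate validation set and see how precisely it is pinned down. This is an illustrative simplification on synthetic data (AUC only), not the authors' algorithm, which also tracks calibration via the estimated calibration index:

```python
import random

def auc(scores, labels):
    """AUC computed as the Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_sd(scores, labels, reps=200, seed=0):
    """SD of the AUC over bootstrap resamples of the validation set:
    a rough gauge of whether the sample pins discrimination down."""
    rng = random.Random(seed)
    m = len(scores)
    vals = []
    for _ in range(reps):
        idx = [rng.randrange(m) for _ in range(m)]
        bl = [labels[i] for i in idx]
        if 0 < sum(bl) < m:  # resample must contain both classes
            vals.append(auc([scores[i] for i in idx], bl))
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5

rng = random.Random(1)
def fake_data(n_events, n_non):
    labels = [1] * n_events + [0] * n_non
    return [rng.gauss(y, 1) for y in labels], labels

# a larger validation sample yields a tighter (smaller-SD) AUC estimate
small = fake_data(40, 60)
large = fake_data(120, 180)
print(bootstrap_auc_sd(*small), bootstrap_auc_sd(*large))
```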

  2. Sampling and analysis plan for the 100-D Ponds voluntary remediation project

    International Nuclear Information System (INIS)

    1996-08-01

    This Sampling and Analysis Plan (SAP) describes the sampling and analytical activities which will be performed to support closure of the 100-D Ponds Resource Conservation and Recovery Act (RCRA) treatment, storage, and/or disposal (TSD) unit. This SAP includes the Field Sampling Plan (FSP) presented in Section 2.0, and the Quality Assurance Project Plan (QAPjP) described in Section 3.0. The FSP defines the sampling and analytical methodologies to be performed, and the QAPjP provides or includes information on the requirements for precision, accuracy, representativeness, comparability, and completeness of the analytical data. This sampling and analysis plan was developed using the Environmental Protection Agency's Seven-Step Data Quality Objectives (DQO) Guidance (EPA, 1994). The purpose of the DQO meetings was (1) to identify the contaminants of concern and their cleanup levels under the Washington State Model Toxics Control Act (MTCA, WAC-173-340) Method B, and (2) to determine the number and locations of samples necessary to verify that the 100-D Ponds meet the cleanup criteria. The data collected will be used to support RCRA closure of this TSD unit

  3. Spatial resolution of 2D ionization chamber arrays for IMRT dose verification: single-detector size and sampling step width

    International Nuclear Information System (INIS)

    Poppe, Bjoern; Djouguela, Armand; Blechschmidt, Arne; Willborn, Kay; Ruehmann, Antje; Harder, Dietrich

    2007-01-01

    The spatial resolution of 2D detector arrays equipped with ionization chambers or diodes, used for the dose verification of IMRT treatment plans, is limited by the size of the single detector and the centre-to-centre distance between the detectors. Optimization criteria with regard to these parameters have been developed by combining concepts of dosimetry and pattern analysis. The 2D-ARRAY Type 10024 (PTW-Freiburg, Germany), single-chamber cross section 5 × 5 mm², centre-to-centre distance between chambers in each row and column 10 mm, served as an example. Additional frames of given dose distributions can be taken by shifting the whole array parallel or perpendicular to the MLC leaves by, e.g., 5 mm. The size of the single detector is characterized by its lateral response function, a trapezoid with 5 mm top width and 9 mm base width. Therefore, values measured with the 2D array are regarded as sample values from the convolution product of the accelerator generated dose distribution and this lateral response function. Consequently, the dose verification, e.g., by means of the gamma index, is performed by comparing the measured values of the 2D array with the values of the convolution product of the treatment planning system (TPS) calculated dose distribution and the single-detector lateral response function. Sufficiently small misalignments of the measured dose distributions in comparison with the calculated ones can be detected since the lateral response function is symmetric with respect to the centre of the chamber, and the change of dose gradients due to the convolution is sufficiently small. The sampling step width of the 2D array should provide a set of sample values representative of the sampled distribution, which is achieved if the highest spatial frequency contained in this function does not exceed the 'Nyquist frequency', one half of the sampling frequency. Since the convolution products of IMRT-typical dose distributions and the single
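
The convolve-then-sample argument above can be illustrated numerically. The sketch below assumes a 1 mm grid and a flat 20 mm field, both illustrative; only the trapezoid dimensions (5 mm top, 9 mm base) come from the record.

```python
def trapezoid(top=5, base=9):
    # unit-area trapezoidal lateral response on a 1 mm grid
    half = base // 2
    prof = []
    for x in range(-half, half + 1):
        if abs(x) <= top / 2:
            prof.append(1.0)                                # flat top
        else:
            prof.append((base / 2 - abs(x)) / ((base - top) / 2))  # sloping edge
    s = sum(prof)
    return [p / s for p in prof]

def convolve(signal, kernel):
    # plain discrete convolution, zero-padded at the edges
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            if 0 <= i + j - k < len(signal):
                acc += signal[i + j - k] * w
        out.append(acc)
    return out

dose = [1.0 if 20 <= x < 40 else 0.0 for x in range(60)]  # idealized 20 mm field
blurred = convolve(dose, trapezoid())   # what the chambers actually "see"
samples_10mm = blurred[::10]            # native 10 mm chamber pitch
samples_5mm = blurred[::5]              # effective pitch after one 5 mm shift
```

Sampling `blurred` rather than `dose` is the key point: the gamma comparison is made against the TPS dose distribution convolved with the same trapezoid, and halving the pitch by shifting the array doubles the Nyquist frequency.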

  4. Optimal integrated sizing and planning of hubs with midsize/large CHP units considering reliability of supply

    International Nuclear Information System (INIS)

    Moradi, Saeed; Ghaffarpour, Reza; Ranjbar, Ali Mohammad; Mozaffari, Babak

    2017-01-01

    Highlights: • New hub planning formulation is proposed to exploit assets of midsize/large CHPs. • Linearization approaches are proposed for two-variable nonlinear CHP fuel function. • Efficient operation of addressed CHPs & hub devices at contingencies is considered. • Reliability-embedded integrated planning & sizing is formulated as one single MILP. • Noticeable results for costs & reliability-embedded planning due to mid/large CHPs. - Abstract: Use of multi-carrier energy systems and the energy hub concept has recently become a widespread trend worldwide. However, most of the related research focuses on CHP systems with constant electricity/heat ratios and linear operating characteristics. In this paper, integrated energy hub planning and sizing is developed for energy systems with mid-scale and large-scale CHP units, taking their wide operating range into consideration. The proposed formulation aims to make the best use of the beneficial degrees of freedom associated with these units to decrease total costs and increase reliability. High-accuracy piecewise linearization techniques with approximation errors of about 1% are introduced for the nonlinear two-dimensional CHP input-output function, making it possible to successfully integrate the CHP sizing. Efficient operation of the CHP and the hub at contingencies is derived via a new formulation, developed to be incorporated into the planning and sizing problem. Optimal operation, planning, sizing, and contingency operation of hub components are integrated and formulated as a single comprehensive MILP problem. Results on a case study with midsize CHPs reveal a 33% reduction in total costs, and it is demonstrated that the proposed formulation eliminates the need for additional components/capacities to increase reliability of supply.

  5. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...

  6. Spatial Distribution and Sampling Plans With Fixed Level of Precision for Citrus Aphids (Hom., Aphididae) on Two Orange Species.

    Science.gov (United States)

    Kafeshani, Farzaneh Alizadeh; Rajabpour, Ali; Aghajanzadeh, Sirous; Gholamian, Esmaeil; Farkhari, Mohammad

    2018-04-02

    Aphis spiraecola Patch, Aphis gossypii Glover, and Toxoptera aurantii Boyer de Fonscolombe are three important aphid pests of citrus orchards. In this study, spatial distributions of the aphids on two orange species, Satsuma mandarin and Thomson navel, were evaluated using Taylor's power law and Iwao's patchiness regression. In addition, a fixed-precision sequential sampling plan was developed for each species on each host plant using Green's model at precision levels of 0.25 and 0.1. The results revealed that spatial distribution parameters, and therefore the sampling plan, differed significantly with aphid and host plant species. Taylor's power law provided a better fit for the data than Iwao's patchiness regression. Except for T. aurantii on Thomson navel orange, which showed a regular dispersion pattern, the aphids were aggregated on both citrus species. Optimum sample sizes varied from 30 to 2,061 shoots on Satsuma mandarin and from 1 to 1,622 shoots on Thomson navel orange, depending on aphid species and desired precision level. Calculated stop lines ranged from 0.48 to 19 aphids per 24 shoots on Satsuma mandarin and from 0.19 to 80.4 aphids per 24 shoots on Thomson navel orange, according to aphid species and desired precision level. The performance of the sampling plan was validated by resampling analysis using the Resampling for Validation of Sampling Plans (RVSP) software. This sampling program is useful for IPM programs targeting these aphids in citrus orchards.
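
For readers unfamiliar with Green's model, the two quantities the record reports (optimum sample sizes and stop lines) are commonly computed from the fitted Taylor's power law coefficients. A minimal sketch, with hypothetical coefficients a and b rather than the paper's fitted values:

```python
import math

def taylor_sample_size(mean, a, b, D):
    # required n for fixed precision D, given Taylor's power law s^2 = a * m^b:
    #   n = a * m^(b - 2) / D^2
    return a * mean ** (b - 2) / D ** 2

def greens_stop_line(n, a, b, D):
    # Green's (1970) cumulative-count stop line:
    #   ln(T_n) = ln(D^2 / a) / (b - 2) + ((b - 1) / (b - 2)) * ln(n)
    return math.exp(math.log(D ** 2 / a) / (b - 2)
                    + (b - 1) / (b - 2) * math.log(n))

# hypothetical coefficients for an aggregated population (b > 1)
a, b = 2.0, 1.5
print(round(taylor_sample_size(5.0, a, b, 0.25), 1))   # shoots needed at mean 5
print(round(greens_stop_line(24, a, b, 0.25), 1))      # stop count at n = 24
```

Sampling stops once the cumulative aphid count crosses the stop line; the wide sample-size ranges in the abstract follow directly from how strongly n depends on the mean density and on D.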

  7. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic, and respirable. These samplers, based on existing sampling heads (IOM, GSP, and cassettes), use polyurethane foam (PUF) plugs whose porosity provides the size-selective separation of the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  8. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes" involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization.

  9. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating the weight matrices via multinomial sampling.
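
The multinomial-weighting idea translates outside R as well. The sketch below is loop-based rather than matrix-vectorized and uses hypothetical simulated data, but it shows the core trick: each bootstrap replication is a weight vector of multinomial counts, and Pearson's r is computed from weighted sample moments without ever materializing a resampled dataset.

```python
import random
from collections import Counter

def weighted_pearson(x, y, w):
    # Pearson correlation from weighted sample moments (weights sum to n)
    n = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / n
    my = sum(wi * yi for wi, yi in zip(w, y)) / n
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    syy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return sxy / (sxx * syy) ** 0.5

random.seed(7)
n = 50
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]   # correlated toy data

# multinomial formulation: weight each observation by its resample count
reps = []
for _ in range(200):
    counts = Counter(random.randrange(n) for _ in range(n))
    w = [counts.get(i, 0) for i in range(n)]
    reps.append(weighted_pearson(x, y, w))
```

In R or NumPy, the 200 weight vectors would form a single matrix and all replications would come out of a couple of matrix products, which is where the paper's speedup comes from.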

  10. FUZZY ACCEPTANCE SAMPLING AND CHARACTERISTIC CURVES

    Directory of Open Access Journals (Sweden)

    Ebru Turanoğlu

    2012-02-01

    Full Text Available Acceptance sampling is primarily used for the inspection of incoming or outgoing lots. Acceptance sampling refers to the application of specific sampling plans to a designated lot or sequence of lots. The parameters of acceptance sampling plans are sample sizes and acceptance numbers. In some cases, it may not be possible to define acceptance sampling parameters as crisp values. These parameters can be expressed by linguistic variables. The fuzzy set theory can be successfully used to cope with the vagueness in these linguistic expressions for acceptance sampling. In this paper, the main distributions of acceptance sampling plans are handled with fuzzy parameters and their acceptance probability functions are derived. Then the characteristic curves of acceptance sampling are examined under fuzziness. Illustrative examples are given.

  11. Sampling and Analysis Plan for the 221-U Facility

    International Nuclear Information System (INIS)

    Rugg, J.E.

    1998-02-01

    This sampling and analysis plan (SAP) presents the rationale and strategy for the sampling and analysis activities proposed to support the evaluation of alternatives for the final disposition of the 221-U Facility. This SAP describes general sample locations and the minimum number of samples required. It also identifies the specific contaminants of potential concern (COPCs) and the required analyses. This SAP does not define the exact sample locations and equipment to be used in the field because of the unknowns associated with the 221-U Facility.

  12. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...

  13. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, the non-inferiority test, the equivalence test, and the superiority test) for qualitative data under a design of one factor with two levels and a binary response variable. A non-inferiority test verifies that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. An equivalence test verifies that the experimental drug and the control drug have clinically equivalent efficacy. A superiority test verifies that the efficacy of the experimental drug is clinically superior to that of the control drug. Using specific examples, this article introduces the sample size estimation formulas for the three special tests and their realization in SAS in detail.
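
The article's formulas are given in SAS, which is not reproduced here; as an illustration only, one common textbook form of the per-group sample size for a non-inferiority comparison of two proportions can be written as below. The response rates, margin, and one-sided alpha are hypothetical.

```python
from math import ceil
from statistics import NormalDist

def n_noninferiority(p1, p2, delta, alpha=0.025, power=0.8):
    # One common textbook form (per-group n, two proportions, margin delta < 0):
    #   n = (z_{1-alpha} + z_{power})^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2 - delta)^2
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha), z.inv_cdf(power)
    return ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                / (p1 - p2 - delta) ** 2)

# experimental vs control response rates both 0.85, non-inferiority margin -0.10
print(n_noninferiority(0.85, 0.85, -0.10))
```

Swapping the margin convention (delta > 0 with the sign flipped in the denominator) or the hypotheses turns the same skeleton into the equivalence and superiority variants the article covers.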

  14. In situ sampling cart development engineering task plan

    International Nuclear Information System (INIS)

    DeFord, D.K.

    1995-01-01

    This Engineering Task Plan (ETP) supports the development for facility use of the next generation in situ sampling system for characterization of tank vapors. In situ sampling refers to placing sample collection devices (primarily sorbent tubes) directly into the tank headspace, then drawing tank gases through the collection devices to obtain samples. The current in situ sampling system is functional but was not designed to provide the accurate flow measurement required by today's data quality objectives (DQOs) for vapor characterization. The new system will incorporate modern instrumentation to achieve much tighter control. The next generation system will be referred to in this ETP as the New In Situ System (NISS) or New System. The report describes the current sampling system and the modifications that are required for more accuracy

  15. Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions

    International Nuclear Information System (INIS)

    John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.

    2000-01-01

    Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will be exceeded most likely at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection x-ray fluorescence analysis (TXRF) was applied alongside other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built where the particle size classes 10-2.5 μm and 2.5-1.0 μm were directly impacted on TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles, but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so-called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM 10/PM 2.5 cascade impactor was

  16. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.

  17. Sampling Plans for the Thrips Frankliniella schultzei (Thysanoptera: Thripidae) in Three Lettuce Varieties.

    Science.gov (United States)

    Silva, Alisson R; Rodrigues-Silva, Nilson; Pereira, Poliana S; Sarmento, Renato A; Costa, Thiago L; Galdino, Tarcísio V S; Picanço, Marcelo C

    2017-12-05

    The common blossom thrips, Frankliniella schultzei Trybom (Thysanoptera: Thripidae), is an important lettuce pest worldwide. Conventional sampling plans are the first step in implementing decision-making systems into integrated pest management programs. However, this tool is not available for F. schultzei infesting lettuce crops. Thus, the objective of this work was to develop a conventional sampling plan for F. schultzei in lettuce crops. Two sampling techniques (direct counting and leaf beating on a white plastic tray) were compared in crisphead, looseleaf, and Boston lettuce varieties before and during head formation. The frequency distributions of F. schultzei densities in lettuce crops were assessed, and the number of samples required to compose the sampling plan was determined. Leaf beating on a white plastic tray was the best sampling technique. F. schultzei densities obtained with this technique were fitted to the negative binomial distribution with a common aggregation parameter (common K = 0.3143). The developed sampling plan is composed of 91 samples per field and presents low errors in its estimates (up to 20%), fast execution time (up to 47 min), and low cost (up to US $1.67 per sampling area). This sampling plan can be used as a tool for integrated pest management in lettuce crops, assisting with reliable decision making in different lettuce varieties before and during head formation. © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
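
For counts fitted to a negative binomial distribution with a common aggregation parameter, the number of samples needed for a fixed relative precision is commonly computed from a standard enumerative formula. The sketch below uses the common K reported in the abstract but a hypothetical mean density, so it illustrates the calculation rather than reproducing the paper's 91-sample result.

```python
from math import ceil

def nb_sample_size(mean, k, D):
    # enumerative sample size for negative binomial counts at relative precision D:
    #   n = (1/mean + 1/k) / D^2
    return ceil((1 / mean + 1 / k) / D ** 2)

k = 0.3143        # common aggregation parameter from the abstract
print(nb_sample_size(1.0, k, 0.2))   # hypothetical mean of 1 thrips per sample
```

Because K is small (strong aggregation), the 1/k term dominates and the required n stays large even at moderate densities, which is why the plan needs as many as 91 samples per field.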

  18. Group SkSP-R sampling plan for accelerated life tests

    Indian Academy of Sciences (India)

    Muhammad Aslam

    2017-09-15

    Sep 15, 2017 ... SkSP-R sampling; life test; Weibull distribution; producer's risk; ... designed a sampling plan under a time-truncated life test .... adjusted using an acceleration factor. ... where P is the probability of lot acceptance for a single.

  19. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  20. An Internationally Coordinated Science Management Plan for Samples Returned from Mars

    Science.gov (United States)

    Haltigin, T.; Smith, C. L.

    2015-12-01

    Mars Sample Return (MSR) remains a high priority of the planetary exploration community. Such an effort will undoubtedly be too large for any individual agency to conduct itself, and thus will require extensive global cooperation. To help prepare for an eventual MSR campaign, the International Mars Exploration Working Group (IMEWG) chartered the international Mars Architecture for the Return of Samples (iMARS) Phase II working group in 2014, consisting of representatives from 17 countries and agencies. The overarching task of the team was to provide recommendations for progressing towards campaign implementation, including a proposed science management plan. Building upon the iMARS Phase I (2008) outcomes, the Phase II team proposed the development of an International MSR Science Institute as part of the campaign governance, centering its deliberations around four themes: Organization: including an organizational structure for the Institute that outlines roles and responsibilities of key members and describes sample return facility requirements; Management: presenting issues surrounding scientific leadership, defining guidelines and assumptions for Institute membership, and proposing a possible funding model; Operations & Data: outlining a science implementation plan that details the preliminary sample examination flow, sample allocation process, and data policies; and Curation: introducing a sample curation plan that comprises sample tracking and routing procedures, sample sterilization considerations, and long-term archiving recommendations. This work presents a summary of the group's activities, findings, and recommendations, highlighting the role of international coordination in managing the returned samples.

  1. A model-based approach to sample size estimation in recent onset type 1 diabetes.

    Science.gov (United States)

    Bundy, Brian N; Krischer, Jeffrey P

    2016-11-01

    The area under the C-peptide curve following a 2-h mixed meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled from baseline to 12 months after enrolment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
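
The mechanism behind the roughly 50% reduction is that the required sample size scales with the residual variance. A minimal sketch with the standard two-sample normal-approximation formula and hypothetical numbers (the SDs and effect size below are not the TrialNet estimates):

```python
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, power=0.9):
    # standard two-sample formula:
    #   n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    z = NormalDist()
    return 2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2 \
        * sigma ** 2 / delta ** 2

# hypothetical: covariate adjustment shrinks the residual SD from 1.0 to 0.7
print(round(n_per_arm(1.0, 0.5)))   # unadjusted analysis
print(round(n_per_arm(0.7, 0.5)))   # ANCOVA-adjusted analysis
```

Because n is proportional to sigma squared, cutting the residual SD from 1.0 to 0.7 cuts the target sample size to 49% of its unadjusted value, in line with the near-halving the abstract reports.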

  2. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences in aspect and elevation. This raises the question of how well such stability interpretations can support conclusions. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined for a given test, significance level, and power, using the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset has an appropriate sample size. (ii) Smaller subsets were created with similar

  3. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
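
The two loss functions the study compares are easy to state directly. A small sketch (the example residuals and the tube width eps are illustrative, not values from the paper):

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    # Vapnik's epsilon-insensitive loss: errors inside the eps tube cost nothing
    return [max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred)]

def least_modulus_loss(y_true, y_pred):
    # least-modulus (absolute) loss: the eps = 0 special case
    return [abs(t - p) for t, p in zip(y_true, y_pred)]

print(eps_insensitive_loss([1.0, 2.0], [1.05, 2.5]))
print(least_modulus_loss([1.0, 2.0], [1.05, 2.5]))
```

The eps tube is what gives SVR its sparse set of support vectors: training points whose residuals stay inside the tube contribute nothing to the loss, which helps control model capacity when core samples are scarce.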

  4. Sampling plan to support HLW tank 16

    International Nuclear Information System (INIS)

    Rodwell, P.O.; Martin, B.

    1997-01-01

    Plans are to remove the residual waste from the annulus of High-Level Waste Tank 16, located in the H-Area Tank Farm, in 1998. The interior of the tank is virtually clean. In the late 1970s, the waste was removed from the interior of the tank by several campaigns of waste removal with slurry pumps, spray washing, and oxalic acid cleaning. The annulus of the tank at one time held several thousand gallons of waste salt, which had leaked from the tank interior. Some of this salt was removed by adding water to the annulus and circulating, but much of the salt remains in the annulus. In order to confirm the source term used for fate and transport modeling, samples of the tank interior and annulus will be obtained and analyzed. If the results of the analyses indicate that the data used for the initial modeling are bounding, then no changes will be made to the model. However, if the results indicate that the source term is higher than that assumed in the initial modeling, and thus not bounding, additional modeling will be performed. The purpose of this Plan is to outline the approach to sampling the annulus and interior of Tank 16 as a prerequisite to salt removal in the annulus and closure of the entire tank system. The sampling and analysis of this tank system must be robust to reasonably ensure the actual tank residual is within the bounds of analysis error.

  5. Tank 241-AZ-102 Privatization Push Mode Core Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    TEMPLETON, A.M.

    1999-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AZ-102

  6. Tank 241-AZ-102 Privatization Push Mode Core Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    RASMUSSEN, J.H.

    2000-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AZ-102

  7. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating a sample size calculation, as well as on the accuracy of the calculations themselves. We examined the quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and the desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and calculated sample sizes was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in a trial registry; about two-thirds of these (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registry. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
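As a concrete illustration of the quantities the surveyed papers were expected to report (significance level, power, and minimum clinically important effect), a standard two-group calculation can be sketched with the usual normal-approximation formula. The α, power, and effect values below are illustrative choices, not drawn from any reviewed trial.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(alpha: float, power: float, mcid: float, sd: float) -> int:
    """Per-group size for a two-sample comparison of means via the normal
    approximation: n = 2 * ((z_{1-alpha/2} + z_power) * sd / mcid)^2,
    rounded up to the next whole participant."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / mcid) ** 2)

# alpha = 0.05, 80% power, detect a 0.5-SD difference
print(n_per_group(0.05, 0.80, 0.5, 1.0))  # 63 per group
```

Replicating a reported calculation this way is exactly how the review could compare reported against recalculated sample sizes; the exact t-based calculation gives a slightly larger n.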

  8. Abbreviated sampling and analysis plan for planning decontamination and decommissioning at Test Reactor Area (TRA) facilities

    International Nuclear Information System (INIS)

    1994-10-01

    The objective is to sample and analyze for the presence of gamma-emitting isotopes and hazardous constituents within certain areas of the Test Reactor Area (TRA) prior to decontamination and decommissioning (D and D) activities. The TRA is composed of three major reactor facilities and three smaller reactors built in support of programs studying the performance of reactor materials and components under high neutron flux conditions. The Materials Testing Reactor (MTR) and Engineering Test Reactor (ETR) facilities are currently pending D and D. Work consists of pre-D and D sampling of designated TRA (primarily ETR) process areas. This report addresses only a limited subset of the samples that will eventually be required to characterize MTR and ETR and plan their D and D; the sampling addressed in this document is intended to support planned D and D work that is funded at the present time. Biased sampling, based on process knowledge and plant configuration, is to be performed. The multiple process areas that may potentially be sampled will be initially characterized by obtaining data for upstream source areas which, based on facility configuration, would affect downstream, as yet unsampled, process areas. Sampling and analysis will be conducted to determine the levels of gamma-emitting isotopes and hazardous constituents present in designated areas within buildings TRA-612, 642, 643, 644, 645, 647, 648, and 663, and in the soils surrounding Facility TRA-611. These data will be used to plan the D and D and help determine disposition of material by D and D personnel. Both MTR and ETR facilities will eventually be decommissioned by total dismantlement so that the area can be restored to its original condition.

  9. IP Sample Plan #5 | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    A sample Intellectual Property Management Plan in the form of a legal agreement between a University and its collaborators which addresses data sharing, sharing of research tools and resources and intellectual property management.

  10. Test plan for evaluating the performance of the in-tank fluidic sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    The PHMC will provide Low Activity Waste (LAW) tank wastes for final treatment by a privatization contractor from double-shell feed tanks 241-AP-102 and 241-AP-104. Concerns about the inability of the baseline ''grab'' sampling to provide large-volume samples within time constraints have led to the development of a conceptual sampling system that would be deployed in a feed tank riser. This sampling system will provide large-volume, representative samples without the environmental, radiation exposure, and sample volume impacts of the current baseline ''grab'' sampling method. This test plan identifies ''proof-of-principle'' cold tests for the conceptual sampling system using simulant materials. The need for additional testing was identified as a result of completing the tests described in the previous revision of this test plan. Revision 1 outlines tests that will evaluate the system's performance and its ability to provide samples that are representative of a tank's contents within a 95 percent confidence interval, to recover from plugging, and to sample supernatant wastes with over 25 wt% solids content, and will evaluate the impact of sampling at different heights within the feed tank. The test plan also identifies operating parameters that will optimize the performance of the sampling system.

  11. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.

  12. Tank 241-AZ-102 Privatization Push Mode Core Sampling and Analysis Plan; FINAL

    International Nuclear Information System (INIS)

    TEMPLETON, A.M.

    1999-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AZ-102. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AZ-102. Push mode core samples will be obtained from risers 15C and 24A to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives. The 222-S Laboratory will extrude core samples, composite the liquids and solids, perform chemical analyses, and provide subsamples to the Process Chemistry Laboratory. The Process Chemistry Laboratory will prepare test plans and perform process tests to evaluate the behavior of the 241-AZ-102 waste undergoing the retrieval and treatment scenarios defined in the applicable DQOs. Requirements for analyses of samples originating in the process tests will be documented in the corresponding test plan

  13. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  14. Soil Sampling Plan for the transuranic storage area soil overburden and final report: Soil overburden sampling at the RWMC transuranic storage area

    International Nuclear Information System (INIS)

    Stanisich, S.N.

    1994-12-01

    This Soil Sampling Plan (SSP) has been developed to provide detailed procedural guidance for field sampling and chemical and radionuclide analysis of selected areas of soil covering waste stored at the Transuranic Storage Area (TSA) at the Idaho National Engineering Laboratory's (INEL) Radioactive Waste Management Complex (RWMC). The format and content of this SSP represent a complementary hybrid of INEL Waste Management--Environmental Restoration Program and Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) Remedial Investigation/Feasibility Study (RI/FS) sampling guidance documentation. This sampling plan also functions as a Quality Assurance Project Plan (QAPP). The QAPP serves as a controlling mechanism during sampling to ensure that all data collected are valid, reliable, and defensible. This document outlines the organization, objectives, and quality assurance/quality control (QA/QC) activities needed to achieve the desired data quality goals. The QA/QC requirements for this project are outlined in the Data Collection Quality Assurance Plan (DCQAP) for the Buried Waste Program. The DCQAP is a program plan and does not outline the site-specific requirements for the scope of work covered by this SSP.

  15. A novel derivation of a within-batch sampling plan based on a Poisson-gamma model characterising low microbial counts in foods.

    Science.gov (United States)

    Gonzales-Barron, Ursula; Zwietering, Marcel H; Butler, Francis

    2013-02-01

    This study proposes a novel step-wise methodology for deriving a sampling plan by variables for food production systems characterised by relatively low concentrations of the inspected microorganism. After representing the universe of contaminated batches by modelling the between-batch and within-batch variability in microbial counts, a tolerance criterion defining batch acceptability (i.e., up to a tolerance percentage of the food units having microbial concentrations lower than or equal to a critical concentration) is established to delineate a limiting quality contour that separates satisfactory from unsatisfactory batches. The problem then consists of finding the optimum decision criterion - the arithmetic mean of the analytical results (microbiological limit, m_L) and the sample size (n) - that satisfies a pre-defined level of confidence measured on the sample-mean distributions from all possible true within-batch distributions. This is approached by obtaining decision landscape curves representing collectively the conditional and joint producer's and consumer's risks at different microbiological limits, along with confidence intervals representing uncertainty due to the propagated between-batch variability. Whilst the method requires a number of risk management decisions to be made, such as the objective of the sampling plan (GMP-based or risk-based), the modality of derivation, the tolerance criterion or level of safety, and the statistical level of confidence, the proposed method can be used when past monitoring data are available so as to produce statistically sound dynamic sampling plans with optimised efficiency and discriminatory power. For the illustration of Enterobacteriaceae concentrations on Irish sheep carcasses, a sampling regime of n=10 and m_L=17.5 CFU/cm² is recommended to ensure that the producer has at least a 90% confidence of accepting a satisfactory batch whilst the consumer has at least a 97.5% confidence that a batch will not be
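The producer's/consumer's-risk logic can be checked with a rough Monte Carlo sketch: draw Poisson counts and ask how often the mean of n analytical results stays at or below m_L. The within-batch mean counts below are hypothetical, and the between-batch (gamma) layer of the paper's Poisson-gamma model is deliberately omitted for brevity; only n and m_L follow the abstract's recommended plan.

```python
import random
from math import exp

def poisson(lam: float, rng: random.Random) -> int:
    """Knuth's multiplication method; adequate for the small means used here."""
    target, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= target:
            return k
        k += 1

def acceptance_prob(lam: float, n: int, m_l: float,
                    trials: int, rng: random.Random) -> float:
    """Estimate P(mean of n Poisson(lam) analytical results <= m_l)."""
    hits = sum(
        1 for _ in range(trials)
        if sum(poisson(lam, rng) for _ in range(n)) / n <= m_l
    )
    return hits / trials

rng = random.Random(1)
# hypothetical within-batch mean counts for a clearly satisfactory and a
# clearly unsatisfactory batch, evaluated under the n=10, m_L=17.5 plan
p_producer = acceptance_prob(10.0, 10, 17.5, 2000, rng)  # should be near 1
p_consumer = acceptance_prob(25.0, 10, 17.5, 2000, rng)  # should be near 0
```

One minus `p_producer` approximates the producer's risk (rejecting a good batch) and `p_consumer` the consumer's risk (accepting a bad one) for these assumed batch means.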

  16. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES-hat) and a 95% CI (ES-hat_L, ES-hat_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES-hat_U), n_U(ES-hat_L)] were obtained on a post hoc sample size, reflecting the uncertainty in ES-hat. Sample size calculations were based on a one-sample t-test, i.e., the patient numbers needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES-hat, ES = ES-hat_L, and ES = ES-hat_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's ES-hats. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
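The post hoc interval logic can be sketched with the normal approximation to the one-sample t-test, n = ((z_{1-α/2} + z_power) / ES)², evaluated at the ES point estimate and at its CI limits; note that the upper ES limit yields the lower sample-size bound and vice versa. The ES values below are illustrative assumptions, not the study's estimates.

```python
from math import ceil
from statistics import NormalDist

def n_one_sample(es: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """One-sample size for a standardized effect (Cohen's ES) via the
    normal approximation to the one-sample t-test."""
    z = NormalDist()
    return ceil(((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / es) ** 2)

# illustrative point estimate ES-hat = 0.60 with 95% CI (0.179, 0.89)
n_point = n_one_sample(0.60)   # point sample size
n_upper = n_one_sample(0.179)  # upper n bound, from the LOWER ES limit
n_lower = n_one_sample(0.89)   # lower n bound, from the UPPER ES limit
```

The inversion of the limits is why a modest ES uncertainty fans out into a very wide sample-size interval, as in the study's 22 (10-245).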

  17. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Background: Little overlap between independently developed gene signatures and poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results: A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced on the same microarray platform, and the effects of varying sample size on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples, as did the accuracy of outcome prediction. Finally, analysis using only estrogen receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion: Increasing the sample size generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  18. Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites

    International Nuclear Information System (INIS)

    2012-01-01

    This plan incorporates U.S. Department of Energy (DOE) Office of Legacy Management (LM) standard operating procedures (SOPs) into environmental monitoring activities and will be implemented at all sites managed by LM. This document provides detailed procedures for the field sampling teams so that samples are collected in a consistent and technically defensible manner. Site-specific plans (e.g., long-term surveillance and maintenance plans, environmental monitoring plans) document background information and establish the basis for sampling and monitoring activities. Information will be included in site-specific tabbed sections to this plan, which identify sample locations, sample frequencies, types of samples, field measurements, and associated analytes for each site. Additionally, within each tabbed section, program directives will be included, when developed, to establish additional site-specific requirements to modify or clarify requirements in this plan as they apply to the corresponding site. A flowchart detailing project tasks required to accomplish routine sampling is displayed in Figure 1. LM environmental procedures are contained in the Environmental Procedures Catalog (LMS/PRO/S04325), which incorporates American Society for Testing and Materials (ASTM), DOE, and U.S. Environmental Protection Agency (EPA) guidance. Specific procedures used for groundwater and surface water monitoring are included in Appendix A. If other environmental media are monitored, SOPs used for air, soil/sediment, and biota monitoring can be found in the site-specific tabbed sections in Appendix D or in site-specific documents. The procedures in the Environmental Procedures Catalog are intended as general guidance and require additional detail from planning documents in order to be complete; the following sections fulfill that function and specify additional procedural requirements to form SOPs. Routine revision of this Sampling and Analysis Plan will be conducted annually at the

  19. Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-10-24

    This plan incorporates U.S. Department of Energy (DOE) Office of Legacy Management (LM) standard operating procedures (SOPs) into environmental monitoring activities and will be implemented at all sites managed by LM. This document provides detailed procedures for the field sampling teams so that samples are collected in a consistent and technically defensible manner. Site-specific plans (e.g., long-term surveillance and maintenance plans, environmental monitoring plans) document background information and establish the basis for sampling and monitoring activities. Information will be included in site-specific tabbed sections to this plan, which identify sample locations, sample frequencies, types of samples, field measurements, and associated analytes for each site. Additionally, within each tabbed section, program directives will be included, when developed, to establish additional site-specific requirements to modify or clarify requirements in this plan as they apply to the corresponding site. A flowchart detailing project tasks required to accomplish routine sampling is displayed in Figure 1. LM environmental procedures are contained in the Environmental Procedures Catalog (LMS/PRO/S04325), which incorporates American Society for Testing and Materials (ASTM), DOE, and U.S. Environmental Protection Agency (EPA) guidance. Specific procedures used for groundwater and surface water monitoring are included in Appendix A. If other environmental media are monitored, SOPs used for air, soil/sediment, and biota monitoring can be found in the site-specific tabbed sections in Appendix D or in site-specific documents. The procedures in the Environmental Procedures Catalog are intended as general guidance and require additional detail from planning documents in order to be complete; the following sections fulfill that function and specify additional procedural requirements to form SOPs. Routine revision of this Sampling and Analysis Plan will be conducted annually at the

  20. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    Science.gov (United States)

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, over two seasons with the cotton cultivar FM 993. A 10,000-m² cotton area was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of attacked squares and a 5% security level. Type I and type II error rates of 0.05, as recommended for studies with insects, were used. The fitting of frequency distributions was divided into two phases: up to 85 DAE (Phase I) the model that best fit the data was the negative binomial distribution, and thereafter (Phase II) the best fit was the Poisson distribution. The equations that define the decision boundaries for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicates that the maximum number of sample units expected for decision-making is ∼39 and 31 samples for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
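Read as decision boundaries, the S0/S1 equations give, for N sample units inspected, a "stop, below threshold" region under S0, a "stop, above threshold" region over S1, and a "keep sampling" band between them. The sketch below assumes this standard sequential-test reading of the published equations, with N as the cumulative number of sample units and the statistic as the cumulative count of attacked squares.

```python
def decide(n_units: int, cum_attacked: float, phase: int = 1) -> str:
    """Sequential decision using the published boundary lines
    S0 = -a + b*N and S1 = a + b*N (phase 1: up to 85 DAE, negative
    binomial fit; phase 2: thereafter, Poisson fit)."""
    a, b = (5.1743, 0.5730) if phase == 1 else (4.2479, 0.5771)
    s0, s1 = -a + b * n_units, a + b * n_units
    if cum_attacked <= s0:
        return "stop: infestation below threshold"
    if cum_attacked >= s1:
        return "stop: infestation above threshold"
    return "continue sampling"

print(decide(10, 12))  # stop: infestation above threshold
```

With 10 units inspected in Phase I the band runs from about 0.56 to 10.90 attacked squares, which is why moderately infested fields need the most samples before a decision is reached.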

  1. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2,400-core Linux cluster and on a 153,216-core Cray XE6 petascale machine. © 2012 IEEE.

  2. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2,400-core Linux cluster and on a 153,216-core Cray XE6 petascale machine. © 2012 IEEE.
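The subdivide-and-connect idea can be illustrated with a toy, single-process sketch (not the paper's implementation): build an independent PRM-style roadmap in each region of a 2x2 subdivision of an obstacle-free unit square, then merge them by connecting the closest pair of samples across each adjacent region pair. The region layout, sample counts, and neighbor count k are arbitrary choices for illustration.

```python
import random
from math import dist

def regional_roadmap(rng, xr, yr, n, k=3):
    """PRM-style roadmap inside one region: sample n free configurations
    and connect each to its k nearest neighbours. With no obstacles, every
    straight-line edge is collision-free by construction."""
    pts = [(rng.uniform(*xr), rng.uniform(*yr)) for _ in range(n)]
    edges = set()
    for i, p in enumerate(pts):
        nearest = sorted(range(n), key=lambda j: dist(p, pts[j]))[1:k + 1]
        for j in nearest:
            edges.add((min(i, j), max(i, j)))
    return pts, edges

def parallel_style_prm(seed=0, n_per_region=25):
    rng = random.Random(seed)
    # 2x2 subdivision of an obstacle-free unit square; in the parallel
    # setting each region would be handled by a separate worker
    regions = [((0.0, 0.5), (0.0, 0.5)), ((0.5, 1.0), (0.0, 0.5)),
               ((0.0, 0.5), (0.5, 1.0)), ((0.5, 1.0), (0.5, 1.0))]
    nodes, edges, offsets = [], set(), []
    for xr, yr in regions:
        offsets.append(len(nodes))
        pts, es = regional_roadmap(rng, xr, yr, n_per_region)
        nodes += pts
        off = offsets[-1]
        edges |= {(a + off, b + off) for a, b in es}
    # merge step: connect each pair of edge-adjacent regions by the
    # closest cross-region pair of samples (locality-restricted attempt)
    bridges = []
    for r, s in [(0, 1), (0, 2), (1, 3), (2, 3)]:
        ri = range(offsets[r], offsets[r] + n_per_region)
        si = range(offsets[s], offsets[s] + n_per_region)
        bridges.append(min(((i, j) for i in ri for j in si),
                           key=lambda e: dist(nodes[e[0]], nodes[e[1]])))
    edges |= set(bridges)
    return nodes, edges, bridges

nodes, edges, bridges = parallel_style_prm()
```

The point of the restriction is visible even in the toy: nearest-neighbor search is done over 25 points per region and over adjacent-region pairs only, never over all 100 points at once.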

  3. Spatial Distribution and Sampling Plans for Grapevine Plant Canopy-Inhabiting Scaphoideus titanus (Hemiptera: Cicadellidae) Nymphs.

    Science.gov (United States)

    Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann

    2016-04-01

    The paper deals with the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and the data were subjected to an ANOVA. There were no significant differences in density either among the strata within the vineyard or between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. Data collected in additional vineyards were used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability (1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, (2) of the enumerative sampling plan, and (3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost-efficient than enumerative sampling.

  4. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures, major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentrations of the main elements in sample 75081 do not change with grain-size, except for Fe and Ti, which decrease slightly, and Al, which increases slightly, with decreasing grain-size. These changes in main-element composition suggest a decrease in ilmenite and an increase in anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes by less than a factor of 2. Samples 72501 and 72461 have not yet been analyzed for the main elements. (Auth.)

  5. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
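For reference, the widely used Hsieh-style normal-approximation formula for simple LR with one standardized continuous covariate is n = (z_{1-α/2} + z_{1-β})² / (P1(1-P1)·β*²), where P1 is the event rate at the covariate mean (the very parameter the authors argue is unnatural to specify) and β* is the log odds ratio per SD of the covariate. The sketch below is this uncorrected base formula, not the proposed modification, and the numbers are illustrative.

```python
from math import ceil, log
from statistics import NormalDist

def n_logistic(p1: float, beta_star: float,
               alpha: float = 0.05, power: float = 0.80) -> int:
    """Hsieh-style sample size for simple logistic regression with one
    standardized continuous covariate; p1 = event rate at the covariate
    mean, beta_star = log odds ratio per SD of the covariate."""
    z = NormalDist()
    num = (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2
    return ceil(num / (p1 * (1 - p1) * beta_star ** 2))

# e.g. 50% prevalence at the covariate mean, OR of 1.5 per SD
print(n_logistic(0.5, log(1.5)))  # 191
```

The abstract's criticism is visible in the signature: a user who plugs the overall population prevalence into `p1` instead of the prevalence at the covariate mean gets a different, potentially misleading n, which motivates the proposed modification.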

  6. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Sampling and Analysis Plan for K Basins Debris

    International Nuclear Information System (INIS)

    WESTCOTT, J.L.

    2000-01-01

    This Sampling and Analysis Plan presents the rationale and strategy for sampling and analysis activities to support removal of debris from the K-East and K-West Basins located in the 100K Area at the Hanford Site. This project is focused on characterization to support waste designation for disposal of waste at the Environmental Restoration Disposal Facility (ERDF). This material has previously been dispositioned at the Hanford Low-Level Burial Grounds or Central Waste Complex. The structures that house the basins are classified as radioactive material areas. Therefore, all materials removed from the buildings are presumed to be radioactively contaminated. Because most of the materials that will be addressed under this plan will be removed from the basins, and because of the cost associated with screening materials for release, it is anticipated that all debris will be managed as low-level waste. Materials will be surveyed, however, to estimate radionuclide content for disposal and to determine that the debris is not contaminated with levels of transuranic radionuclides that would designate the debris as transuranic waste

  8. The Dilemma and Way-Out of Urban and Rural Planning Management in China’s Small and Medium-Sized Cities

    Institute of Scientific and Technical Information of China (English)

    Dawei; WANG; Yandong; WANG

    2014-01-01

    In China, urban and rural planning management should play an important role in the urbanization of small and medium-sized cities. In reality, however, it has failed to do so because of a series of constraints, such as local fiscal deficiency, scarce human resources, incomplete management systems, historic planning defects, inadequate supervision, and imperfect regulations. This paper presents a comprehensive analysis of the dilemma of urban and rural planning management in China's small and medium-sized cities and of the interests and status of the government, enterprises, and the public in the allocation of space resources, and proposes methods to improve the quality of planning management in these cities from the perspective of systems and mechanisms.

  9. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
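
    The estimator described above can be written down directly. The sketch below is illustrative only: the function names are invented here, and the rough Wald interval treats the early-stage count as Poisson and deliberately ignores uncertainty in the mean stage duration, which the abstract warns can substantially reduce precision in practice.

```python
import math

def incidence_estimate(n_early, n_uninfected, mean_duration_years):
    """Cross-sectional incidence estimate: the count in the
    biomarker-defined early stage, divided by the uninfected count,
    approximates incidence times the mean duration of the early stage."""
    return n_early / (n_uninfected * mean_duration_years)

def rough_wald_ci(n_early, n_uninfected, mean_duration_years, z=1.96):
    """Rough log-scale Wald interval, ignoring duration uncertainty."""
    est = incidence_estimate(n_early, n_uninfected, mean_duration_years)
    se_log = 1 / math.sqrt(n_early)  # Poisson approximation for the count
    return est * math.exp(-z * se_log), est * math.exp(z * se_log)
```

For example, 50 early-stage individuals among 5000 uninfected, with a mean early-stage duration of 0.5 years, gives an incidence of about 2 per 100 person-years.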

  10. Sampling and Analysis Plan for the 216-A-29 Ditch

    International Nuclear Information System (INIS)

    Petersen, S.W.

    1998-06-01

    This sampling and analysis plan defines procedures to be used for collecting and handling samples to be obtained from the 216-A-29 Ditch, and identifies requirements for field and laboratory measurements. The sampling strategy described here is derived from a Data Quality Objectives workshop conducted in January 1997 to support sampling to assure worker safety during construction and to assess the validity of a 1988 ditch sampling campaign and the effectiveness of subsequent stabilization. The purpose of the proposed sampling and analysis activities is to characterize soil contamination in the vicinity of a proposed road over the 216-A-29 Ditch.

  11. Sampling and analysis plan for the former Atomic Energy Commission bus lot property

    International Nuclear Information System (INIS)

    Nielson, R.R.

    1998-07-01

    This sampling and analysis plan (SAP) presents the rationale and strategy for the sampling and analysis activities proposed in support of an initial investigation of the former Atomic Energy Commission (AEC) bus lot property currently owned by Battelle Memorial Institute. The purpose of the proposed sampling and analysis activity is to investigate the potential for contamination above established action levels. The SAP will provide defensible data of sufficient quality and quantity to support recommendations of whether any further action within the study area is warranted. To assist in preparing sampling plans and reports, the Washington State Department of Ecology (Ecology) has published Guidance on Sampling and Data Analysis Methods. To specifically address sampling plans for petroleum-contaminated sites, Ecology has also published Guidance for Remediation of Petroleum Contaminated Sites. Both documents were used as guidance in preparing this plan. In 1992, a soil sample was taken within the current study area as part of a project to remove two underground storage tanks (USTs) at Battelle's Sixth Street Warehouse Petroleum Dispensing Station (Section 1.3). The results showed that the sample contained elevated levels of total petroleum hydrocarbons (TPH) in the heavy distillate range. This current study was initiated in part as a result of that discovery. The following topics are considered: the historical background of the site, current site conditions, previous investigations performed at the site, an evaluation based on the available data, and the contaminants of potential concern (COPC)

  12. Sampling and analysis plan for the former Atomic Energy Commission bus lot property

    Energy Technology Data Exchange (ETDEWEB)

    Nielson, R.R.

    1998-07-01

    This sampling and analysis plan (SAP) presents the rationale and strategy for the sampling and analysis activities proposed in support of an initial investigation of the former Atomic Energy Commission (AEC) bus lot property currently owned by Battelle Memorial Institute. The purpose of the proposed sampling and analysis activity is to investigate the potential for contamination above established action levels. The SAP will provide defensible data of sufficient quality and quantity to support recommendations of whether any further action within the study area is warranted. To assist in preparing sampling plans and reports, the Washington State Department of Ecology (Ecology) has published Guidance on Sampling and Data Analysis Methods. To specifically address sampling plans for petroleum-contaminated sites, Ecology has also published Guidance for Remediation of Petroleum Contaminated Sites. Both documents were used as guidance in preparing this plan. In 1992, a soil sample was taken within the current study area as part of a project to remove two underground storage tanks (USTs) at Battelle's Sixth Street Warehouse Petroleum Dispensing Station (Section 1.3). The results showed that the sample contained elevated levels of total petroleum hydrocarbons (TPH) in the heavy distillate range. This current study was initiated in part as a result of that discovery. The following topics are considered: the historical background of the site, current site conditions, previous investigations performed at the site, an evaluation based on the available data, and the contaminants of potential concern (COPC).

  13. IP Sample Plan #1 | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    Sample letter that shows how universities, including co-investigators, consultants, and collaborators, can describe a data and research tool sharing plan and procedures for exercising intellectual property rights. The letter is to be used as part of the university's application.

  14. Ground-water sample collection and analysis plan for the ground-water surveillance project

    International Nuclear Information System (INIS)

    Bryce, R.W.; Evans, J.C.; Olsen, K.B.

    1991-12-01

    The Pacific Northwest Laboratory performs ground-water sampling activities at the US Department of Energy's (DOE's) Hanford Site in support of DOE's environmental surveillance responsibilities. The purpose of this document is to translate DOE's General Environmental Protection Program (DOE Order 5400.1) into a comprehensive ground-water sample collection and analysis plan for the Hanford Site. This sample collection and analysis plan sets forth the environmental surveillance objectives applicable to ground water, identifies the strategy for selecting sample collection locations, and lists the analyses to be performed to meet those objectives

  15. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
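
    A sample size calculation of this kind can be sketched with a common design-effect formulation for the two-period cross-sectional cluster crossover (e.g., following Giraudeau et al.); the details may differ from the formulae in the tutorial itself, and the function name and parameters below are invented for illustration. `rho` is the within-cluster, within-period correlation (ICC) and `eta` the within-cluster, between-period correlation.

```python
import math
from statistics import NormalDist

def crxo_subjects_per_intervention(p1, p2, m, rho, eta,
                                   alpha=0.05, power=0.80):
    """Subjects observed under each intervention in a two-period,
    cross-sectional cluster randomised crossover trial comparing
    event proportions p1 vs p2.

    m   -- subjects per cluster per period
    rho -- within-cluster, within-period correlation (ICC)
    eta -- within-cluster, between-period correlation
    """
    z = NormalDist().inv_cdf
    pbar = (p1 + p2) / 2
    # unadjusted two-proportion sample size per group
    n_ind = (z(1 - alpha / 2) + z(power)) ** 2 \
        * 2 * pbar * (1 - pbar) / (p1 - p2) ** 2
    de = 1 + (m - 1) * rho - m * eta  # crossover design effect
    return math.ceil(n_ind * de)
```

Setting eta = 0 recovers the familiar parallel cluster design effect 1 + (m - 1)rho; any eta > 0 shrinks the requirement, which is the sample size advantage the abstract reports for the CRXO design.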

  16. Liquid effluent Sampling and Analysis Plan (SAP) implementation summary report

    International Nuclear Information System (INIS)

    Lueck, K.J.

    1995-01-01

    This report summarizes liquid effluent analytical data collected during the Sampling and Analysis Plan (SAP) Implementation Program, evaluates whether or not the sampling performed meets the requirements of the individual SAPs, and compares the results to the WAC 173-200 Ground Water Quality Standards. Presented in the report are results from liquid effluent samples collected (1992-1994) from 18 of the 22 streams identified in the Consent Order (No. DE 91NM-177) requiring SAPs.

  17. Planning Considerations Related to Collecting and Analyzing Samples of the Martian Soils

    Science.gov (United States)

    Liu, Yang; Mellon, Mike T.; Ming, Douglas W.; Morris, Richard V.; Noble, Sarah K.; Sullivan, Robert J.; Taylor, Lawrence A.; Beaty, David W.

    2014-01-01

    The Mars Sample Return (MSR) End-to-End International Science Analysis Group (E2E-iSAG [1]) established scientific objectives associated with Mars returned-sample science that require the return and investigation of one or more soil samples. Soil is defined here as loose, unconsolidated materials with no implication for the presence or absence of organic components. The proposed Mars 2020 (M-2020) rover is likely to collect and cache soil in addition to rock samples [2], which could be followed by future sample retrieval and return missions. Here we discuss key scientific considerations for sampling and caching soil samples on the proposed M-2020 rover, as well as the state in which samples would need to be preserved when received by analysts on Earth. We are seeking feedback on these draft plans as input to mission requirement formulation. A related planning exercise on rocks is reported in an accompanying abstract [3].

  18. The Impact of Deferred Tax Assets, Discretionary Accrual, Leverage, Company Size and Tax Planning Onearnings Management Practices

    Directory of Open Access Journals (Sweden)

    Jacobus Widiatmoko

    2016-04-01

    Full Text Available The purpose of this study is to analyze and provide empirical evidence of the influence of deferred tax assets, discretionary accruals, leverage, company size, and tax planning on earnings management. Financial performance is an indicator required by company management to measure the effectiveness of company performance. This research used secondary data obtained from annual reports published at www.idx.co.id and from the Indonesian Capital Market Directory (ICMD). The population of the research is manufacturing companies listed on the Indonesia Stock Exchange from 2011-2013. Samples were selected using the purposive sampling method, yielding 208 observations examined by logistic regression analysis. The results show that deferred tax assets have a negative and not significant effect on earnings management, discretionary accruals have a negative and not significant effect on earnings management, leverage has a negative and significant effect on earnings management, company size has a positive and significant effect on earnings management, and tax planning has a positive and not significant effect on earnings management.

  19. IP Sample Plan #3 | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    Sample Research Resources and Intellectual Property Plan for use by an Institution and its Collaborators for intellectual property protection strategies covering pre-existing intellectual property, agreements with commercial sources, privacy, and licensing.

  20. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the

  1. UMTRA Project water sampling and analysis plan, Canonsburg, Pennsylvania. Revision 1

    International Nuclear Information System (INIS)

    1995-09-01

    Surface remedial action was completed at the US Department of Energy (DOE) Canonsburg and Burrell Uranium Mill Tailings Remedial Action (UMTRA) Project sites in southwestern Pennsylvania in 1985 and 1987, respectively. The Burrell disposal site, included in the UMTRA Project as a vicinity property, was remediated in conjunction with the remedial action at Canonsburg. On 27 May 1994, the Nuclear Regulatory Commission (NRC) accepted the DOE final Long-Term Surveillance Plan (LTSP) (DOE, 1993) for Burrell, thus establishing the site under the general license in 10 CFR section 40.27 (1994). In accordance with the DOE guidance document for long-term surveillance (DOE, 1995), all NRC/DOE interaction on the Burrell site's long-term care now is conducted with the DOE Grand Junction Projects Office in Grand Junction, Colorado, and is no longer the responsibility of the DOE UMTRA Project Team in Albuquerque, New Mexico. Therefore, the planned sampling activities described in this water sampling and analysis plan (WSAP) are limited to the Canonsburg site. This WSAP identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequencies for routine monitoring at the Canonsburg site for calendar years 1995 and 1996. Currently, the analytical data further the site characterization and demonstrate that the disposal cell's initial performance is in accordance with design requirements.

  2. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  3. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, it examined whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data-gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  4. Sampling and analysis plan for Wayne Interim Storage Site (WISS), Wayne, New Jersey

    International Nuclear Information System (INIS)

    Brown, K.S.; Murray, M.E.; Rodriguez, R.E.

    1998-10-01

    This field sampling plan describes the methodology to perform an independent radiological verification survey and chemical characterization of a remediated area of the subpile at the Wayne Interim Storage Site, Wayne, New Jersey. Data obtained from collection and analysis of systematic and biased soil samples will be used to assess the status of remediation at the site and verify the final radiological status. The objective of this plan is to describe the methods for obtaining sufficient and valid measurements and analytical data to supplement and verify a radiological profile already established by the Project Remediation Management Contractor (PMC). The plan describes the procedure for obtaining sufficient and valid analytical data on soil samples following remediation of the first layer of the subpile. Samples will be taken from an area of the subpile measuring approximately 30 m by 80 m from which soil has been excavated to a depth of approximately 20 feet to confirm that the soil beneath the excavated area does not exceed radiological guidelines established for the site or chemical regulatory limits for inorganic metals. After the WISS has been fully remediated, the Department of Energy will release it for industrial/commercial land use in accordance with the Record of Decision. This plan provides supplemental instructions to guidelines and procedures established for sampling and analysis activities. Procedures will be referenced throughout this plan as applicable, and are available for review if necessary.

  5. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  6. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch models for detecting differential item functioning (DIF), observed from the sample size. These two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  7. IP Sample Plan #4 | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    Sample letter from Research Institutes and their principal investigator and consultants, describing a data and research tool sharing plan and procedures for sharing data, research materials, and patent and licensing of intellectual property. This letter is designed to be included as part of an application.

  8. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  9. Operable Unit 3-13, Group 3, Other Surface Soils (Phase II) Field Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    G. L. Schwendiman

    2006-07-27

    This Field Sampling Plan describes the Operable Unit 3-13, Group 3, Other Surface Soils, Phase II remediation field sampling activities to be performed at the Idaho Nuclear Technology and Engineering Center located within the Idaho National Laboratory Site. Sampling activities described in this plan support characterization sampling of new sites, real-time soil spectroscopy during excavation, and confirmation sampling that verifies that the remedial action objectives and remediation goals presented in the Final Record of Decision for Idaho Nuclear Technology and Engineering Center, Operable Unit 3-13 have been met.

  10. 105-F and DR Phase 1 Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    Curry, L.R.

    1998-06-01

    This SAP presents the rationale and strategy for characterization of specific rooms within the 105-F and 105-DR reactor buildings. Figures 1-1 and 1-2 identify the rooms that are the subject of this SAP. These rooms are to be decontaminated and demolished as an initial step (Phase 1) in the Interim Safe Storage process for these reactors. Section 1.0 presents the background and site history for the reactor buildings and summarizes the data quality objective process, which provides the logical basis for this SAP. Preliminary surveys indicate that little radiochemical contamination is present. Section 2.0 presents the quality assurance project plan, which includes a project management structure, sampling methods and quality control, and oversight of the sampling process. Section 2.2.1 summarizes the sampling methods, reflecting the radiological and chemical sampling designs presented in Tables 1-17 and 1-18. Section 3.0 presents the Field Sampling Plan for Phase 1. The sampling design is broken into two stages. Stage 1 will verify the list of radioactive constituents of concern and generate the isotopic distribution. The objectives of Stage 2 are to estimate the radionuclide inventories of room debris, quantify chemical contamination, and survey room contents for potential salvage or recycle. Table 3-1 presents the sampling activities to be performed in Stage 1. Tables 1-17 and 1-18 identify samples to be collected in Stage 2. Stage 2 will consist primarily of survey data collection, with fixed laboratory samples to be collected in areas showing visible stains. Quality control sampling requirements are presented in Table 3-2.

  11. Uncertainties in planned dose due to the limited voxel size of the planning CT when treating lung tumors with proton therapy

    International Nuclear Information System (INIS)

    Espana, Samuel; Paganetti, Harald

    2011-01-01

    Dose calculation for lung tumors can be challenging due to the low density and the fine structure of the geometry. The latter is not fully considered in the CT image resolution used in treatment planning, causing the prediction of a more homogeneous tissue distribution. In proton therapy, this could result in predicting an unrealistically sharp distal dose falloff, i.e. an underestimation of the distal dose falloff degradation. The goal of this work was the quantification of such effects. Two computational phantoms resembling a two-dimensional heterogeneous random lung geometry and a swine lung were considered, applying a variety of voxel sizes for dose calculation. Monte Carlo simulations were used to compare the dose distributions predicted with the voxel size typically used for the treatment planning procedure with those expected to be delivered using the finest resolution. The results show, for example, distal falloff position differences of up to 4 mm between planned and expected dose at the 90% level for the heterogeneous random lung (assuming a treatment plan on a 2 × 2 × 2.5 mm³ grid). For the swine lung, differences of up to 38 mm were seen when airways are present in the beam path when the treatment plan was done on a 0.8 × 0.8 × 2.4 mm³ grid. The two-dimensional heterogeneous random lung phantom apparently does not describe the impact of the geometry adequately because of the lack of heterogeneities in the axial direction. The differences observed in the swine lung between planned and expected dose are presumably due to the poor axial resolution of the CT images used in clinical routine. In conclusion, when assigning margins for treatment planning for lung cancer, proton range uncertainties due to the heterogeneous lung geometry and CT image resolution need to be considered.

  12. Stability measures for rolling schedules with applications to capacity expansion planning, master production scheduling, and lot sizing

    OpenAIRE

    Kimms, Alf

    1996-01-01

    This contribution discusses the measurement of (in-)stability of finite-horizon production planning when done on a rolling horizon basis. As examples we review strategic capacity expansion planning, tactical master production scheduling, and operational capacitated lot sizing.

  13. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs that control the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram for varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels can be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing setups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
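    The underlying formula (the standard one from the literature; the function names and example numbers are mine) can be applied directly, and it also explains the 0.70 and 1.75 multipliers as the squared ratios of the 90% and 99% z-values to the 95% z-value:

```python
from math import ceil
from statistics import NormalDist

def n_for_sensitivity(sens, prevalence, precision, conf=0.95):
    """Sample size to estimate sensitivity to within +/- `precision`.

    The denominator scales by prevalence because only diseased
    subjects contribute information about sensitivity."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z**2 * sens * (1 - sens) / (precision**2 * prevalence))

def n_for_specificity(spec, prevalence, precision, conf=0.95):
    """Analogous sample size for specificity (scaled by 1 - prevalence)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z**2 * spec * (1 - spec) / (precision**2 * (1 - prevalence)))

# e.g. anticipated sensitivity 0.90, prevalence 0.20, precision +/- 0.05
print(n_for_sensitivity(0.90, 0.20, 0.05))
```

    Consistent with the abstract's shortcut, the 90% and 99% figures come out to roughly (1.645/1.960)² ≈ 0.70 and (2.576/1.960)² ≈ 1.73 times the 95% figure.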

  15. ASSESSMENT OF KNOWLEDGE REGARDING FAMILY PLANNING METHODS AND INTENDED FAMILY SIZE AMONG MEN OF URBAN SLUM

    Directory of Open Access Journals (Sweden)

    Anand Mohan Dixit

    2013-09-01

    Full Text Available Objective: To assess the knowledge of contraceptive methods and intended family size among men of an urban slum. Material and Method: The present study was conducted in an urban slum area of Jaipur. Information from 400 married men aged 18-49 years was collected on a semi-structured schedule during June to October 2012. A house-to-house survey was conducted to achieve the defined sample size. Data were analyzed using SPSS 12 software. Chi-square, t tests and ANOVA were used for interpretation. Result and Conclusion: The most commonly known methods of family planning were female sterilization (95.2%), condom (94.7%) and male sterilization (93.5%). The IUCD (57%) was still not a popularly known method of contraception. Emergency contraceptive pills (12.2%) and injectables (25.7%) were the least known methods among men. Knowledge of the different contraceptives differed according to the educational status and caste of the men. TV and radio were the main sources of information; only 16% of men said that they got information from health personnel. The mean present family size was 3.125 while the mean desired family size was 2.63, showing that the two-child norm is not ideal to all. Of the men who already had two children, 53% still wanted to expand their family. Approximately half of the men felt that they had a larger family size, the main reasons being inappropriate knowledge (37%) and ignorance (21%). Among men who wanted to expand their family, son preference was the major reason. Only 3% of men regarded one child as ideal for a family, indicating that a one-child norm is too far to reach.

  17. Tank 241-B-203 push mode core sampling and analysis plan. Revision 1

    International Nuclear Information System (INIS)

    Jo, J.

    1995-01-01

    This Sampling and Analysis Plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for two push-mode core samples from tank 241-B-203 (B-203)

  18. Tank 241-B-204 push mode core sampling and analysis plan. Revision 1

    International Nuclear Information System (INIS)

    Sasaki, L.M.

    1995-01-01

    This Sampling and Analysis Plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for two push-mode core samples from tank 241-B-204 (B-204)

  19. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  20. Impact of cone-beam computed tomography on implant planning and on prediction of implant size

    International Nuclear Information System (INIS)

    Pedroso, Ludmila Assuncao de Mello; Silva, Maria Alves Garcia Santos; Garcia, Robson Rodrigues; Leles, Jose Luiz Rodrigues; Leles, Claudio Rodrigues

    2013-01-01

    The aim was to investigate the impact of cone-beam computed tomography (CBCT) on implant planning and on prediction of final implant size. Consecutive patients referred for implant treatment were submitted to clinical examination, panoramic (PAN) radiography and a CBCT exam. Initial planning of implant length and width was assessed based on the clinical and PAN exams, and final planning was based on the CBCT exam to complement the diagnosis. The actual dimensions of the implants placed during surgery were compared with those obtained during initial and final planning, using the McNemar test (p < 0.05). It was concluded that CBCT improves the ability to predict the actual implant length and reduces inaccuracy in surgical dental implant planning. (author)

  1. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  2. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples

  3. Test plan for K Basin Sludge Canister and Floor Sampling Device

    International Nuclear Information System (INIS)

    Meling, T.A.

    1995-01-01

    This document provides the test plan and procedure forms for conducting the functional and operational acceptance testing of the K Basin Sludge Canister and Floor Sampling Device(s). These samplers collect sludge from the floor of the 100K Basins and from 100K fuel storage canisters

  4. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    Full Text Available The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread to about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size for monitoring the S. noctilio population and the efficiency of its natural enemies, and the system was found to be fully adequate.

  5. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactor (Neutron Activation Analysis (NAA) Section), size-fractionated particulate matter collection for NAA was started in 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural", respectively. At each site, two size fractions, PM2-10 and PM2 (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (the sum of the PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m³ in Tokyo and 0.022 mg/m³ in Sakata. (author)

  6. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but the gain per additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
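    The interplay between sample fluctuation (between-GP variation) and measurement fluctuation (a finite number of probes per GP) can be sketched with a simulation. This is a hypothetical model with made-up parameters, not the study's data or exact equations:

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_halfwidth(n_gps, m_per_gp, reps=2000):
    """Simulated 95% CI half-width for mean weekly working hours.

    Assumed model: GP i truly works a fraction p_i of the week's 168 h,
    with p_i ~ Beta(50, 125) (mean ~0.29, i.e. ~48 h/week); each of the
    m_per_gp time-sampling probes observes "working" with probability p_i.
    """
    est = np.empty(reps)
    for r in range(reps):
        p = rng.beta(50, 125, n_gps)               # between-GP variation
        hits = rng.binomial(m_per_gp, p)           # measurement variation
        est[r] = 168 * np.mean(hits / m_per_gp)    # estimated mean hours
    return 1.96 * est.std(ddof=1)

# More probes per GP narrow the CI for a fixed panel, so fewer GPs
# are needed to reach a given precision
print(ci_halfwidth(100, 56))    # 100 GPs, one probe per 3-h slot
print(ci_halfwidth(100, 168))   # 100 GPs, one probe per hour
print(ci_halfwidth(300, 56))    # larger panel, coarser probing
```

    The point the abstract makes falls out directly: precision can be bought either with more participants or with more measurements per participant, and the two trade off against each other.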

  7. Tank 241-TX-113 rotary mode core sampling and analysis plan

    International Nuclear Information System (INIS)

    McCain, D.J.

    1998-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for push mode core samples from tank 241-TX-113 (TX-113). The Tank Characterization Technical Sampling Basis document identifies Retrieval, Pretreatment and Immobilization as an issue that applies to tank TX-113. As a result, a 150 gram composite of solids shall be made and archived for that program. This tank is not on a Watch List

  8. Gas and liquid sampling for closed canisters in KW Basin - Work Plan

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1995-01-01

    Work plan for the design and fabrication of a gas/liquid sampler for closed canister sampling in KW Basin. This document defines the tasks associated with the design, fabrication, assembly, and acceptance testing of the equipment necessary for gas and liquid sampling of the Mark I and Mark II canisters in the K-West basin. Sampling of the gas space and the remaining liquid inside the closed canisters will be used to help understand any changes to the fuel elements and the canisters. Specifically, this work plan will define the scope of work and required task structure, list the technical requirements, describe design configuration control and verification methodologies, detail quality assurance requirements, and present a baseline estimate and schedule

  9. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
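    A minimal sketch of interim sample size reestimation with a Bayesian predictive approach is given below. This is a Beta-Binomial toy version constructed for illustration only; the uniform prior, the 0.3 response-rate threshold, and the 0.6 predictive target are arbitrary assumptions, not the authors' design:

```python
from math import comb, exp, lgamma

def beta_binom_pmf(k, n, a, b):
    """P(k responses in n future patients) under a Beta(a, b) posterior."""
    log_beta = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(n, k) * exp(log_beta(a + k, b + n - k) - log_beta(a, b))

def reestimated_n2(s1, n1, a=1, b=1, rate=0.3, target=0.6,
                   grid=range(10, 201, 10)):
    """Smallest stage-2 size whose posterior predictive probability of
    ending with an overall response rate above `rate` reaches `target`."""
    a1, b1 = a + s1, b + n1 - s1          # posterior after stage 1
    for n2 in grid:
        pred = sum(beta_binom_pmf(k, n2, a1, b1)
                   for k in range(n2 + 1)
                   if (s1 + k) / (n1 + n2) > rate)
        if pred >= target:
            return n2
    return max(grid)                      # cap when data look poor

print(reestimated_n2(s1=8, n1=20))   # promising interim data -> small n2
print(reestimated_n2(s1=2, n1=20))   # weak interim data -> capped n2
```

    The design choice mirrors the abstract's motivation: promising interim data let the trial shrink an overly large initial sample size, while discouraging data push the plan toward its prespecified cap.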

  10. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
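    The asymptotic growth of MCP area with the number of locations is easy to reproduce in miniature. The fixes below are simulated, not the bears' data, and the bivariate-normal movement model is an assumption:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
# Hypothetical animal: 500 GPS fixes drawn from a 2-D normal
# (km east/north of an activity center)
fixes = rng.normal(0, 5, size=(500, 2))

def mcp_area(points):
    """Area of the minimum convex polygon around a set of fixes."""
    return ConvexHull(points).volume  # in 2-D, .volume is the area

# MCP area from random subsets grows toward the full-sample area,
# since a subset's hull can never exceed the full hull
for n in (15, 60, 250, 500):
    subset = fixes[rng.choice(len(fixes), size=n, replace=False)]
    print(n, round(mcp_area(subset), 1))
```

    Because the MCP of a subset is always contained in the MCP of the full sample, small radiotracking samples necessarily understate home range area, which is the bias the abstract describes.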

  11. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    Full Text Available We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging with constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump that generates negative pressure, creating a steady flow by drawing air from the receiving conical flask (i.e., acting as a vacuum pump) and transferring plankton from the sample container through the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties in applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, since the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e., bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with syringe pumps and Field of View (FOV) flowcells that can image all particles passing through the flow field, these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis versus conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM image system after ground truthing.

  12. Tank 241-AY-101 Privatization Push Mode Core Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    TEMPLETON, A.M.

    2000-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AY-101. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AY-101 required to satisfy "Data Quality Objectives For RPP Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For High-Level Waste Feed Batch X" (HLW DQO) (Nguyen 1999a), "Data Quality Objectives For TWRS Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For Low-Activity Waste Feed Batch X" (LAW DQO) (Nguyen 1999b), "Low Activity Waste and High-Level Waste Feed Data Quality Objectives" (L and H DQO) (Patello et al. 1999), and "Characterization Data Needs for Development, Design, and Operation of Retrieval Equipment Developed through the Data Quality Objective Process" (Equipment DQO) (Bloom 1996). Special instructions regarding support to the LAW and HLW DQOs are provided by Baldwin (1999). Push mode core samples will be obtained from risers 15G and 150 to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives. The 222-S Laboratory will extrude core samples; composite the liquids and solids; perform chemical analyses on composite and segment samples; archive half-segment samples; and provide sub-samples to the Process Chemistry Laboratory. The Process Chemistry Laboratory will prepare test plans and perform process tests to evaluate the behavior of the 241-AY-101 waste undergoing the retrieval and treatment scenarios defined in the applicable DQOs. Requirements for analyses of samples originating in the process tests will be documented in the corresponding test plans and are not within the scope of this SAP

  13. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
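    For the quantitative-data case, the usual computation (a standard noncentral-F approach, not necessarily the authors' exact formulation) searches for the smallest per-group n reaching a target power, given Cohen's effect size f:

```python
from scipy import stats

def anova_n_per_group(f_effect, k, alpha=0.05, power=0.80, n_max=1000):
    """Smallest per-group n for a one-way ANOVA with k groups to reach
    the target power, using the noncentral F distribution."""
    for n in range(2, n_max):
        df1, df2 = k - 1, k * n - k
        nc = f_effect**2 * k * n            # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        achieved = stats.ncf.sf(f_crit, df1, df2, nc)
        if achieved >= power:
            return n
    return None

# e.g. a medium effect (Cohen's f = 0.25) across k = 3 groups
print(anova_n_per_group(0.25, 3))
```

    Larger effect sizes shrink the required n per group sharply, which is why the anticipated effect size dominates these calculations.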

  14. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  15. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique from an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size comparisons are made between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector sampling method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements. TEM image analysis of the two sampling methods showed good agreement with the SMPS measurements. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, the effect of coalescence or sintering becomes an important particle growth mechanism, which reduces the coagulation process. However, if the flame temperature is not high enough to sinter the aggregated particles, then coagulation is the dominant particle growth mechanism. In certain flame conditions secondary particle formation is observed, which results in a bimodal particle size distribution

  16. UMTRA project water sampling and analysis plan, Falls City, Texas. Revision 1

    International Nuclear Information System (INIS)

    1995-09-01

    Planned, routine ground water sampling activities at the US Department of Energy (DOE) Uranium Mill Tailings Remedial Action (UMTRA) Project site near Falls City, Texas, are described in this water sampling and analysis plan (WSAP). The plan identifies and justifies the sampling locations, analytical parameters, and sampling frequency for the routine monitoring stations at the site. The ground water data are used for site characterization and risk assessment. The regulatory basis for routine ground water monitoring at UMTRA Project sites is derived from the US Environmental Protection Agency (EPA) regulations in 40 CFR Part 192. Sampling procedures are guided by the UMTRA Project standard operating procedures (SOP) (JEG, n.d.), the Technical Approach Document (TAD) (DOE, 1989), and the most effective technical approach for the site. The Falls City site is in Karnes County, Texas, approximately 8 miles (13 kilometers) southwest of the town of Falls City and 46 mi (74 km) southeast of San Antonio, Texas. Before surface remedial action, the tailings site consisted of two parcels. Parcel A consisted of the mill site, one mill building, five tailings piles, and one tailings pond south of Farm-to-Market (FM) Road 1344 and west of FM 791. A sixth tailings pile, designated Parcel B, was north of FM 791 and east of FM 1344

  17. Factors that influence planning for physical activity among workers in small- and medium-sized enterprises

    Directory of Open Access Journals (Sweden)

    Sawako Kawahara

    2018-06-01

    Full Text Available Physical activity (PA is necessary for improving the health of workers in small- to medium-sized enterprises (SMEs. However, behavioral changes conducive to PA are often difficult to achieve despite intentions. Because intention to perform PA does not always translate to action, proper planning may be critical for achieving PA. In this study, we aimed to identify factors related to planning for PA among workers in SMEs because this is one population that has been identified as being at higher risk for lifestyle-related diseases in Japan. Participants completed a series of validated questionnaires. Of 353 valid responses, 226 individuals (149 men; aged 47.5 ± 8.7 years stated their intention to perform PA. Multiple regression analysis indicated that a higher PA planning score was significantly associated with higher self-efficacy for PA (p < 0.001, higher risk perception regarding inactivity (p = 0.012, and greater knowledge of information about PA community services (p = 0.019. Therefore, we recommend that self-efficacy, risk perception, and information regarding PA community services are enhanced in the daily working lives of workers at their workplaces. In this manner, they can promote their planning of health behavioral changes in a supportive environment, drawing upon available services, supports, and other resources. Keywords: Workers, Planning, Intention, Physical activity, Small- and medium-sized enterprises

  18. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly defined by a binomial approximation without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. If the estimated accuracy diverges from the a priori accuracy, the sampling error will deviate from the expected error. Determining the sample size from a pilot sample (the theoretically correct procedure) is justified when no prior estimate of accuracy is available for the study area, with respect to the utility of the remote sensing product.
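    The binomial approximation in question is a one-liner (the function and the example accuracies are mine), and it makes the abstract's point concrete: fixing accuracy a priori can badly understate the reference sample a pilot-informed value would require:

```python
from math import ceil
from statistics import NormalDist

def reference_sample_size(accuracy, precision, conf=0.95):
    """Binomial approximation for the number of reference samples needed
    to estimate classification accuracy to within +/- `precision`."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z**2 * accuracy * (1 - accuracy) / precision**2)

# An a priori guess of 90% accuracy versus a (hypothetical) pilot
# sample suggesting the true accuracy is nearer 70%
print(reference_sample_size(0.90, 0.05))  # a priori guess
print(reference_sample_size(0.70, 0.05))  # pilot-informed value
```

    Because p(1 - p) peaks at p = 0.5, an optimistic a priori accuracy always yields the smaller (and possibly insufficient) sample.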

  20. Chain sampling plan (ChSP-1) for desired acceptable quality level (AQL) and limiting quality level (LQL)

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2017-11-01

    The chain sampling plan is widely used whenever a small-sample attributes plan is required for situations involving destructive testing of products from a continuous production process [1, 2]. This paper presents a procedure for the construction and selection of a ChSP-1 plan by attributes inspection based on membership functions [3]. A procedure using a search technique is developed for obtaining the parameters of a single sampling plan for a given set of AQL and LQL values. A sample of tables providing ChSP-1 plans for various combinations of AQL and LQL values is presented [4].
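    As an illustration of the record above: under a ChSP-1 plan a lot is accepted on zero defectives in the current sample of n, or on exactly one defective provided the preceding i samples had none, giving the operating characteristic Pa(p) = P0 + P1·P0^i. A minimal sketch of this rule with a brute-force parameter search (function names, search ranges and risk values are illustrative assumptions, not the paper's tables or its search technique):

```python
def chsp1_accept_prob(p: float, n: int, i: int) -> float:
    """Operating characteristic of a ChSP-1 plan: accept on zero
    defectives in the current sample of n, or on exactly one defective
    provided the preceding i samples had none:
        Pa(p) = P0 + P1 * P0**i,  Pd = binomial(n, p) pmf at d."""
    p0 = (1.0 - p) ** n                # P(0 defectives in a sample)
    p1 = n * p * (1.0 - p) ** (n - 1)  # P(exactly 1 defective)
    return p0 + p1 * p0 ** i

def find_plan(aql: float, lql: float, alpha=0.05, beta=0.10,
              max_n=200, max_i=5):
    """Brute-force search for a smallest-n ChSP-1 plan meeting the
    producer's risk at the AQL and the consumer's risk at the LQL."""
    for n in range(1, max_n + 1):
        for i in range(1, max_i + 1):
            if (chsp1_accept_prob(aql, n, i) >= 1.0 - alpha
                    and chsp1_accept_prob(lql, n, i) <= beta):
                return n, i
    return None  # no plan within the searched range
```

    Here the producer's risk at the AQL is held below 5% and the consumer's risk at the LQL below 10%; the membership-function selection step of the paper is not reproduced.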

  1. Supplement to the UMTRA Project water sampling and analysis plan, Mexican Hat, Utah

    International Nuclear Information System (INIS)

    1995-09-01

    This water sampling and analysis plan (WSAP) supplement supports the regulatory and technical basis for water sampling at the Mexican Hat, Utah, Uranium Mill Tailings Remedial Action (UMTRA) Project site, as defined in the 1994 WSAP document for Mexican Hat (DOE, 1994). Further, the supplement serves to confirm our present understanding of the site relative to the hydrogeology and contaminant distribution as well as our intention to continue to use the sampling strategy as presented in the 1994 WSAP document for Mexican Hat. Ground water and surface water monitoring activities are derived from the US Environmental Protection Agency regulations in 40 CFR Part 192 (1991) and 60 FR 2854 (1995). Sampling procedures are guided by the UMTRA Project standard operating procedures (JEG, n.d.), the Technical Approach Document (DOE, 1989), and the most effective technical approach for the site. Additional site-specific documents relevant to the Mexican Hat site are the Mexican Hat Long-Term Surveillance Plan (currently in progress), and the Mexican Hat Site Observational Work Plan (currently in progress)

  2. The U.S. Geological Survey Geologic Collections Management System (GCMS)—A master catalog and collections management plan for U.S. Geological Survey geologic samples and sample collections

    Science.gov (United States)

    ,

    2015-01-01

    The U.S. Geological Survey (USGS) is widely recognized in the earth science community as possessing extensive collections of earth materials collected by research personnel over the course of its history. In 2006, a Geologic Collections Inventory was conducted within the USGS Geology Discipline to determine the extent and nature of its sample collections, and in 2008, a working group was convened by the USGS National Geologic and Geophysical Data Preservation Program to examine ways in which these collections could be coordinated, cataloged, and made available to researchers both inside and outside the USGS. The charge to this working group was to evaluate the proposition of creating a Geologic Collections Management System (GCMS), a centralized database that would (1) identify all existing USGS geologic collections, regardless of size, (2) create a virtual link among the collections, and (3) provide a way for scientists and other researchers to obtain access to the samples and data in which they are interested. Additionally, the group was instructed to develop criteria for evaluating current collections and to establish an operating plan and set of standard practices for handling, identifying, and managing future sample collections. Policies and procedures promoted by the GCMS would be based on extant best practices established by the National Science Foundation and the Smithsonian Institution. The resulting report—USGS Circular 1410, “The U.S. Geological Survey Geologic Collections Management System (GCMS): A Master Catalog and Collections Management Plan for U.S. Geological Survey Geologic Samples and Sample Collections”—has been developed for sample repositories to be a guide to establishing common practices in the collection, retention, and disposal of geologic research materials throughout the USGS.

  3. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory

    2012-05-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5. An important primitive of these methods is the local planner, which is used for validation of simple paths between two configurations. The most common is the straight-line local planner, which interpolates along the straight line between the two configurations. In this paper, we introduce a new local planner, Toggle Local Planner (Toggle LP), which extends local planning to a two-dimensional subspace of the overall planning space. If no path exists between the two configurations in the subspace, then Toggle LP is guaranteed to correctly return false. Intuitively, more connections could be found by Toggle LP than by the straight-line planner, resulting in better-connected roadmaps. As shown in our results, this is the case, and the extra cost of Toggle LP, in terms of time or storage, is minimal. Moreover, our experimental analysis of the planner shows the benefit for a wide array of robots, with DOF as high as 70. © 2012 IEEE.
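    The straight-line local planner that this record contrasts with Toggle LP can be sketched in a few lines: interpolate between the two configurations at a fixed resolution and validate each intermediate configuration (the validity checker is supplied by the caller; the step size and names are illustrative assumptions):

```python
import numpy as np

def straight_line_lp(q1, q2, is_valid, step=0.05):
    """Straight-line local planner: interpolate from q1 to q2 at the
    given resolution and report whether every intermediate
    configuration passes the caller-supplied validity check."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    n_steps = max(1, int(np.ceil(np.linalg.norm(q2 - q1) / step)))
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        if not is_valid((1.0 - t) * q1 + t * q2):
            return False  # an intermediate configuration is invalid
    return True
```

    For example, with a disk obstacle of radius 0.5 at the origin, the segment from (-1, 0) to (1, 0) is rejected while the segment from (-1, 1) to (1, 1) is accepted.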

  4. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    Science.gov (United States)

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed-precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Mean-variance relationships were fit to the samples and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate them. Using a negative binomial distribution with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
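    The Taylor mean-variance model named above can be sketched as follows: fit s² = a·m^b on log scales, then size the sample so that the standard error is a fixed fraction D of the mean, giving n = a·m^(b−2)/D². A hedged illustration on synthetic data (the coefficients and precision level are made up, not the study's estimates):

```python
import math
import numpy as np

def fit_taylor(means, variances):
    """Fit Taylor's power law s**2 = a * m**b by least squares on logs."""
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

def fixed_precision_n(mean, a, b, D=0.25):
    """Quadrats needed so that SE/mean equals D under Taylor's law:
        n = a * m**(b - 2) / D**2."""
    return math.ceil(a * mean ** (b - 2) / D ** 2)
```

    With a = 2 and b = 1.5, for example, a mean density of 4 ticks per quadrat at 25% precision requires 16 quadrats; lower densities push the requirement up sharply, which is the practical point of a fixed-precision plan.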

  5. Gas liquid sampling for closed canisters in KW Basin - test plan

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1995-01-01

    This document outlines test procedures for the gas/liquid sampler. Characterization of the spent nuclear fuel (SNF) sealed in canisters at the KW Basin is needed to determine the condition of SNF stored wet. Samples of the liquid and the gas in the closed canisters will be taken to obtain characterization information. Sampling equipment has been designed to retrieve gas and liquid from the closed canisters in the KW Basin. This plan outlines the test requirements for this developmental sampling equipment

  6. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. The statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002 and III-43007]
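    A common way to size a representative sample from the variability of an initial sample, in the spirit of this record, is n = (z·CV/E)², where CV is the coefficient of variation and E the tolerated relative error. A sketch using a normal approximation (the study's exact procedure may differ; the parameter values below are illustrative):

```python
import math
from statistics import NormalDist

def representative_n(mean, sd, rel_error=0.05, confidence=0.95):
    """Sample size so that the sample mean falls within rel_error of
    the population mean with the given confidence (normal
    approximation): n = (z * CV / rel_error)**2, CV = sd / mean."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    return math.ceil((z * sd / mean / rel_error) ** 2)
```

    Halving the tolerated relative error roughly quadruples the required sample size, which is why the lower limit matters for cost efficiency.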

  7. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    Full Text Available This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscopy (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form first need to be broken down to nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential measurements have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information, and save considerable time and effort, in the selection of a suitable substrate on which particles of different properties can be attracted and kept without further aggregation. This simple, low-cost methodology can be generally applied to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  8. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, two-sample t-test, Wald test and score test using generalized estimating equations (GEE), and F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
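    Design 1 above (one eye of each subject in each comparison group) can be explored with a small simulation: generate correlated eye pairs under the null and record how often the paired t-test rejects. This is a sketch, not the paper's simulation protocol; the parameter values are illustrative and the critical value is hard-coded for the default 20 subjects:

```python
import numpy as np

def type1_error_paired(n_subjects=20, rho=0.5, n_sim=2000, seed=0):
    """Empirical type I error of the paired t-test (alpha = 0.05) when
    a subject's two eyes, one per comparison group, share inter-eye
    correlation rho; under the null both eyes have mean zero."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    t_crit = 2.093  # two-sided 5% critical value, t with 19 df
    rejections = 0
    for _ in range(n_sim):
        eyes = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)
        d = eyes[:, 0] - eyes[:, 1]  # inter-eye differences
        t = d.mean() / (d.std(ddof=1) / np.sqrt(n_subjects))
        if abs(t) > t_crit:
            rejections += 1
    return rejections / n_sim
```

    Because differencing absorbs the inter-eye correlation, the empirical rate stays near the nominal 5% for any rho, consistent with the record's finding that the paired t-test behaves well in this design.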

  9. Failure-censored accelerated life test sampling plans for Weibull distribution under expected test time constraint

    International Nuclear Information System (INIS)

    Bai, D.S.; Chun, Y.R.; Kim, J.G.

    1995-01-01

    This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log linear function of a (possibly transformed) stress. Two levels of stress higher than the use condition stress, high and low, are used. Sampling plans with equal expected test times at high and low test stresses which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability are obtained. The properties of the proposed life-test sampling plans are investigated

  10. Transuranic waste characterization sampling and analysis plan

    International Nuclear Information System (INIS)

    1994-01-01

    Los Alamos National Laboratory (the Laboratory) is located approximately 25 miles northwest of Santa Fe, New Mexico, situated on the Pajarito Plateau. Technical Area 54 (TA-54), one of the Laboratory's many technical areas, is a radioactive and hazardous waste management and disposal area located within the Laboratory's boundaries. The purpose of this transuranic waste characterization, sampling, and analysis plan (CSAP) is to provide a methodology for identifying, characterizing, and sampling approximately 25,000 containers of transuranic waste stored at Pads 1, 2, and 4, Dome 48, and the Fiberglass Reinforced Plywood Box Dome at TA-54, Area G, of the Laboratory. Transuranic waste currently stored at Area G was generated primarily from research and development activities, processing and recovery operations, and decontamination and decommissioning projects. This document was created to facilitate compliance with several regulatory requirements and program drivers that are relevant to waste management at the Laboratory, including concerns of the New Mexico Environment Department

  11. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from which water samples were collected. Because data sets in ecology and the environmental sciences are invariably small, owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
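    The bootstrap procedure described above can be sketched directly: repeatedly resample stations at a chosen sample size and collect the eigenvalues of the correlation matrix of each resample (a minimal illustration, not the study's exact protocol):

```python
import numpy as np

def bootstrap_eigenvalues(data, sample_size, n_boot=100, seed=0):
    """Draw bootstrap samples of the given size from the rows of
    `data` and return the correlation-matrix eigenvalues of each
    resample, sorted in descending order, as an (n_boot, p) array."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    out = np.empty((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=sample_size)  # resample stations
        r = np.corrcoef(data[idx], rowvar=False)
        out[b] = np.sort(np.linalg.eigvalsh(r))[::-1]
    return out
```

    Comparing the spread of each eigenvalue across resamples at N = 20 versus N = 50 shows how eigenstructure stability grows with sample size.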

  12. UMTRA project water sampling and analysis plan, Salt Lake City, Utah

    International Nuclear Information System (INIS)

    1994-06-01

    Surface remedial action was completed at the Salt Lake City, Utah, Uranium Mill Tailings Remedial Action (UMTRA) Project site in the fall of 1987. Results of water sampling for the years 1992 to 1994 indicate that site-related ground water contamination occurs in the shallow unconfined aquifer (the uppermost aquifer). With respect to background ground water quality, contaminated ground water in the shallow, unconfined aquifer has elevated levels of chloride, sodium, sulfate, total dissolved solids, and uranium. No contamination associated with the former tailings pile occurs in levels exceeding background in ground water in the deeper confined aquifer. This document provides the water sampling and analysis plan for ground water monitoring at the former uranium processing site in Salt Lake City, Utah (otherwise known as the ''Vitro'' site, named after the Vitro Chemical Company that operated the mill). All contaminated materials removed from the processing site were relocated and stabilized in a disposal cell near Clive, Utah, some 85 miles west of the Vitro site (known as the ''Clive'' disposal site). No ground water monitoring is being performed at the Clive disposal site, since concurrence of the remedial action plan by the US Nuclear Regulatory Commission and completion of the disposal cell occurred before the US Environmental Protection Agency issued draft ground water standards in 1987 (52 FR 36000) for cleanup, stabilization, and control of residual radioactive materials at the disposal site. In addition, the likelihood of post-closure impact on the ground water is minimal to nonexistent, due to the naturally poor quality of the ground water. Water sampling activities planned for calendar year 1994 consist of sampling ground water from nine monitor wells to assess the migration of contamination within the shallow unconfined aquifer and sampling ground water from two existing monitor wells to assess ground water quality in the confined aquifer

  13. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  14. Statistical sampling plan for the TRU waste assay facility

    International Nuclear Information System (INIS)

    Beauchamp, J.J.; Wright, T.; Schultz, F.J.; Haff, K.; Monroe, R.J.

    1983-08-01

    Due to limited space, there is a need to dispose appropriately of the Oak Ridge National Laboratory transuranic waste which is presently stored below ground in 55-gal (208-l) drums within weather-resistant structures. Waste containing less than 100 nCi/g of transuranics can be removed from the present storage and be buried, while waste containing greater than 100 nCi/g of transuranics must continue to be retrievably stored. To make the measurements needed to determine which drums can be buried, a transuranic Neutron Interrogation Assay System (NIAS) has been developed at Los Alamos National Laboratory; it can make the needed measurements much faster than previous techniques, which involved γ-ray spectroscopy and are reliable but time consuming. Therefore, a validation study has been planned to determine the ability of the NIAS to make adequate measurements. The validation of the NIAS will be based on a paired comparison of a sample of measurements made by the previous techniques and the NIAS. The purpose of this report is to describe the proposed sampling plan and the statistical analyses needed to validate the NIAS. 5 references, 4 figures, 5 tables

  15. Tank 241-AZ-102 Privatization Push Mode Core Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    RASMUSSEN, J.H.

    1999-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AZ-102. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AZ-102 required to satisfy the Data Quality Objectives For TWRS Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For High-Level Waste Feed Batch X (HLW DQO) (Nguyen 1999a), Data Quality Objectives For TWRS Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For Low-Activity Waste Feed Batch X (LAW DQO) (Nguyen 1999b), Low Activity Waste and High Level Waste Feed Data Quality Objectives (L and H DQO) (Patello et al. 1999) and Characterization Data Needs for Development, Design, and Operation of Retrieval Equipment Developed through the Data Quality Objective Process (Equipment DQO) (Bloom 1996). The Tank Characterization Technical Sampling Basis document (Brown et al. 1998) indicates that these issues, except the Equipment DQO, apply to tank 241-AZ-102 for this sampling event. The Equipment DQO is applied for shear strength measurements of the solids segments only. Poppiti (1999) requires additional americium-241 analyses of the sludge segments. Brown et al. (1998) also identify safety screening, regulatory issues and provision of samples to the Privatization Contractor(s) as applicable issues for this tank. However, these issues will not be addressed via this sampling event. Reynolds et al. (1999) concluded that information from previous sampling events was sufficient to satisfy the safety screening requirements for tank 241-AZ-102. Push mode core samples will be obtained from risers 15C and 24A to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives. The 222-S Laboratory will extrude core samples, composite the liquids and solids, perform chemical analyses

  16. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique capable of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for the quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated with the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  17. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique capable of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for the quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated with the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  18. Location and Size Planning of Distributed Photovoltaic Generation in Distribution network System Based on K-means Clustering Analysis

    Science.gov (United States)

    Lu, Siqi; Wang, Xiaorong; Wu, Junyong

    2018-01-01

    The paper presents a method, based on a data-driven K-means clustering analysis algorithm, to generate planning scenarios for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected on the basis of different planning emphases after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
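    The scenario-generation step can be sketched with a plain Lloyd's k-means over daily operating profiles, each resulting centroid serving as a planning scenario weighted by the fraction of days it represents (a minimal illustration; the paper's data handling and objective evaluation are not reproduced):

```python
import numpy as np

def kmeans_scenarios(profiles, k=4, n_iter=50, seed=0):
    """Cluster daily operating profiles (rows) with Lloyd's k-means
    and return (centroids, weights): each centroid is a planning
    scenario, weighted by the fraction of days it represents."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(profiles), size=k, replace=False)
    centers = profiles[idx].astype(float)
    for _ in range(n_iter):
        # assign every profile to its nearest centroid
        d = np.linalg.norm(profiles[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned profiles
        for j in range(k):
            if np.any(labels == j):
                centers[j] = profiles[labels == j].mean(axis=0)
    weights = np.bincount(labels, minlength=k) / len(profiles)
    return centers, weights
```

    The weighted centroids can then stand in for the full year of profiles when evaluating losses, costs and voltage offset for each candidate siting and sizing plan.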

  19. Field Investigation Plan for 1301-N and 1325-N Facilities Sampling to Support Remedial Design

    International Nuclear Information System (INIS)

    Weiss, S. G.

    1998-01-01

    This field investigation plan (FIP) provides for the sampling and analysis activities supporting the remedial design planning for the planned removal action for the 1301-N and 1325-N Liquid Waste Disposal Facilities (LWDFs), which are treatment, storage, and disposal (TSD) units (cribs/trenches). The planned removal action involves excavation, transportation, and disposal of contaminated material at the Environmental Restoration Disposal Facility (ERDF). An engineering study (BHI 1997) was performed to develop and evaluate various options that are predominantly influenced by the volume of high- and low-activity contaminated soil requiring removal. The study recommended that additional sampling be performed to supplement historical data for use in the remedial design

  20. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-sequencing (RNA-seq) experiments have been widely applied to transcriptome studies in recent years. Such experiments are still relatively costly; as a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the commonly applied tests for differential expression analysis. In addition, the false discovery rate (FDR), instead of the family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method, which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that for RNA-seq data with sample sizes calculated by our method, the actual power of several commonly applied tests for differential expression is close to the desired power. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package, ssizeRNA, that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
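    The general idea of translating an FDR target into a per-gene significance level and then sizing the experiment can be sketched with a z-test approximation (this is a simplified stand-in in the spirit of Jung's 2005 formula, not the paper's voom-based procedure; all parameter values below are illustrative):

```python
import math
from statistics import NormalDist

def samplesize_fdr(effect, m, prop_de, fdr=0.05, power_target=0.8):
    """Per-group sample size for a two-group comparison of m genes so
    that average power across the prop_de * m truly DE genes reaches
    power_target while controlling FDR. Under the z-test
    approximation, the per-gene significance level is
        alpha* = m1 * power * fdr / (m0 * (1 - fdr)),
    and n solves the usual two-sample formula at alpha*."""
    nd = NormalDist()
    m1 = prop_de * m           # expected number of DE genes
    m0 = m - m1                # expected number of null genes
    alpha_star = m1 * power_target * fdr / (m0 * (1.0 - fdr))
    z_alpha = nd.inv_cdf(1.0 - alpha_star / 2.0)
    z_beta = nd.inv_cdf(power_target)
    # effect is the standardized per-gene effect size
    return math.ceil(2.0 * (z_alpha + z_beta) ** 2 / effect ** 2)
```

    Because alpha* shrinks as the DE fraction falls, experiments screening mostly null genes require markedly more replicates for the same average power.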

  1. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, because of the difficulty in attaining reliable estimates. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  2. Comparison of CT based-CTV plan and CT based-ICRU38 plan in brachytherapy planning of uterine cervix cancer

    International Nuclear Information System (INIS)

    Shim, Jin Sup; Jo, Jung Kun; Si, Chang Keun; Lee, Ki Ho; Lee, Du Hyun; Choi, Kye Suk

    2004-01-01

    Despite improvements in CT and MRI radiodiagnosis and in radiation therapy planning, the 2D film-based ICRU 38 planning system is still widely used. A three-dimensional, CT image-based ICR plan not only provides tumor and normal tissue doses but also supports DVH information. In this study, we generated plans prescribing the goal dose to the CTV (CTV plan) and to the ICRU 38 reference point (ICRU 38 plan) using CT images, compared tumor, rectal and bladder doses between the two plans, and analyzed the DVHs. Eleven patients treated with Ir-192 HDR brachytherapy were sampled. The ICR plan was established after 40 Gy of external radiation therapy. All patients were scanned with a CT simulator, and the PLATO (Nucletron) v.14.2 planning system was used. The CTV, rectum and bladder were contoured on the CT images, and plans prescribing the dose to the CTV (CTV plan) and to point A (ICRU 38 plan) were established. For the 11 patients, the CTV volume (mean±SD) was 21.8±26.6 cm3, the rectal volume 60.9±25.0 cm3 and the bladder volume 116.1±40.1 cm3. The volume receiving 100% of the dose was 126.7±18.9 cm3 in the ICRU plan and 98.2±74.5 cm3 in the CTV plan. In the ICRU plan, the 22.0 cm3 CTV of one patient whose residual tumor exceeded 4 cm was not covered by the 100% isodose, whereas the tumor volumes (12.9±5.9 cm3) of the 8 patients with residual tumors below 4 cm received 100% of the dose. The bladder dose (as recommended by ICRU 38) was 90.1±21.3% in the ICRU plan and 68.7±26.6% in the CTV plan, and the rectal dose was 86.4±18.3% and 76.9±15.6%, respectively. Maximum bladder and rectal doses were 137.2±5.9% and 101.1±41.8% in the ICRU plan, and 107.6±47.9% and 86.9±30.8% in the CTV plan. The CTV plan therefore delivered lower doses to normal tissue than the ICRU plan; in the one patient whose residual tumor exceeded 4 cm, however, the normal tissue dose in the CTV plan was markedly higher than the critical dose. The rectal volume receiving over 80% of the dose (V80rec) was 1.8±2.4 cm3 in the ICRU plan and 0.7±1.0 cm3 in the CTV plan. The corresponding bladder volume (V80bla) was 12.2±8.9 cm3 in the ICRU plan and 3.5±4.1 cm3 in the CTV plan. Likewise, CTV

  3. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. There is therefore a pressing need for a practical way to calculate the sample size of a three-arm equivalence trial. The primary endpoints of a clinical trial are not always continuous; they may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Misusing a Poisson model for negative binomial data can therefore easily cost up to 20% of the power, depending on the value of the dispersion parameter.
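
    To see why the dispersion parameter dominates the sample size, consider a normal-approximation sketch (a simplified two-arm equivalence calculation, not the authors' three-arm derivation; the rate, margin, and dispersion values below are hypothetical illustration choices):

```python
from math import ceil
from statistics import NormalDist

def equivalence_n_per_arm(mu, margin, alpha=0.05, power=0.8, dispersion=0.0):
    # Normal-approximation sample size per arm for an equivalence (TOST)
    # comparison of two event rates, assuming the true difference is zero.
    # Var(Y) = mu under Poisson; mu + dispersion * mu**2 under negative binomial.
    var = mu + dispersion * mu ** 2
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(1 - (1 - power) / 2)
    return ceil(z ** 2 * 2 * var / margin ** 2)

n_poisson = equivalence_n_per_arm(mu=2.0, margin=0.5)                 # Poisson assumption
n_negbin = equivalence_n_per_arm(mu=2.0, margin=0.5, dispersion=0.5)  # NB with k = 0.5
```

    Since the negative binomial variance is μ + kμ², the required n inflates by roughly the factor 1 + kμ relative to the Poisson assumption, which is why a Poisson-based design is underpowered whenever k > 0.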

  4. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  5. Impact of Spot Size and Beam-Shaping Devices on the Treatment Plan Quality for Pencil Beam Scanning Proton Therapy

    International Nuclear Information System (INIS)

    Moteabbed, Maryam; Yock, Torunn I.; Depauw, Nicolas; Madden, Thomas M.; Kooy, Hanne M.; Paganetti, Harald

    2016-01-01

    Purpose: This study aimed to assess the clinical impact of spot size and the addition of apertures and range compensators on the treatment quality of pencil beam scanning (PBS) proton therapy and to define when PBS could improve on passive scattering proton therapy (PSPT). Methods and Materials: The patient cohort included 14 pediatric patients treated with PSPT. Six PBS plans were created and optimized for each patient using 3 spot sizes (∼12-, 5.4-, and 2.5-mm median sigma at isocenter for 90- to 230-MeV range) and adding apertures and compensators to plans with the 2 larger spots. Conformity and homogeneity indices, dose-volume histogram parameters, equivalent uniform dose (EUD), normal tissue complication probability (NTCP), and integral dose were quantified and compared with the respective PSPT plans. Results: The results clearly indicated that PBS with the largest spots does not necessarily offer a dosimetric or clinical advantage over PSPT. With comparable target coverage, the mean dose (D_mean) to healthy organs was on average 6.3% larger than PSPT when using this spot size. However, adding apertures to plans with large spots improved the treatment quality by decreasing the average D_mean and EUD by up to 8.6% and 3.2% of the prescribed dose, respectively. Decreasing the spot size further improved all plans, lowering the average D_mean and EUD by up to 11.6% and 10.9% compared with PSPT, respectively, and eliminated the need for beam-shaping devices. The NTCP decreased with spot size and addition of apertures, with maximum reduction of 5.4% relative to PSPT. Conclusions: The added benefit of using PBS strongly depends on the delivery configurations. Facilities limited to large spot sizes (>∼8 mm median sigma at isocenter) are recommended to use apertures to reduce treatment-related toxicities, at least for complex and/or small tumors.

  6. Impact of Spot Size and Beam-Shaping Devices on the Treatment Plan Quality for Pencil Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Moteabbed, Maryam, E-mail: mmoteabbed@partners.org; Yock, Torunn I.; Depauw, Nicolas; Madden, Thomas M.; Kooy, Hanne M.; Paganetti, Harald

    2016-05-01

    Purpose: This study aimed to assess the clinical impact of spot size and the addition of apertures and range compensators on the treatment quality of pencil beam scanning (PBS) proton therapy and to define when PBS could improve on passive scattering proton therapy (PSPT). Methods and Materials: The patient cohort included 14 pediatric patients treated with PSPT. Six PBS plans were created and optimized for each patient using 3 spot sizes (∼12-, 5.4-, and 2.5-mm median sigma at isocenter for 90- to 230-MeV range) and adding apertures and compensators to plans with the 2 larger spots. Conformity and homogeneity indices, dose-volume histogram parameters, equivalent uniform dose (EUD), normal tissue complication probability (NTCP), and integral dose were quantified and compared with the respective PSPT plans. Results: The results clearly indicated that PBS with the largest spots does not necessarily offer a dosimetric or clinical advantage over PSPT. With comparable target coverage, the mean dose (D_mean) to healthy organs was on average 6.3% larger than PSPT when using this spot size. However, adding apertures to plans with large spots improved the treatment quality by decreasing the average D_mean and EUD by up to 8.6% and 3.2% of the prescribed dose, respectively. Decreasing the spot size further improved all plans, lowering the average D_mean and EUD by up to 11.6% and 10.9% compared with PSPT, respectively, and eliminated the need for beam-shaping devices. The NTCP decreased with spot size and addition of apertures, with maximum reduction of 5.4% relative to PSPT. Conclusions: The added benefit of using PBS strongly depends on the delivery configurations. Facilities limited to large spot sizes (>∼8 mm median sigma at isocenter) are recommended to use apertures to reduce treatment-related toxicities, at least for complex and/or small tumors.

  7. Compatibility Grab Sampling and Analysis Plan for FY 2000

    International Nuclear Information System (INIS)

    SASAKI, L.M.

    1999-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for grab samples obtained to address waste compatibility. It is written in accordance with requirements identified in Data Quality Objectives for Tank Farms Waste Compatibility Program (Mulkey et al. 1999) and Tank Farm Waste Transfer Compatibility Program (Fowler 1999). In addition to analyses to support Compatibility, the Waste Feed Delivery program has requested that tank samples obtained for Compatibility also be analyzed to confirm the high-level waste and/or low-activity waste envelope(s) for the tank waste (Baldwin 1999). The analytical requirements to confirm waste envelopes are identified in Data Quality Objectives for TWRS Privatization Phase I: Confirm Tank T is an Appropriate Feed Source for Low-Activity Waste Feed Batch X (Nguyen 1999a) and Data Quality Objectives for RPP Privatization Phase I: Confirm Tank T is an Appropriate Feed Source for High-Level Waste Feed Batch X (Nguyen 1999b)

  8. Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.

    Science.gov (United States)

    Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo

    2014-08-01

    Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of Salmonella contamination on table eggs are low, which severely limits the sensitivity of the sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10⁰-10¹, 10¹-10², 10²-10³ CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools versus 12% using ISO 6579. The results were then used in a Monte Carlo simulation to estimate the smallest number of sample units that must be tested to have 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each must be tested by ISO 6579 to reach this confidence level, whereas the minimum number of pools drops to 8 pools of 9 eggs each when real-time PCR is used as the analytical method. This result underlines the importance of adopting analytical methods with higher sensitivity in order to improve the efficiency of sampling plans and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
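
    As a back-of-the-envelope companion to the Monte Carlo result, the number of pools needed for a given confidence can be related to the per-pool detection sensitivity (a deliberate simplification that, unlike the authors' simulation, ignores prevalence and pool composition, so its absolute numbers differ from the 16 and 8 pools reported above):

```python
from math import ceil, log

def pools_needed(sensitivity, confidence=0.95):
    # Smallest n with 1 - (1 - sensitivity)**n >= confidence: the number of
    # pools to test so a contaminated lot is detected at the given confidence,
    # assuming every tested pool contains the contamination and tests are independent.
    return ceil(log(1 - confidence) / log(1 - sensitivity))

n_pcr = pools_needed(0.40)  # real-time PCR sensitivity at the lowest level
n_iso = pools_needed(0.12)  # ISO 6579 sensitivity at the lowest level
```

    Even in this crude form, the roughly threefold sensitivity gain of real-time PCR translates directly into far fewer pools for the same lot-acceptance confidence.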

  9. Crystallite size variation of TiO₂ samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO₂) is a semiconducting oxide that may occur in mixed phase or in the distinct phases brookite, anatase and rutile. In this work we studied the influence of the residence time at a given temperature on the physical properties of TiO₂ powder. After synthesis, the powder was divided into samples that were heat treated at 650 °C, with a ramp of up to 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by x-ray diffraction. Analysis of the diffraction patterns showed that from a residence time of 5 hours onward the two distinct phases anatase and rutile coexist. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
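
    The average crystallite size referred to above is conventionally extracted from x-ray diffraction peak broadening with the Scherrer equation; a minimal sketch (the peak position and FWHM values are hypothetical, while the Cu Kα wavelength and shape factor K = 0.9 are the usual assumptions):

```python
from math import cos, radians

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    # Scherrer equation: D = K * lambda / (beta * cos(theta)),
    # with beta the peak FWHM in radians and theta half the 2-theta position.
    beta = radians(fwhm_deg)
    theta = radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * cos(theta))

# narrower peaks after longer residence times imply larger crystallites
d_broad = scherrer_size(0.40, 25.3)  # hypothetical anatase (101) peak, FWHM 0.40 deg
d_sharp = scherrer_size(0.20, 25.3)  # same peak, narrower after longer treatment
```

    Halving the FWHM doubles the inferred size, which is the qualitative trend the abstract reports with increasing residence time.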

  10. Planning Related to the Curation and Processing of Returned Martian Samples

    Science.gov (United States)

    McCubbin, F. M.; Harrington, A. D.

    2018-04-01

    Many of the planning activities in the NASA Astromaterials Acquisition and Curation Office at JSC are centered around Mars Sample Return. The importance of contamination knowledge and the benefits of a mobile/modular receiving facility are discussed.

  11. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
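
    The two quantities the abstract stresses, the expected value and standard error of the sampling distribution of the mean, can be demonstrated with a short simulation (stdlib only; the population SD of 10 and the sample sizes 4 and 64 are arbitrary choices):

```python
import random
from statistics import fmean, stdev

random.seed(42)

def se_of_mean(pop_sd, n, trials=10000):
    # Empirical standard error: the SD of sample means over many samples
    # of size n drawn from a normal population with mean 0 and SD pop_sd.
    means = [fmean(random.gauss(0, pop_sd) for _ in range(n))
             for _ in range(trials)]
    return stdev(means)

# the CLT-based theory predicts the standard error shrinks as pop_sd / sqrt(n)
se_small = se_of_mean(10, 4)   # theory: 10 / 2 = 5.0
se_large = se_of_mean(10, 64)  # theory: 10 / 8 = 1.25
```

    Quadrupling the sample size halves the spread of the sampling distribution, the relationship students most often misjudge.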

  12. Planning for the Collection and Analysis of Samples of Martian Granular Materials Potentially to be Returned by Mars Sample Return

    Science.gov (United States)

    Carrier, B. L.; Beaty, D. W.

    2017-12-01

    NASA's Mars 2020 rover is scheduled to land on Mars in 2021 and will be equipped with a sampling system capable of collecting rock cores, as well as a specialized drill bit for collecting unconsolidated granular material. A key mission objective is to collect a set of samples that have enough scientific merit to justify returning to Earth. In the case of granular materials, we would like to catalyze community discussion on what we would do with these samples if they arrived in our laboratories, as input to decision-making related to sampling the regolith. Numerous scientific objectives have been identified which could be achieved or significantly advanced via the analysis of martian rocks, "regolith," and gas samples. The term "regolith" has more than one definition, including one that is general and one that is much more specific. For the purpose of this analysis we use the term "granular materials" to encompass the most general meaning and restrict "regolith" to a subset of that. Our working taxonomy includes the following: 1) globally sourced airfall dust (dust); 2) saltation-sized particles (sand); 3) locally sourced decomposed rock (regolith); 4) crater ejecta (ejecta); and 5) other. Analysis of martian granular materials could serve to advance our understanding of areas including habitability and astrobiology, surface-atmosphere interactions, chemistry, mineralogy, geology and environmental processes. Results of these analyses would also provide input into planning for future human exploration of Mars, elucidating possible health and mechanical hazards caused by the martian surface material, as well as providing valuable information regarding available resources for ISRU and civil engineering purposes. Results would also be relevant to matters of planetary protection and ground-truthing orbital observations. We will present a preliminary analysis of the following, in order to generate community discussion and feedback on all issues relating to: What are the

  13. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of

  14. Impact of spot size on plan quality of spot scanning proton radiosurgery for peripheral brain lesions

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dongxu, E-mail: dongxu-wang@uiowa.edu; Dirksen, Blake; Hyer, Daniel E.; Buatti, John M.; Sheybani, Arshin; Dinges, Eric; Felderman, Nicole; TenNapel, Mindi; Bayouth, John E.; Flynn, Ryan T. [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa 52242 (United States)

    2014-12-15

    Purpose: To determine the plan quality of proton spot scanning (SS) radiosurgery as a function of spot size (in-air sigma) in comparison to x-ray radiosurgery for treating peripheral brain lesions. Methods: Single-field optimized (SFO) proton SS plans with sigma ranging from 1 to 8 mm, cone-based x-ray radiosurgery (Cone), and x-ray volumetric modulated arc therapy (VMAT) plans were generated for 11 patients. Plans were evaluated using secondary cancer risk and brain necrosis normal tissue complication probability (NTCP). Results: For all patients, secondary cancer is a negligible risk compared to brain necrosis NTCP. Secondary cancer risk was lower in proton SS plans than in photon plans regardless of spot size (p = 0.001). Brain necrosis NTCP increased monotonically from an average of 2.34/100 (range 0.42/100–4.49/100) to 6.05/100 (range 1.38/100–11.6/100) as sigma increased from 1 to 8 mm, compared to the average of 6.01/100 (range 0.82/100–11.5/100) for Cone and 5.22/100 (range 1.37/100–8.00/100) for VMAT. An in-air sigma less than 4.3 mm was required for proton SS plans to reduce NTCP over photon techniques for the cohort of patients studied with statistical significance (p = 0.0186). Proton SS plans with in-air sigma larger than 7.1 mm had significantly greater brain necrosis NTCP than photon techniques (p = 0.0322). Conclusions: For treating peripheral brain lesions—where proton therapy would be expected to have the greatest depth-dose advantage over photon therapy—the lateral penumbra strongly impacts the SS plan quality relative to photon techniques: proton beamlet sigma at patient surface must be small (<7.1 mm for three-beam single-field optimized SS plans) in order to achieve comparable or smaller brain necrosis NTCP relative to photon radiosurgery techniques. Achieving such small in-air sigma values at low energy (<70 MeV) is a major technological challenge in commercially available proton therapy systems.

  15. Impact of spot size on plan quality of spot scanning proton radiosurgery for peripheral brain lesions

    International Nuclear Information System (INIS)

    Wang, Dongxu; Dirksen, Blake; Hyer, Daniel E.; Buatti, John M.; Sheybani, Arshin; Dinges, Eric; Felderman, Nicole; TenNapel, Mindi; Bayouth, John E.; Flynn, Ryan T.

    2014-01-01

    Purpose: To determine the plan quality of proton spot scanning (SS) radiosurgery as a function of spot size (in-air sigma) in comparison to x-ray radiosurgery for treating peripheral brain lesions. Methods: Single-field optimized (SFO) proton SS plans with sigma ranging from 1 to 8 mm, cone-based x-ray radiosurgery (Cone), and x-ray volumetric modulated arc therapy (VMAT) plans were generated for 11 patients. Plans were evaluated using secondary cancer risk and brain necrosis normal tissue complication probability (NTCP). Results: For all patients, secondary cancer is a negligible risk compared to brain necrosis NTCP. Secondary cancer risk was lower in proton SS plans than in photon plans regardless of spot size (p = 0.001). Brain necrosis NTCP increased monotonically from an average of 2.34/100 (range 0.42/100–4.49/100) to 6.05/100 (range 1.38/100–11.6/100) as sigma increased from 1 to 8 mm, compared to the average of 6.01/100 (range 0.82/100–11.5/100) for Cone and 5.22/100 (range 1.37/100–8.00/100) for VMAT. An in-air sigma less than 4.3 mm was required for proton SS plans to reduce NTCP over photon techniques for the cohort of patients studied with statistical significance (p = 0.0186). Proton SS plans with in-air sigma larger than 7.1 mm had significantly greater brain necrosis NTCP than photon techniques (p = 0.0322). Conclusions: For treating peripheral brain lesions—where proton therapy would be expected to have the greatest depth-dose advantage over photon therapy—the lateral penumbra strongly impacts the SS plan quality relative to photon techniques: proton beamlet sigma at patient surface must be small (<7.1 mm for three-beam single-field optimized SS plans) in order to achieve comparable or smaller brain necrosis NTCP relative to photon radiosurgery techniques. Achieving such small in-air sigma values at low energy (<70 MeV) is a major technological challenge in commercially available proton therapy systems

  16. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

  17. Supplement to the UMTRA Project water sampling and analysis plan, Maybell, Colorado

    International Nuclear Information System (INIS)

    1995-09-01

    This water sampling and analysis plan (WSAP) supplement supports the regulatory and technical basis for water sampling at the Maybell, Colorado, Uranium Mill Tailings Remedial Action (UMTRA) Project site, as defined in the 1994 WSAP document for Maybell (DOE, 1994a). Further, this supplement serves to confirm our present understanding of the site relative to the hydrogeology and contaminant distribution as well as our intention to continue to use the sampling strategy as presented in the 1994 WSAP document for Maybell. Ground water and surface water monitoring activities are derived from the US Environmental Protection Agency regulations in 40 CFR Part 192 (1994) and 60 FR 2854 (1995). Sampling procedures are guided by the UMTRA Project standard operating procedures (JEG, n.d.), the Technical Approach Document (DOE, 1989), and the most effective technical approach for the site. Additional site-specific documents relevant to the Maybell site are the Maybell Baseline Risk Assessment (currently in progress), the Maybell Remedial Action Plan (RAP) (DOE, 1994b), and the Maybell Environmental Assessment (DOE, 1995)

  18. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM) measurements … but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical opportunities for elucidating the origins and biogeochemical properties of FDOM …

  19. A Social Media Marketing Plan for a Medium-sized Consumer Goods Company

    OpenAIRE

    Okolie, Emeka

    2013-01-01

    The objective of this study is to develop a social media marketing plan for the case company to integrate it into its existing marketing communications. The case company of this study is a medium-sized consumer goods producing company that advertises its brand and products using traditional methods of advertising (radio, television, flyers and event promotion). At the moment, these methods seem to be lacking in efficiency and effectiveness caused by the saturation of marketing information w...

  20. Mechanism for Particle Transport and Size Sorting via Low-Frequency Vibrations

    Science.gov (United States)

    Sherrit, Stewart; Scott, James S.; Bar-Cohen, Yoseph; Badescu, Mircea; Bao, Xiaoqi

    2010-01-01

    There is a need for effective sample handling tools to deliver and sort particles for analytical instruments that are planned for use in future NASA missions. Specifically, a need exists for a compact mechanism that allows transporting and sieving particle sizes of powdered cuttings and soil grains that may be acquired by sampling tools such as a robotic scoop or drill. The required tool needs to be low mass and compact to operate from such platforms as a lander or rover. This technology also would be applicable to sample handling when transporting samples to analyzers and sorting particles by size.

  1. NOAA ESRI Grid - sediment size predictions model in New York offshore planning area from Biogeography Branch

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents sediment size predictions from a sediment spatial model developed for the New York offshore spatial planning area. The model also includes...

  2. ¹⁴CO₂ analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (¹⁴C) analysis of CO₂ respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO₂, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO₂ in molecular sieve cartridges (MSCs) for accelerator mass spectrometry ¹⁴C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO₂ recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample sizes, we show that our MSCs can be used for CO₂ samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO₂ samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO₂ sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ¹³C values during the passive sampling with the depth samplers.
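
    The quoted ages follow from the reported F14C process blanks via the conventional radiocarbon-age relation, age = −8033 · ln(F14C), where 8033 yr is the Libby mean life:

```python
from math import log

def f14c_to_age(f14c):
    # conventional radiocarbon age (yr BP) from fraction modern carbon,
    # using the Libby mean life of 8033 yr
    return -8033 * log(f14c)

age_a = f14c_to_age(0.004)  # blank of ~0.004 F14C
age_b = f14c_to_age(0.003)  # blank of ~0.003 F14C
```

    This reproduces, to the paper's rounding, the ∼44,000 and ∼47,000 yr BP equivalents quoted above.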

  3. The Viking X ray fluorescence experiment - Sampling strategies and laboratory simulations. [Mars soil sampling

    Science.gov (United States)

    Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.

    1977-01-01

    Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. An additional six samples at least are planned for acquisition in the remaining Extended Mission (to January 1979) for each lander. All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.

  4. Engineering Task Plan to Expand the Environmental Operational Envelope of Core Sampling

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This Engineering Task Plan authorizes the development of an Alternative Generation and Analysis (AGA). The AGA will determine how to expand the environmental operating envelope during core sampling operations

  5. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource composed of a finite number of noisy stimulus samples. The model predicts the invariance of ∑d′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
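
    The sample-size model's invariance prediction, and the way attention capture merely redistributes the fixed resource across items, can be sketched in a few lines (the pool of 100 samples, the unit sensitivity, and the 0.4 capture weight are all hypothetical illustration values):

```python
from math import sqrt

def dprimes(total_samples, set_size, unit_sensitivity=0.5, attention_weight=None):
    # Sample-size model: a fixed pool of noisy samples is divided across the
    # display and each item's d' grows with the square root of its share.
    # With attention_weight set, one item captures that fraction of the pool
    # (attention-weighted variant); the remainder is split evenly.
    if attention_weight is None:
        shares = [total_samples / set_size] * set_size
    else:
        rest = total_samples * (1 - attention_weight) / (set_size - 1)
        shares = [total_samples * attention_weight] + [rest] * (set_size - 1)
    return [unit_sensitivity * sqrt(s) for s in shares]

# equal split: the sum of squared sensitivities is the same at every set size
sums = [sum(d ** 2 for d in dprimes(100, k)) for k in (1, 2, 4, 8)]

# attention capture boosts one item at the expense of the others,
# while the total resource (and hence the sum of d'^2) stays fixed
captured = dprimes(100, 4, attention_weight=0.4)
```

    The arithmetic makes the paper's distinction concrete: capture changes how sensitivity is distributed over items, not how much total resource exists.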

  6. Probability Sampling - A Guideline for Quantitative Health Care ...

    African Journals Online (AJOL)

    A more direct definition is the method used for selecting a given ... description of the chosen population, the sampling procedure giving ... target population, precision, and stratification. The ... survey estimates, it is recommended that researchers first analyze a .... The optimum sample size has a relation to the type of planned ...

  7. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
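
    The point that large samples make standard tests reject essentially any real data can be illustrated with a stdlib-only sketch (a simple large-sample skewness z-test, not the specific tests used in the study; the 0.3-weighted exponential admixture is an arbitrary way to make the data very mildly non-normal):

```python
import random
from math import sqrt
from statistics import NormalDist, fmean

random.seed(7)

def skewness_pvalue(xs):
    # Large-sample z-test of zero skewness: z = g1 * sqrt(n / 6),
    # where g1 is the sample skewness coefficient.
    n = len(xs)
    m = fmean(xs)
    m2 = fmean([(x - m) ** 2 for x in xs])
    m3 = fmean([(x - m) ** 3 for x in xs])
    z = (m3 / m2 ** 1.5) * sqrt(n / 6)
    return 2 * (1 - NormalDist().cdf(abs(z)))

def mildly_skewed(n):
    # normal data with a small exponential admixture (skewness ~ 0.05)
    return [random.gauss(0, 1) + 0.3 * random.expovariate(1) for _ in range(n)]

p_small = skewness_pvalue(mildly_skewed(200))      # usually not significant
p_large = skewness_pvalue(mildly_skewed(100_000))  # all but certainly significant
```

    A deviation far too small to matter in practice sails through the test at n = 200 but is rejected decisively at n = 100,000, which is why graphical methods are the more useful check at large n.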

  8. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.

  9. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes in performance, sales, markets, risks, social relations, or public opinion constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected, and nonchanges are erroneously perceived as increases, when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  10. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
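A minimal sketch of the pilot-based idea (my illustration, not the authors' procedure): resample hypothetical pilot groups up to a candidate sample size, compute the Kruskal-Wallis H statistic, and estimate power as the rejection rate against the chi-square critical value. The pilot data, the critical value (df = 2, alpha = 0.05), and the simulation settings are all assumptions.

```python
import random

def kruskal_h(groups):
    """Kruskal-Wallis H statistic with midranks for ties (no tie correction)."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    rank_of, i = {}, 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0  # midrank of the tie run
        i = j
    s = sum(sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3.0 * (n + 1)

def kw_power(pilot, n_per_group, sims=400, crit=5.991, seed=7):
    """Monte Carlo power: bootstrap each group from its pilot sample and
    reject when H exceeds the chi-square critical value (df = k - 1)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        groups = [[rng.choice(g) for _ in range(n_per_group)] for g in pilot]
        if kruskal_h(groups) > crit:
            rejections += 1
    return rejections / sims

# Hypothetical pilot data for three treatment groups (illustrative only).
pilot = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [4, 5, 6, 7, 8]]
for n in (5, 15, 30):
    print(n, kw_power(pilot, n))  # power grows with the planned sample size
```

Because resampling the pilot data makes no assumption about the shape of the underlying distributions, this mirrors the completely nonparametric setting the abstract describes.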

  11. Sampling and systematic error in a burrow index to measure relative population size in the common vole

    Czech Academy of Sciences Publication Activity Database

    Lisická, L.; Heroldová, Marta; Losík, J.; Tkadlec, Emil

    -, supp. (2006), s. 81 ISSN 1825-5272. [Rodens & Spatium /10./. 24.07.2006-28.07.2006, Parma] Institutional research plan: CEZ:AV0Z60930519 Keywords : common vole * population size Subject RIV: EH - Ecology, Behaviour

  12. Chemical Hygiene Plan for Onsite Measurement and Sample Shipping Facility Activities

    International Nuclear Information System (INIS)

    Price, W.H.

    1998-01-01

    This chemical hygiene plan presents the requirements established to ensure the protection of employee health while performing work in mobile laboratories, the sample shipping facility, and at the onsite radiological counting facility. This document presents the measures to be taken to promote safe work practices and to minimize worker exposure to hazardous chemicals. Specific hazardous chemicals present in the mobile laboratories, the sample shipping facility, and in the radiological counting facility are presented in Appendices A through G

  13. UMTRA project water sampling and analysis plan -- Shiprock, New Mexico

    International Nuclear Information System (INIS)

    1994-02-01

    A water sampling and analysis plan (WSAP) is required for each U.S. Department of Energy (DOE) Uranium Mill Tailings Remedial Action (UMTRA) Project site to provide a basis for ground water and surface water sampling at disposal and former processing sites. This WSAP identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the monitoring stations at the Shiprock, New Mexico, UMTRA Project site on the Navaho Reservation. The purposes of the water sampling at Shiprock for fiscal year (FY) 1994 are to (1) collect water quality data at new monitoring locations in order to build a defensible statistical data base, (2) monitor plume movement on the terrace and floodplain, and (3) monitor the impact of alluvial ground water discharge into the San Juan River. The third activity is important because the community of Shiprock withdraws water from the San Juan River directly across from the contaminated alluvial floodplain below the abandoned uranium mill tailings processing site.

  14. Supplement to the UMTRA Project water sampling and analysis plan, Monument Valley, Arizona

    International Nuclear Information System (INIS)

    1995-09-01

    This water sampling and analysis plan (WSAP) supplement supports the regulatory and technical basis for water sampling at the Riverton, Wyoming, Uranium Mill Tailings Remedial Action (UMTRA) Project site, as defined in the 1994 WSAP document for Riverton (DOE, 1994). Further, the supplement serves to confirm the Project's present understanding of the site relative to the hydrogeology and contaminant distribution as well as the intent to continue to use the sampling strategy as presented in the 1994 WSAP document for Riverton. Ground water and surface water monitoring activities are derived from the US Environmental Protection Agency regulations in 40 CFR Part 192 and 60 FR 2854. Sampling procedures are guided by the UMTRA Project standard operating procedures (JEG, n.d.), the Technical Approach Document (DOE, 1989), and the most effective technical approach for the site. Additional site-specific documents relevant to the Riverton site are the Riverton Baseline Risk Assessment (BLRA) (DOE, 1995a) and the Riverton Site Observational Work Plan (SOWP) (DOE, 1995b)

  15. C-018H Pre-Operational Baseline Sampling Plan

    International Nuclear Information System (INIS)

    Guzek, S.J.

    1993-01-01

    The objective of this task is to field characterize and sample the soil at selected locations along the proposed effluent line routes for Project C-018H. The overall purpose of this effort is to meet the proposed plan to discontinue the disposal of contaminated liquids into the Hanford soil column as described by DOE (1987). Detailed information describing the proposed transport pipeline route and the associated Kaiser Engineers Hanford Company (KEH) preliminary drawings (H288746...755), all inclusive, has been prepared by KEH (1992). The information developed from field monitoring and sampling will be utilized to characterize surface and subsurface soil along the proposed C-018H effluent pipeline and its associated facilities. Because existing contamination may be encountered, soil characterization will provide a construction preoperational baseline reference, develop personnel safety requirements, and determine the need for any changes in the proposed routes prior to construction of the pipeline

  16. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. In addition, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
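One of the summary statistics named here, the folded allele frequency spectrum, is straightforward to compute from unpolarized data; a short sketch (the function name and example counts are mine, not from PopSizeABC):

```python
def folded_afs(alt_counts, n_chrom):
    """Folded allele frequency spectrum: with the ancestral allele unknown,
    a SNP seen in c of n chromosomes is binned at its minor count min(c, n - c)."""
    spectrum = [0] * (n_chrom // 2 + 1)
    for c in alt_counts:
        spectrum[min(c, n_chrom - c)] += 1
    return spectrum

# Five SNPs typed on 10 chromosomes; counts of the (arbitrary) alternate allele.
print(folded_afs([1, 1, 9, 5, 2], 10))  # → [0, 3, 1, 0, 0, 1]
```

Folding is what makes the statistic usable on unpolarized SNP data: counts c and n − c are indistinguishable when the ancestral state is unknown, so both map to the same bin.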

  17. 10 CFR Appendix A to Subpart U of... - Sampling Plan for Enforcement Testing of Electric Motors

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Sampling Plan for Enforcement Testing of Electric Motors A Appendix A to Subpart U of Part 431 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY... to Subpart U of Part 431—Sampling Plan for Enforcement Testing of Electric Motors Step 1. The first...

  18. Double-Shell Tank (DST) Ventilation System Vapor Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    SASAKI, L.M.

    2000-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for vapor samples from the primary ventilation systems of the AN, AP, AW, and AY/AZ tank farms. Sampling will be performed in accordance with Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis (Air DQO) (Mulkey 1999). The sampling will verify whether current air emission estimates used in the permit application are correct and provide information for future air permit applications. Vapor samples will be obtained from tank farm ventilation systems, downstream from the tanks and upstream of any filtration. Samples taken in support of the DQO will consist of SUMMA™ canisters, triple sorbent traps (TSTs), sorbent tube trains (STTs), and polyurethane foam (PUF) samples. Particulate filter samples and tritium traps will be taken for radiation screening to allow the release of the samples for analysis. The following sections provide the general methodology and procedures to be used in the preparation, retrieval, transport, analysis, and reporting of results from the vapor samples

  19. Virtual planning in orthognathic surgery

    DEFF Research Database (Denmark)

    Stokbro, K.; Aagaard, E.; Torkov, P.

    2014-01-01

    and accuracy of three-dimensional (3D) virtual surgical planning of orthognathic procedures compared with the actual surgical outcome following orthognathic surgery reported in clinical trials. A systematic search of the current literature was conducted to identify clinical trials with a sample size of more...

  20. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August, 2003 in Hungary. The sampling was performed at two places simultaneously: in Budapest (urban site) and K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling, each with a 24-hour duration. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of S, Si, Ca, W, Zn, Pb and Fe were investigated in K-puszta and in Budapest. Average rates (shown in Table 1) of the elemental concentrations were calculated for each stage (in %) from the obtained distributions. The elements can be grouped into two parts on the basis of these data. The majority of the particles containing Fe, Si, Ca, (Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentrations in Budapest than in K-puszta (Fig. 1). The second group consisted of S, Pb and (W). The majority of these elements was found in the 0.25-1 μm size range and was much higher in Budapest than in K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to the above mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  1. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of naturally occurring and ground samples for toxicology studies. There are graphs which compare the volume distribution with the number distribution for naturally occurring dust, jet mill ground dust, and ball mill ground dust.
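The contrast between number- and volume-weighted distributions follows directly from volume scaling as the cube of diameter; a hedged sketch (the bin diameters and counts are invented for illustration, not from the presentation):

```python
def volume_fractions(diameters_um, counts):
    """Convert a number-weighted size distribution into volume fractions,
    assuming spherical particles (volume proportional to d**3)."""
    volumes = [c * d ** 3 for d, c in zip(diameters_um, counts)]
    total = sum(volumes)
    return [v / total for v in volumes]

# Eight 1-um particles hold the same total volume as one 2-um particle,
# so a count-dominated fine fraction can still be volume-minor.
print(volume_fractions([1.0, 2.0], [8, 1]))  # → [0.5, 0.5]
```

This is why the same milled dust can look fine-dominated by number yet coarse-dominated by volume.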

  2. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) was used as template for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.
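The richness, evenness and diversity indices compared in studies like this one are computed from an OTU count table; a minimal sketch with invented counts (not the study's data):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero OTU counts."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c)

def pielou_evenness(counts):
    """Pielou's evenness J' = H' / ln(richness)."""
    richness = sum(1 for c in counts if c)
    return shannon(counts) / math.log(richness)

small_extraction = [40, 30, 20, 10]          # fewer, more uneven OTUs
large_extraction = [20, 18, 16, 16, 15, 15]  # more OTUs, more even counts
print(round(shannon(small_extraction), 3), round(shannon(large_extraction), 3))
print(round(pielou_evenness(large_extraction), 3))
```

A larger extraction that recovers more OTUs at more even abundances scores higher on both H' and J', which is the pattern the abstract reports for the 10 g extractions.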

  3. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.
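The observation that average intraspecific distance stabilizes around n = 11-15 can be mimicked with a resampling sketch: draw increasingly large random subsets from a distance matrix and watch the replicate-to-replicate spread of the mean pairwise distance shrink. Everything below (the synthetic two-haplotype matrix, sizes, seed) is invented for illustration.

```python
import random

def mean_pairwise(sample, dist):
    """Mean pairwise distance within one random sample of individuals."""
    pairs = [(a, b) for i, a in enumerate(sample) for b in sample[i + 1:]]
    return sum(dist[a][b] for a, b in pairs) / len(pairs)

def estimate_spread(dist, size, reps=300, seed=3):
    """SD across replicate draws of the mean intraspecific distance."""
    rng = random.Random(seed)
    pop = range(len(dist))
    vals = [mean_pairwise([rng.choice(pop) for _ in range(size)], dist)
            for _ in range(reps)]
    m = sum(vals) / reps
    return (sum((v - m) ** 2 for v in vals) / reps) ** 0.5

# Synthetic population: 20 individuals in two haplotype groups; distance 0
# within a group and 1 between groups (a deliberately crude stand-in).
n = 20
cluster = [0] * 12 + [1] * 8
dist = [[0.0 if cluster[a] == cluster[b] else 1.0 for b in range(n)]
        for a in range(n)]

for size in (3, 8, 15):
    print(size, round(estimate_spread(dist, size), 3))
# The spread shrinks as sample size grows: the distance estimate
# stabilizes, mirroring the saturation reported above.
```

The same curve computed on real barcode distances would show where additional sampling stops changing the estimate, which is how a target sample size like 11-15 can be read off.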

  4. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  5. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco, spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were only lifted within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable with respect to their vertical grain size distributions and relative particle loads, although the two dust devils differed in their dimensions and intensities. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars; dust devils; planetary science; desert soils; atmosphere; grain sizes. Astrobiology 17, xxx-xxx.

  6. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying methods robust to non-normality (e.g., after Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.

  7. Aggregate Production Planning, Case Study in a Medium-Sized Industry of the Rubber Production Line in Ecuador.

    Science.gov (United States)

    Rosero-Mantilla, César; Sánchez-Sailema, Mayra; Sánchez-Rosero, Carlos; Galleguillos-Pozo, Rosa

    2017-06-01

    This research aims to improve the productivity in the rubber line of a medium-sized industry by increasing the production capacities through the use of the Aggregate Production Planning model. For this purpose an analysis of the production processes of the line was made and the aggregate plan was defined evaluating two strategies: Exact Production Plan (Zero Inventory) and Constant Workforce Plan (Vary Inventory) by studying the costs of both inventory maintenance and workforce. It was also determined how the installed capacity was used with the standards of the rubber line, and measures for decreasing production costs were proposed. It was shown that only 70% of the plant capacity was being used, so it would be possible to produce more units and to obtain a bigger market for the products of this line.
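The two strategies evaluated, exact production (zero inventory, workforce chases demand) and constant workforce (inventory absorbs variation), can be compared with a toy cost model; every figure below (demand, production rate, costs) is a hypothetical illustration, not data from the study:

```python
def chase_cost(demand, rate, hire, layoff, wage):
    """Exact-production plan: size the workforce each period to meet demand,
    paying hiring/layoff costs for every change and carrying no inventory.
    The initial workforce is assumed already sized to the first period."""
    cost = 0.0
    prev = demand[0] / rate
    for d in demand:
        w = d / rate
        cost += wage * w
        cost += hire * (w - prev) if w > prev else layoff * (prev - w)
        prev = w
    return cost

def level_cost(demand, rate, workers, wage, hold):
    """Constant-workforce plan: steady output; surpluses accumulate as
    inventory with a holding cost (shortages ignored in this sketch)."""
    inv, cost = 0.0, 0.0
    for d in demand:
        inv += workers * rate - d
        cost += wage * workers + hold * max(inv, 0.0)
    return cost

demand = [800, 1000, 1400, 1200, 900, 700]  # units per month (hypothetical)
rate = 100                                  # units per worker per month
avg_workers = sum(demand) / len(demand) / rate
print(round(chase_cost(demand, rate, hire=500, layoff=800, wage=1500)))
print(round(level_cost(demand, rate, avg_workers, wage=1500, hold=2.0)))
```

Which plan wins depends entirely on the ratio of hiring/layoff costs to holding costs; under the costs assumed here the level plan comes out cheaper, but that is exactly the trade-off an aggregate plan is built to evaluate.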

  8. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    Science.gov (United States)

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
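Why random samples miss rare, costly errors is easy to see with a Monte Carlo sketch (the claim counts and error counts below are illustrative assumptions, not figures from the study):

```python
import random

def miss_rate(n_claims, n_errors, sample_size, sims=500, seed=11):
    """Fraction of simulated random-sample audits that catch none of the
    erroneous claims; a 100%-of-claims audit catches all by construction."""
    rng = random.Random(seed)
    claims = range(n_claims)
    misses = 0
    for _ in range(sims):
        errors = set(rng.sample(claims, n_errors))
        audit = rng.sample(claims, sample_size)
        if errors.isdisjoint(audit):
            misses += 1
    return misses / sims

# 60 bad claims hidden among 40,000: a 300-claim audit often finds none
# (the expected miss rate is roughly exp(-300 * 60 / 40000) ≈ 0.64).
print(miss_rate(40000, 60, 300))
```

Scaling the error amounts accordingly, most of a six-figure error total can go undetected by a 300-claim sample, which is the pattern the study reports.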

  9. Compatibility grab sampling and analysis plan for fiscal year 1999

    International Nuclear Information System (INIS)

    SASAKI, L.M.

    1999-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for grab samples obtained to address waste compatibility. Analytical requirements are taken from two revisions of the Compatibility data quality objectives (DQOs). Revision 1 of the DQO (Fowler 1995) listed analyses to be performed to meet both safety and operational data needs for the Compatibility program. Revision 2A of the DQO (Mulkey and Miller 1998) addresses only the safety-related requirements; the operational requirements of Fowler (1995) have not been superseded by Mulkey and Miller (1998). Therefore, safety-related data needs are taken from Mulkey and Miller (1998) and operational-related data needs are taken from Fowler (1995). Ammonia and total alpha analyses are also performed in accordance with Fowler (1998a, 1998b)

  10. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3.  Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...

  11. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.

  12. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach by the specular reflection approximation. The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. The obtained values are in good agreement with the experimental results reported by Heremans et al. - Highlights: ► Effects of sample size on the galvanomagnetic properties of Bi nanowires were explained by the Parrott theorem, solving the Boltzmann Transport Equation. ► Transverse magnetoresistance (TMR) ratios were calculated by the specular reflection approximation. ► The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. ► The obtained values are in good agreement with the experimental results reported by Heremans et al.

  13. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...

  14. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.

  15. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    Science.gov (United States)

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, has simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority zones within the building, and sampling designs and strategies could then be developed based on those zones.

  16. 40 CFR Appendix Xi to Part 86 - Sampling Plans for Selective Enforcement Auditing of Light-Duty Vehicles

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Sampling Plans for Selective Enforcement Auditing of Light-Duty Vehicles XI Appendix XI to Part 86 Protection of Environment ENVIRONMENTAL... Enforcement Auditing of Light-Duty Vehicles 40% AQL Table 1—Sampling Plan Code Letter Annual sales of...

  17. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow the "minimal information" scenario.
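    The "random chance" scenario can be sketched as a small simulation: sampling stops once every code in a hypothetical population has been observed at least once. The function name, defaults, and the simplifying assumption of one common observation probability per code are all illustrative, not taken from the paper.

    ```python
    import random

    def saturation_sample_size(n_codes=20, p_observe=0.3, n_runs=500, seed=1):
        """Estimate the mean number of sampling steps needed to observe every
        code at least once, when each sampled source reveals each code
        independently with probability p_observe ("random chance" scenario)."""
        rng = random.Random(seed)
        total_steps = 0
        for _ in range(n_runs):
            seen = set()
            steps = 0
            while len(seen) < n_codes:
                steps += 1
                for code in range(n_codes):
                    if rng.random() < p_observe:
                        seen.add(code)
            total_steps += steps
        return total_steps / n_runs
    ```

    Lowering `p_observe` inflates the required sample size far more than adding codes does, which is consistent with the paper's main finding.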

  18. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the tendency of the coefficient of variation was to decrease as the sample size increased, while in the homogeneous gelatin-gel it remained constant at around 2.3% to 2.4%. The RVE was found to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area/image area and maximum air-cell height/image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
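    The convergence criterion described above, taking the RVE as the smallest sample size whose coefficient of variation has stabilized, can be sketched as follows. The tolerance and function names are assumptions for illustration; the study itself judged convergence from 25 replicates per sample size.

    ```python
    import statistics

    def coefficient_of_variation(values):
        """CV of a set of replicate measurements (e.g. apparent Young's modulus)."""
        return statistics.stdev(values) / statistics.mean(values)

    def find_rve(cv_by_size, tol=0.01):
        """Return the smallest sample size whose CV is within tol of the CV at
        the largest size tested, i.e. where the CV has converged to a constant."""
        sizes = sorted(cv_by_size)
        target = cv_by_size[sizes[-1]]
        for s in sizes:
            if abs(cv_by_size[s] - target) <= tol:
                return s
        return sizes[-1]
    ```

    For a homogeneous material such as the gelatin-gel control, the CV is flat from the start, so the smallest size tested is returned.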

  19. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram-sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼ 5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in solution containing carbon substantially enhanced precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report.

  20. Global Threat Reduction Initiative Fuel Thermo-Physical Characterization Project: Sample Management Plan

    Energy Technology Data Exchange (ETDEWEB)

    Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Pereira, Mario M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Steen, Franciska H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-01-01

    This sample management plan provides guidelines for sectioning, preparation, acceptance criteria, analytical path, and end-of-life disposal for the fuel element segments utilized in the Global Threat Reduction Initiative (GTRI), Fuel Thermo-Physical Characterization Project. The Fuel Thermo-Physical Characterization Project is tasked with analysis of irradiated Low Enriched Uranium (LEU) Molybdenum (U-Mo) fuel element samples to support the GTRI conversion program. Sample analysis may include optical microscopy (OM), scanning electron microscopy (SEM) fuel-surface interface analysis, gas pycnometry (density) measurements, laser flash analysis (LFA), differential scanning calorimetry (DSC), thermogravimetry and differential thermal analysis with mass spectroscopy (TG /DTA-MS), Inductively Coupled Plasma Spectrophotometry (ICP), alpha spectroscopy, and Thermal Ionization Mass Spectroscopy (TIMS). The project will utilize existing Radiochemical Processing Laboratory (RPL) operating, technical, and administrative procedures for sample receipt, processing, and analyses. Test instructions (TIs), which are documents used to provide specific details regarding the implementation of an existing RPL approved technical or operational procedure, will also be used to communicate to staff project specific parameters requested by the Principal Investigator (PI). TIs will be developed, reviewed, and issued in accordance with the latest revision of the RPL-PLN-700, RPL Operations Plan. Additionally, the PI must approve all project test instructions and red-line changes to test instructions.

  1. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...

  2. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.

  3. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates, and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
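    As a point of comparison, a conventional Wald-type approximation (not the sharp bounds derived in the paper) illustrates the ingredients the abstract describes: the mean number of events per group, the dispersion capturing between-subject variability, and the effect size on the log scale. All names and defaults below are illustrative.

    ```python
    from math import ceil, log
    from statistics import NormalDist

    def nb_sample_size_per_group(rate0, rate_ratio, follow_up, dispersion,
                                 alpha=0.05, power=0.8):
        """Per-group sample size for detecting a rate ratio under a negative
        binomial model, assuming equal allocation and common follow-up time.
        Wald-type approximation on the log rate ratio; a rough sketch only."""
        z = NormalDist().inv_cdf
        mu0 = rate0 * follow_up                # mean events, control group
        mu1 = rate0 * rate_ratio * follow_up   # mean events, treatment group
        # Approximate variance of the log rate-ratio estimate (times n):
        # Poisson part 1/mu per group plus 2*dispersion for overdispersion.
        var = 1.0 / mu0 + 1.0 / mu1 + 2.0 * dispersion
        n = (z(1 - alpha / 2) + z(power)) ** 2 * var / log(rate_ratio) ** 2
        return ceil(n)
    ```

    Note how the returned size splits into the two terms the abstract mentions: the `1/mu` contributions driven by mean event counts, and the dispersion term driven by between-subject variability.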

  4. Tank 241-SY-101 push mode core sampling and analysis plan

    International Nuclear Information System (INIS)

    CONNER, J.M.

    1998-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for push mode core samples from tank 241-SY-101 (SY-101). It is written in accordance with Data Quality Objective to Support Resolution of the Flammable Gas Safety Issue (Bauer 1998), Low Activity Waste Feed Data Quality Objectives (Wiemers and Miller 1997 and DOE 1998), Data Quality Objectives for TWRS Privatization Phase I: Confirm Tank T is an Appropriate Feed Source for Low-Activity Waste Feed Batch X (Certa 1998), and the Tank Safety Screening Data Quality Objective (Dukelow et al. 1995). The Tank Characterization Technical Sampling Basis document (Brown et al. 1998) indicates that these issues apply to tank SY-101 for this sampling event. Brown et al. also identifies high-level waste, regulatory, pretreatment and disposal issues as applicable issues for this tank. However, these issues will not be addressed via this sampling event

  5. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
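    The rule of thumb stated in the last sentence, adding to the dropout-free sample size the number of subjects expected to drop from a sample of that original size, is a one-liner; note that it differs from the common n/(1 - d) inflation. The function name is illustrative.

    ```python
    from math import ceil

    def adjust_for_dropouts(n_no_dropout, dropout_rate):
        """Rule of thumb from the abstract: add the expected number of
        dropouts from a sample of the original size to that original size."""
        return n_no_dropout + ceil(n_no_dropout * dropout_rate)
    ```

    For example, a study powered at n = 100 with an anticipated 20% dropout rate would enroll 100 + 20 = 120 subjects, versus 125 under the n/(1 - d) convention.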

  6. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget for a determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out by the relative NAA method or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  7. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Soil sampling and analysis plan for the 3718-F Alkali Metal Treatment and Storage Facility closure activities

    Energy Technology Data Exchange (ETDEWEB)

    Sonnichsen, J.C.

    1997-05-01

    Amendment V.13.B.b to the approved closure plan (DOE-RL 1995a) requires that a soil sampling and analysis plan be prepared and submitted to the Washington State Department of Ecology (Ecology) for review and approval. Amendment V.13.B.c requires that a diagram of the 3718-F Alkali Metal Treatment and Storage Facility unit (the treatment, storage, and disposal [TSD] unit) boundary that is to be closed, including the maximum extent of operation, be prepared and submitted as part of the soil sampling and analysis plan. This document describes the sampling and analysis that is to be performed in response to these requirements and amends the closure plan. Specifically, this document supersedes Section 6.2, lines 43--46, and Section 7.3.6 of the closure plan. Results from the analysis will be compared to cleanup levels identified in the closure plan. These cleanup levels will be established using residential exposure assumptions in accordance with the Model Toxics Control Act (MTCA) Cleanup Regulation (Washington Administrative Code [WAC] 173-340) as required in Amendment V.13.B.I. Results of all sampling, including the raw analytical data, a summary of analytical results, a data validation package, and a narrative summary with conclusions will be provided to Ecology as specified in Amendment V.13.B.e. The results and process used to collect and analyze the soil samples will be certified by a licensed professional engineer. These results and a certificate of closure for the balance of the TSD unit, as outlined in Chapter 7.0 of the approved closure plan (storage shed, concrete pad, burn building, scrubber, and reaction tanks), will provide the basis for a closure determination.

  9. Soil sampling and analysis plan for the 3718-F Alkali Metal Treatment and Storage Facility closure activities

    International Nuclear Information System (INIS)

    Sonnichsen, J.C.

    1997-01-01

    Amendment V.13.B.b to the approved closure plan (DOE-RL 1995a) requires that a soil sampling and analysis plan be prepared and submitted to the Washington State Department of Ecology (Ecology) for review and approval. Amendment V.13.B.c requires that a diagram of the 3718-F Alkali Metal Treatment and Storage Facility unit (the treatment, storage, and disposal [TSD] unit) boundary that is to be closed, including the maximum extent of operation, be prepared and submitted as part of the soil sampling and analysis plan. This document describes the sampling and analysis that is to be performed in response to these requirements and amends the closure plan. Specifically, this document supersedes Section 6.2, lines 43--46, and Section 7.3.6 of the closure plan. Results from the analysis will be compared to cleanup levels identified in the closure plan. These cleanup levels will be established using residential exposure assumptions in accordance with the Model Toxics Control Act (MTCA) Cleanup Regulation (Washington Administrative Code [WAC] 173-340) as required in Amendment V.13.B.I. Results of all sampling, including the raw analytical data, a summary of analytical results, a data validation package, and a narrative summary with conclusions will be provided to Ecology as specified in Amendment V.13.B.e. The results and process used to collect and analyze the soil samples will be certified by a licensed professional engineer. These results and a certificate of closure for the balance of the TSD unit, as outlined in Chapter 7.0 of the approved closure plan (storage shed, concrete pad, burn building, scrubber, and reaction tanks), will provide the basis for a closure determination

  10. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W; Ding, X; Hu, Y; Shen, J; Korte, S; Bues, M [Mayo Clinic Arizona, Phoenix, AZ (United States); Schild, S; Wong, W [Mayo Clinic AZ, Phoenix, AZ (United States); Chang, J [MD Anderson Cancer Center, Houston, TX (United States); Liao, Z; Sahoo, N [UT MD Anderson Cancer Center, Houston, TX (United States); Herman, M [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate internal target volume (ITV). The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student-t test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for ITV and most OARs. With interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity is observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and similar impact of interplay effect compared to larger spots. Small spot size requires the use of larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant to determine plan

  11. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    International Nuclear Information System (INIS)

    Liu, W; Ding, X; Hu, Y; Shen, J; Korte, S; Bues, M; Schild, S; Wong, W; Chang, J; Liao, Z; Sahoo, N; Herman, M

    2016-01-01

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate internal target volume (ITV). The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student-t test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for ITV and most OARs. With interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity is observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and similar impact of interplay effect compared to larger spots. Small spot size requires the use of larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant to determine plan

  12. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recently derived prediction result while deleting the first value of the formerly used sample data set. This rolling mechanism is an efficient technique owing to its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and the requirement of little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
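    The rolling mechanism can be sketched with an AR(1) model with intercept, refit by ordinary least squares at each step. The paper's AR equation is more general; the order-1 choice and the function name here are illustrative only.

    ```python
    def rolling_ar1_forecast(series, steps=3):
        """Sketch of the rolling mechanism: at each step an AR(1) model with
        intercept is refit to the current window by least squares, the
        one-step-ahead prediction is appended to the window, and the oldest
        observation is dropped before the next fit."""
        window = list(series)
        preds = []
        for _ in range(steps):
            x, y = window[:-1], window[1:]        # lagged pairs (x_{t-1}, x_t)
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            sxx = sum((xi - mx) ** 2 for xi in x)
            sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            slope = sxy / sxx
            intercept = my - slope * mx
            pred = intercept + slope * window[-1]  # one-step-ahead forecast
            preds.append(pred)
            window = window[1:] + [pred]           # roll the data window
        return preds
    ```

    Because each refit uses only the current window, the model tracks slowly drifting (nonstationary) behavior at negligible computational cost, which is the stated appeal of the rolling idea.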

  13. Tank 241-AZ-101 Mixer Pump Test Vapor Sampling and Analysis Plan

    International Nuclear Information System (INIS)

    TEMPLETON, A.M.

    2000-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for vapor samples obtained during the operation of mixer pumps in tank 241-AZ-101. The primary purpose of the mixer pump test (MPT) is to demonstrate that the two 300 horsepower mixer pumps installed in tank 241-AZ-101 can mobilize the settled sludge so that it can be retrieved for treatment and vitrification. Sampling will be performed in accordance with Tank 241-AZ-101 Mixer Pump Test Data Quality Objective (Banning 1999) and Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis (Mulkey 1999). The sampling will verify if current air emission estimates used in the permit application are correct and provide information for future air permit applications

  14. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (theta) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2theta)^4.

  15. Sampling and Analysis Plan Update for Groundwater Monitoring 1100-EM-1 Operable Unit

    International Nuclear Information System (INIS)

    DR Newcomer

    1999-01-01

    This document updates the sampling and analysis plan (Department of Energy/Richland Operations--95-50) to reflect current groundwater monitoring at the 1100-EM-1 Operable Unit. Items requiring updating included sampling and analysis protocol, quality assurance and quality control, groundwater level measurement procedure, and data management. The plan covers groundwater monitoring, as specified in the 1993 Record of Decision, during the 5-year review period from 1995 through 1999. Following the 5-year review period, groundwater-monitoring data will be reviewed by the Environmental Protection Agency to evaluate the progress of natural attenuation of trichloroethylene. Monitored natural attenuation and institutional controls for groundwater use at the inactive Horn Rapids Landfill were the selected remedy specified in the Record of Decision

  16. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship, together with a systematic assessment of methodological parameters (in particular optimal plot size), is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m2 to 10 000 m2 over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining the optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  17. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  18. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
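    The reduction described above can be sketched numerically. The following is a minimal illustration and not the authors' implementation: it forms the equivalent two-sample problem for a logistic slope test by placing two equally sized groups at logit(p̄) ± βσ, so their log-odds differ by the slope times twice the covariate SD, and then applies the standard two-proportion normal-approximation sample size formula. Centring on the logit scale only approximately preserves the overall event probability, and the function name `n_logistic` is illustrative.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def logit(p):
    return log(p / (1 - p))

def expit(x):
    return 1 / (1 + exp(-x))

def n_logistic(beta, sd_x, p_bar, alpha=0.05, power=0.80):
    """Approximate total sample size for testing slope `beta` in logistic
    regression, via an equivalent two-sample problem: two equally sized
    groups whose log-odds differ by beta * (2 * sd_x), centred near the
    overall event probability p_bar."""
    z = NormalDist().inv_cdf
    p1 = expit(logit(p_bar) - beta * sd_x)
    p2 = expit(logit(p_bar) + beta * sd_x)
    q = (z(1 - alpha / 2) * sqrt(2 * p_bar * (1 - p_bar))
         + z(power) * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return 2 * q ** 2 / (p1 - p2) ** 2
```

For example, a slope of 0.5 per SD of the covariate with an overall event probability of 0.3 gives a total sample size of roughly 150 under these assumptions.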

  19. Savannah River Site sample and analysis plan for Clemson Technical Center waste

    International Nuclear Information System (INIS)

    Hagstrom, T.

    1998-04-01

    The purpose of this sampling and analysis plan is to determine the chemical, physical, and radiological properties of the SRS radioactive polychlorinated biphenyl (PCB) liquid waste stream and to verify that it conforms to the Waste Acceptance Criteria of the Department of Energy (DOE) East Tennessee Technology Park (ETTP) Toxic Substances Control Act (TSCA) Incineration Facility. Waste being sent to the ETTP TSCA Incinerator for treatment must be sufficiently characterized to ensure that the waste stream meets the waste acceptance criteria for proper handling, classification, and processing of incoming waste under the Waste Storage and Treatment Facility's Operating Permits. This sampling and analysis plan is limited to WSRC container(s) of homogeneous or multiphasic radioactive PCB-contaminated liquids generated in association with a treatability study at the Clemson Technical Center (CTC) and currently stored at the WSRC Solid Waste Division Mixed Waste Storage Facility (MWSF)

  20. 241-Z-361 Sludge Characterization Sampling and Analysis Plan

    Energy Technology Data Exchange (ETDEWEB)

    BANNING, D.L.

    1999-08-05

    This sampling and analysis plan (SAP) identifies the type, quantity, and quality of data needed to support characterization of the sludge that remains in Tank 241-Z-361. The procedures described in this SAP are based on the results of the 241-Z-361 Sludge Characterization Data Quality Objectives (DQO) (BWHC 1999) process for the tank. The primary objectives of this project are to evaluate the contents of Tank 241-Z-361 in order to resolve safety and safeguards issues and to assess alternatives for sludge removal and disposal.

  1. 241-Z-361 Sludge Characterization Sampling and Analysis Plan

    Energy Technology Data Exchange (ETDEWEB)

    BANNING, D.L.

    1999-07-29

    This sampling and analysis plan (SAP) identifies the type, quantity, and quality of data needed to support characterization of the sludge that remains in Tank 241-Z-361. The procedures described in this SAP are based on the results of the 241-Z-361 Sludge Characterization Data Quality Objectives (DQO) (BWHC 1999) process for the tank. The primary objectives of this project are to evaluate the contents of Tank 241-Z-361 in order to resolve safety and safeguards issues and to assess alternatives for sludge removal and disposal.

  2. Sampling and analysis plan for the consolidated sludge samples from the canisters and floor of the 105-K East basin

    International Nuclear Information System (INIS)

    BAKER, R.B.

    1999-01-01

    This Sampling and Analysis Plan (SAP) provides direction for sampling of fuel canister and floor sludge from the K East Basin to complete the inventory of samples needed for sludge treatment process testing. Sample volumes and sources reflect recent reviews made by the sludge treatment subproject. The representative samples will be characterized to the extent needed for the material to be used effectively for testing. The sampling equipment used allows drawing of large-volume sludge samples and consolidation of sample material from a number of basin locations into one container. Once filled, the containers will be placed in a cask and transported to Hanford laboratories for recovery and evaluation. Included in the present SAP are the logic for sample location selection, the laboratory analysis procedures required, and the reporting needed to meet the Data Quality Objectives (DQOs) for this initiative

  3. Tracer gas diffusion sampling test plan

    International Nuclear Information System (INIS)

    Rohay, V.J.

    1993-01-01

    Efforts are under way to employ active and passive vapor extraction to remove carbon tetrachloride from the soil in the 200 West Area at the Hanford Site as part of the 200 West Area Carbon Tetrachloride Expedited Response Action. In the active approach, a vacuum is applied to a well, which causes soil gas surrounding the well to be drawn up to the surface. The contaminated air is cleaned by passage through a granular activated carbon bed. There are questions concerning the radius of influence associated with application of the vacuum system and related uncertainties about the soil-gas diffusion rates with and without the vacuum system present. To address these questions, a series of tracer gas diffusion sampling tests is proposed in which an inert, nontoxic tracer gas, sulfur hexafluoride (SF₆), will be injected into a well, and the rates of SF₆ diffusion through the surrounding soil horizon will be measured by sampling in nearby wells. Tracer gas tests will be conducted at sites very near the active vacuum extraction system and also at sites beyond the radius of influence of the active vacuum system. In the passive vapor extraction approach, barometric pressure fluctuations cause soil gas to be drawn to the surface through the well. At the passive sites, the effects of barometric ''pumping'' due to changes in atmospheric pressure will be investigated. Application of tracer gas testing to both the active and passive vapor extraction methods is described in the wellfield enhancement work plan (Rohay and Cameron 1993)

  4. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.
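    The traditional power calculation mentioned under PROCEDURES can be sketched as follows. This is a hedged reconstruction of a generic one-sided normal-approximation sample size for demonstrating that sensitivity exceeds a threshold, not the authors' exact formula; `n_for_sensitivity` is an illustrative name.

```python
from math import sqrt
from statistics import NormalDist

def n_for_sensitivity(p_true, p_null, alpha=0.05, power=0.80):
    """Number of diseased cases needed to demonstrate, one-sided, that
    sensitivity exceeds p_null when the true sensitivity is p_true
    (normal approximation to the binomial)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha), z(power)
    n = ((z_a * sqrt(p_null * (1 - p_null))
          + z_b * sqrt(p_true * (1 - p_true))) / (p_true - p_null)) ** 2
    return int(n) + 1  # round up to a whole number of cases
```

Under these assumptions, demonstrating a true sensitivity of 90% against an 80% threshold at one-sided α = 0.05 and 80% power requires on the order of 80 diseased cases.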

  5. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    Science.gov (United States)

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
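    The "random chance" scenario can be simulated in a few lines. The sketch below is our own simplified model, assuming each information source independently reveals each code with one fixed probability; the paper's simulations vary these quantities systematically across hypothetical populations.

```python
import random

def sample_size_to_saturation(n_codes=30, p_observe=0.2, seed=0,
                              max_steps=10_000):
    """Simulate the 'random chance' scenario: each sampled information
    source reveals each of n_codes independently with probability
    p_observe; theoretical saturation is reached once every code has
    been observed at least once.  Returns the number of sources needed."""
    rng = random.Random(seed)
    seen = set()
    for step in range(1, max_steps + 1):
        seen.update(c for c in range(n_codes) if rng.random() < p_observe)
        if len(seen) == n_codes:
            return step
    return None  # saturation not reached within max_steps
```

As in the coupon-collector problem, the required sample size grows with the number of codes and shrinks as the mean probability of observing a code rises, consistent with the abstract's finding that the mean observation probability matters more than the number of codes.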

  6. Engineering task plan for the development, fabrication and installation of rotary mode core sample truck bellows

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    The Rotary Mode Core Sampling Trucks (RMCSTs) currently use a multi-sectioned bellows between the grapple box and the quill rod to compensate for drill head motion and to provide a path for purge gas. The current bellows, which is detailed on drawing H-2-690059, is expensive to procure, has a lengthy procurement cycle, and is prone to failure. Therefore, a task has been identified to design, fabricate, and install a replacement bellows. This Engineering Task Plan (ETP) is the management plan document for accomplishing the identified tasks. Any change in the scope of the ETP shall require formal direction by the Characterization Engineering manager. This document shall also be considered the work planning document for developmental control per Development Control Requirements (HNF 1999a). The ETP covers the design, fabrication, and installation of a replacement bellows assembly for Rotary Mode Core Sampling Trucks 3 and 4

  7. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e., up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5% and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from leftovers of materials used in laboratory intercomparisons. A design study for a large-sample pool-side facility handling plate-type volumes had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  8. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-07-07

    Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.

  9. DOE responses to Ecology review comments for ''Sampling and analysis plans for the 100-D Ponds voluntary remediation project''

    International Nuclear Information System (INIS)

    1996-01-01

    The Sampling and Analysis Plan describes the sampling and analytical activities that will be performed to support closure of the 100-D Ponds at the Hanford Reservation. This report contains responses by the US Department of Energy to Ecology review comments on the ''Sampling and Analysis Plan for the 100-D Ponds Voluntary Remediation Project.''

  10. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Roč. 49, č. 4 (2015), s. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords : filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.953, year: 2015

  11. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  12. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has a small sample size limitation. We used a pooled method in a nonparametric bootstrap test that may overcome the problem related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
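    A minimal sketch of the pooled-resampling idea follows. It uses a plain mean difference rather than the paper's t statistic, and the function name is illustrative: under the null, both samples are redrawn from the pooled data, which is what distinguishes this scheme from resampling each group separately.

```python
import random
from statistics import mean

def pooled_bootstrap_pvalue(x, y, n_boot=5000, seed=1):
    """Two-sided test of equal means with pooled bootstrap resampling:
    under H0 both samples are redrawn (with replacement) from the pooled
    data, and the observed |mean difference| is ranked against the
    bootstrap null distribution (the +1 correction avoids p = 0)."""
    rng = random.Random(seed)
    observed = abs(mean(x) - mean(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in range(len(x))]
        by = [rng.choice(pooled) for _ in range(len(y))]
        if abs(mean(bx) - mean(by)) >= observed:
            hits += 1
    return (hits + 1) / (n_boot + 1)
```

Two clearly separated samples yield a small p-value, while identical samples yield a p-value of 1, as expected for a resampling test.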

  13. Impact of Spot Size and Spacing on the Quality of Robustly Optimized Intensity Modulated Proton Therapy Plans for Lung Cancer.

    Science.gov (United States)

    Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei

    2018-06-01

    To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large-spot machine.

  14. 2017 Annual Terrestrial Sampling Plan for Sandia National Laboratories/New Mexico on Kirtland Air Force Base

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, Stacy R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The 2017 Annual Terrestrial Sampling Plan for Sandia National Laboratories/New Mexico on Kirtland Air Force Base has been prepared in accordance with the “Letter of Agreement Between Department of Energy, National Nuclear Security Administration, Sandia Field Office (DOE/NNSA/SFO) and 377th Air Base Wing (ABW), Kirtland Air Force Base (KAFB) for Terrestrial Sampling” (signed January 2017), Sandia National Laboratories, New Mexico (SNL/NM). The Letter of Agreement requires submittal of an annual terrestrial sampling plan.

  15. 2018 Annual Terrestrial Sampling Plan for Sandia National Laboratories/New Mexico on Kirtland Air Force Base.

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, Stacy R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    The 2018 Annual Terrestrial Sampling Plan for Sandia National Laboratories/New Mexico on Kirtland Air Force Base has been prepared in accordance with the “Letter of Agreement Between Department of Energy, National Nuclear Security Administration, Sandia Field Office (DOE/NNSA/SFO) and 377th Air Base Wing (ABW), Kirtland Air Force Base (KAFB) for Terrestrial Sampling” (signed January 2017), Sandia National Laboratories, New Mexico (SNL/NM). The Letter of Agreement requires submittal of an annual terrestrial sampling plan.

  16. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...

  17. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    Science.gov (United States)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of the real PM size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during PM sample manipulation and the semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) fractions were examined as suspensions in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing the sample. Both the lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum, as measured during 24 h.

  18. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  19. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  20. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  1. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
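    The Monte Carlo strategy the authors compare against can be sketched as follows. This toy version is ours, not the rPowerSampleSize implementation: it estimates r-power for m one-sample z-tests under a per-endpoint Bonferroni correction, with the standardized effects and a common sample size as assumed inputs.

```python
import random
from statistics import NormalDist

def r_power_mc(effects, n, r, alpha=0.05, n_sim=2000, seed=2):
    """Monte Carlo r-power: probability that at least r of the m
    one-sample z-tests reject under a per-endpoint Bonferroni
    correction, where endpoint j has true standardized effect
    effects[j] and the common sample size is n."""
    z_crit = NormalDist().inv_cdf(1 - alpha / (2 * len(effects)))
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        rejected = sum(abs(rng.gauss(e * n ** 0.5, 1.0)) > z_crit
                       for e in effects)
        hits += rejected >= r
    return hits / n_sim
```

Independence across endpoints is assumed here for simplicity; the paper's formulas handle the general correlated case, which is where the Monte Carlo approach becomes slower but remains straightforward.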

  2. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%) or overpowered (i.e., the real power exceeded 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
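    The retro-fitting step can be illustrated for the continuous-outcome case. The sketch below is our own normal-approximation version, not the authors' code: it sizes a two-arm trial for an assumed standard deviation, then recomputes the power actually achieved if the true SD differs from that assumption.

```python
from math import sqrt
from statistics import NormalDist

def real_power(delta, sd_assumed, sd_true, alpha=0.05, nominal_power=0.80):
    """Size a two-arm trial (normal approximation, continuous outcome)
    for an assumed SD, then return the power actually achieved when the
    true SD differs from the assumption."""
    nd = NormalDist()
    z_a, z_b = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(nominal_power)
    # Sample size per arm implied by the assumed SD
    n_per_arm = 2 * ((z_a + z_b) * sd_assumed / delta) ** 2
    # Non-centrality actually realised under the true SD
    z_actual = delta / (sd_true * sqrt(2 / n_per_arm))
    return nd.cdf(z_actual - z_a)
```

For example, underestimating the SD by 30% (sd_true = 1.3 × sd_assumed) drops the real power from the nominal 80% to roughly 58% in this approximation, illustrating the sensitivity the abstract reports.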

  3. Sampling and Analysis Plan for Verification Sampling of LANL-Derived Residual Radionuclides in Soils within Tract A-18-2 for Land Conveyance

    Energy Technology Data Exchange (ETDEWEB)

    Ruedig, Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-30

    Public Law 105-119 directs the U.S. Department of Energy (DOE) to convey or transfer parcels of land to the Incorporated County of Los Alamos or their designees and to the Department of Interior, Bureau of Indian Affairs, in trust for the Pueblo de San Ildefonso. Los Alamos National Security is tasked to support DOE in conveyance and/or transfer of identified land parcels no later than September 2022. Under DOE Order 458.1, Radiation Protection of the Public and the Environment (O458.1, 2013) and Los Alamos National Laboratory (LANL or the Laboratory) implementing Policy 412 (P412, 2014), real property with the potential to contain residual radioactive material must meet the criteria for clearance and release to the public. This Sampling and Analysis Plan (SAP) covers a second investigation of Tract A-18-2 for the purpose of verifying the previous sampling results (LANL 2017). This sampling plan requires 18 project-specific soil samples for use in radiological clearance decisions consistent with LANL Procedure ENV-ES-TP-238 (2015a) and guidance in the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM, 2000). The sampling work will be conducted by LANL, and samples will be evaluated by a LANL-contracted independent lab. However, there will be federal review (verification) of all steps of the sampling process.

  4. Sampling and analysis plan for Mound Plant D&D soils packages, Revision 1

    International Nuclear Information System (INIS)

    1991-02-01

    There are currently 682 containers of soils in storage at Mound Plant, generated between April 1 and October 31, 1990 as a result of excavation of soils containing plutonium-238 at two ongoing Decontamination and Decommissioning (D&D) Program sites. These areas are known as Area 14, the waste transfer system (WTS) hillside, and Area 17, the Special Metallurgical (SM) Building area. The soils from these areas are part of Mound Plant waste stream number AMDM-000000010, Contaminated Soil, and are proposed for shipment to the Nevada Test Site (NTS) for disposal as low-level radioactive waste. The sealed waste packages, constructed of either wood or metal, are currently being stored in Building 31 and at other locations throughout the Mound facility. At a meeting in Las Vegas, Nevada on October 26, 1990, DOE Nevada Operations Office (DOE-NV) and NTS representatives requested that the Mound Plant D&D soils proposed for shipment to NTS be sampled for Toxicity Characteristic Leaching Procedure (TCLP) constituents. On December 14, 1990, DOE-NV also requested that additional analyses be performed on the soils from one of the soils boxes for polychlorinated biphenyls (PCBs), particle size distribution, and free liquids. The purpose of this plan is to document the proposed sampling and analyses of the packages of D&D soils produced prior to October 31, 1990. In order to provide a thorough description of the soils excavated from the WTS and SM areas, sections 1.1 and 1.2 provide historical information concerning the D&D soils, including waste stream evaluations and past sampling data

  5. Comprehensive work plan and health and safety plan for the 7500 Area Contamination Site sampling at Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Burman, S.N.; Landguth, D.C.; Uziel, M.S.; Hatmaker, T.L.; Tiner, P.F.

    1992-05-01

    As part of the Environmental Restoration Program sponsored by the US Department of Energy's Office of Environmental Restoration and Waste Management, this plan has been developed for the environmental sampling efforts at the 7500 Area Contamination Site, Oak Ridge National Laboratory (ORNL), Oak Ridge, Tennessee. This plan was developed by the Measurement Applications and Development Group (MAD) of the Health and Safety Research Division of ORNL and will be implemented by ORNL/MAD. Major components of the plan include (1) a quality assurance project plan that describes the scope and objectives of ORNL/MAD activities at the 7500 Area Contamination Site, assigns responsibilities, and provides emergency information for contingencies that may arise during field operations; (2) sampling and analysis sections; (3) a site-specific health and safety section that describes general site hazards, hazards associated with specific tasks, personnel protection requirements, and mandatory safety procedures; (4) procedures and requirements for equipment decontamination and responsibilities for generated wastes, waste management, and contamination control; and (5) a discussion of form completion and reporting required to document activities at the 7500 Area Contamination Site

  6. Comprehensive work plan and health and safety plan for the 7500 Area Contamination Site sampling at Oak Ridge National Laboratory, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Burman, S.N.; Landguth, D.C.; Uziel, M.S.; Hatmaker, T.L.; Tiner, P.F.

    1992-05-01

    As part of the Environmental Restoration Program sponsored by the US Department of Energy's Office of Environmental Restoration and Waste Management, this plan has been developed for the environmental sampling efforts at the 7500 Area Contamination Site, Oak Ridge National Laboratory (ORNL), Oak Ridge, Tennessee. This plan was developed by the Measurement Applications and Development Group (MAD) of the Health and Safety Research Division of ORNL and will be implemented by ORNL/MAD. Major components of the plan include (1) a quality assurance project plan that describes the scope and objectives of ORNL/MAD activities at the 7500 Area Contamination Site, assigns responsibilities, and provides emergency information for contingencies that may arise during field operations; (2) sampling and analysis sections; (3) a site-specific health and safety section that describes general site hazards, hazards associated with specific tasks, personnel protection requirements, and mandatory safety procedures; (4) procedures and requirements for equipment decontamination and responsibilities for generated wastes, waste management, and contamination control; and (5) a discussion of form completion and reporting required to document activities at the 7500 Area Contamination Site.

  7. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  8. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    Reer, B.

    2004-01-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  9. Tank 241-Z-361 vapor sampling and analysis plan

    Energy Technology Data Exchange (ETDEWEB)

    BANNING, D.L.

    1999-02-23

    Tank 241-Z-361 is identified in the Hanford Federal Facility Agreement and Consent Order (commonly referred to as the Tri-Party Agreement), Appendix C, (Ecology et al. 1994) as a unit to be remediated under the authority of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). As such, the U.S. Environmental Protection Agency will serve as the lead regulatory agency for remediation of this tank under the CERCLA process. At the time this unit was identified as a CERCLA site under the Tri-Party Agreement, it was placed within the 200-ZP-2 Operable Unit. In 1997, The Tri-parties redefined 200 Area Operable Units into waste groupings (Waste Site Grouping for 200 Areas Soils Investigations [DOE-RL 1992 and 1997]). A waste group contains waste sites that share similarities in geological conditions, function, and types of waste received. Tank 241-Z-361 is identified within the CERCLA Plutonium/Organic-rich Process Condensate/Process Waste Group (DOE-RL 1992). The Plutonium/Organic-rich Process Condensate/Process Waste Group has been prioritized for remediation beginning in the year 2004. Results of Tank 216-Z-361 sampling and analysis described in this Sampling and Analysis Plan (SAP) and in the SAP for sludge sampling (to be developed) will determine whether expedited response actions are required before 2004 because of the hazards associated with tank contents. Should data conclude that remediation of this tank should occur earlier than is planned for the other sites in the waste group, it is likely that removal alternatives will be analyzed in a separate Engineering Evaluation/Cost Analysis (EE/CA). Removal actions would proceed after the U.S. Environmental Protection Agency (EPA) signs an Action Memorandum describing the selected removal alternative for Tank 216-Z-361. If the data conclude that there is no immediate threat to human health and the environment from this tank, remedial actions for the tank will be defined in a

  10. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  11. UMTRA project water sampling and analysis plan, Monument Valley, Arizona

    International Nuclear Information System (INIS)

    1994-04-01

    The Monument Valley Uranium Mill Tailings Remedial Action (UMTRA) Project site in Cane Valley is a former uranium mill that has undergone surface remediation in the form of tailings and contaminated materials removal. Contaminated materials from the Monument Valley (Arizona) UMTRA Project site have been transported to the Mexican Hat (Utah) UMTRA Project site for consolidation with the Mexican Hat tailings. Tailings removal was completed in February 1994. Three geologic units at the site contain water: the unconsolidated eolian and alluvial deposits (alluvial aquifer), the Shinarump Conglomerate (Shinarump Member), and the De Chelly Sandstone. Water quality analyses indicate the contaminant plume has migrated north of the site and is mainly in the alluvial aquifer. An upward hydraulic gradient in the De Chelly Sandstone provides some protection to that aquifer. This water sampling and analysis plan recommends sampling domestic wells, monitor wells, and surface water in April and September 1994. The purpose of sampling is to continue periodic monitoring for the surface program, evaluate changes to water quality for site characterization, and provide data for the baseline risk assessment. Samples taken in April will be representative of high ground water levels and samples taken in September will be representative of low ground water levels. Filtered and nonfiltered samples will be analyzed for plume indicator parameters and baseline risk assessment parameters

  12. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points we use capture-mark-recapture methods both to allow us to account for missing individuals, and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years and size-dependent at early life stages and size independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time replicated datasets with small sample sizes and missing individuals without any loss of sample size, and including explanatory covariates.

  13. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
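    The record above uses stratified random sampling with three strata (metropolitan, urban, rural) and a total sample of 4,000. A minimal sketch of the allocation step is shown below; the stratum population sizes are hypothetical round numbers chosen only to resemble the scale of the NCSP frame, not the study's actual figures.

```python
def proportional_allocation(stratum_sizes, n_total):
    """Allocate a total sample size across strata in proportion to stratum size.

    Largest-remainder rounding keeps the integer allocations summing to n_total."""
    pop = sum(stratum_sizes.values())
    raw = {k: n_total * v / pop for k, v in stratum_sizes.items()}
    alloc = {k: int(r) for k, r in raw.items()}
    # hand the leftover units to the strata with the largest fractional remainders
    leftover = n_total - sum(alloc.values())
    for k, _ in sorted(raw.items(), key=lambda kv: kv[1] - int(kv[1]), reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# hypothetical stratum sizes standing in for the metropolitan/urban/rural frame
strata = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_000}
alloc = proportional_allocation(strata, 4000)
```

    Proportional allocation is only one option; the study could equally have used equal or optimal (Neyman) allocation depending on within-stratum variability.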

  14. Planning spatial sampling of the soil from an uncertain reconnaissance variogram

    Science.gov (United States)

    Lark, R. Murray; Hamilton, Elliott M.; Kaninga, Belinda; Maseka, Kakoma K.; Mutondo, Moola; Sakala, Godfrey M.; Watts, Michael J.

    2017-12-01

    An estimated variogram of a soil property can be used to support a rational choice of sampling intensity for geostatistical mapping. However, it is known that estimated variograms are subject to uncertainty. In this paper we address two practical questions. First, how can we make a robust decision on sampling intensity, given the uncertainty in the variogram? Second, what are the costs incurred in terms of oversampling because of uncertainty in the variogram model used to plan sampling? To achieve this we show how samples of the posterior distribution of variogram parameters, from a computational Bayesian analysis, can be used to characterize the effects of variogram parameter uncertainty on sampling decisions. We show how one can select a sample intensity so that a target value of the kriging variance is not exceeded with some specified probability. This will lead to oversampling, relative to the sampling intensity that would be specified if there were no uncertainty in the variogram parameters. One can estimate the magnitude of this oversampling by treating the tolerable grid spacing for the final sample as a random variable, given the target kriging variance and the posterior sample values. We illustrate these concepts with some data on total uranium content in a relatively sparse sample of soil from agricultural land near mine tailings in the Copperbelt Province of Zambia.
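    The logic described above can be sketched numerically: for each posterior draw of the variogram parameters, find the largest grid spacing whose kriging variance stays below the target, then pick a lower quantile of those tolerable spacings so the target is met with the desired assurance. This is an illustrative sketch, not the authors' code; the exponential variogram with no nugget, the four-point square sampling cell, and the posterior values are all assumptions made for the example.

```python
import math

def exp_variogram(h, sill, vrange):
    """Exponential variogram, no nugget: gamma(h) = sill * (1 - exp(-h / vrange))."""
    return sill * (1.0 - math.exp(-h / vrange)) if h > 0.0 else 0.0

def _solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for the kriging system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging_variance(spacing, sill, vrange):
    """Ordinary-kriging variance at the centre of a square cell of 4 sample points."""
    pts = [(0.0, 0.0), (spacing, 0.0), (0.0, spacing), (spacing, spacing)]
    centre = (spacing / 2.0, spacing / 2.0)
    g = lambda p, q: exp_variogram(math.dist(p, q), sill, vrange)
    n = len(pts)
    A = [[g(pts[i], pts[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [g(p, centre) for p in pts] + [1.0]
    x = _solve(A, b)
    lam, mu = x[:n], x[n]
    return sum(l * g(p, centre) for l, p in zip(lam, pts)) + mu

def tolerable_spacing(sill, vrange, target, s_max=None):
    """Largest spacing whose kriging variance stays below `target` (bisection)."""
    s_max = s_max if s_max is not None else 10.0 * vrange
    if kriging_variance(s_max, sill, vrange) <= target:
        return s_max
    lo, hi = 1e-6, s_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if kriging_variance(mid, sill, vrange) <= target:
            lo = mid
        else:
            hi = mid
    return lo

def robust_spacing(posterior, target, assurance=0.9):
    """Spacing meeting the kriging-variance target for a fraction `assurance`
    of posterior (sill, range) draws: a lower quantile of tolerable spacings."""
    spacings = sorted(tolerable_spacing(s, r, target) for s, r in posterior)
    return spacings[int((1.0 - assurance) * len(spacings))]
```

    The gap between `robust_spacing` over the posterior and `tolerable_spacing` at a single point estimate is exactly the oversampling cost the paper quantifies.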

  15. UMTRA project water sampling and analysis plan, Ambrosia Lake, New Mexico

    International Nuclear Information System (INIS)

    1994-02-01

    This water sampling and analysis plan (WSAP) provides the basis for ground water sampling at the Ambrosia Lake Uranium Mill Tailings Remedial Action (UMTRA) Project site during fiscal year 1994. It identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the monitoring locations and will be updated annually. The Ambrosia Lake site is in McKinley County, New Mexico, about 40 kilometers (km) (25 miles [mi]) north of Grants, New Mexico, and 1.6 km (1 mi) east of New Mexico Highway 509 (Figure 1.1). The town closest to the tailings pile is San Mateo, about 16 km ( 10 mi) southeast (Figure 1.2). The former mill and tailings pile are in Section 28, and two holding ponds are in Section 33, Township 14 North, Range 9 West. The site is shown on the US Geological Survey (USGS) map (USGS, 1980). The site is approximately 2100 meters (m) (7000 feet [ft]) above sea level

  16. Microbiological sampling plan based on risk classification to verify supplier selection and production of served meals in food service operation.

    Science.gov (United States)

    Lahou, Evy; Jacxsens, Liesbeth; Van Landeghem, Filip; Uyttendaele, Mieke

    2014-08-01

    Food service operations are confronted with a diverse range of raw materials and served meals. The implementation of a microbial sampling plan in the framework of verification of suppliers and of the operation's own production process (functionality of its prerequisite and HACCP programs) demands selection of food products and sampling frequencies. However, these are often selected without a well-described, scientifically underpinned sampling plan. Therefore, an approach is presented on how to set up a focused sampling plan, enabled by a microbial risk categorization of food products, for both incoming raw materials and meals served to consumers. The sampling plan was implemented as a case study during a one-year period in an institutional food service operation to test the feasibility of the chosen approach. This resulted in 123 samples of raw materials and 87 samples of meal servings (focused on food products categorized as high risk), which were analyzed for spoilage bacteria, hygiene indicators and foodborne pathogens. Although sampling plans are intrinsically limited in assessing the quality and safety of sampled foods, the plan was shown to be useful in revealing major non-compliances and opportunities to improve the food safety management system in place. Points of attention deduced in the case study were control of Listeria monocytogenes in raw meat spread and raw fish, as well as the overall microbial quality of served sandwiches and salads. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. TMI-2 accident evaluation program sample acquisition and examination plan. Executive summary

    International Nuclear Information System (INIS)

    Russell, M.L.; McCardell, R.K.; Broughton, J.M.

    1985-12-01

    The purpose of the TMI-2 Accident Evaluation Program Sample Acquisition and Examination (TMI-2 AEP SA and E) program is to develop and implement a test and inspection plan that completes the current-condition characterization of (a) the TMI-2 equipment that may have been damaged by the core damage events and (b) the TMI-2 core fission product inventory. The characterization program includes both sample acquisitions and examinations and in-situ measurements. Fission product characterization involves locating the fission products as well as determining their chemical form and determining material association

  18. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
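    As a concrete illustration of the estimation problem described above, the sketch below implements one widely cited approximation for the scenario where a trial reports the minimum, median, maximum and sample size (the formulas are in the spirit of Wan et al.'s incorporation of the sample size via a normal quantile; treat them as an illustrative stand-in, and consult the paper's summary table for the full set of scenarios).

```python
from statistics import NormalDist

def mean_sd_from_median_range(a, m, b, n):
    """Approximate the sample mean and SD from minimum (a), median (m),
    maximum (b) and sample size (n).

    mean ~ (a + 2*m + b) / 4
    sd   ~ (b - a) / (2 * Phi^-1((n - 0.375) / (n + 0.25)))
    where Phi^-1 is the standard-normal quantile function."""
    est_mean = (a + 2.0 * m + b) / 4.0
    z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    est_sd = (b - a) / (2.0 * z)
    return est_mean, est_sd
```

    Note how the sample size enters through the quantile term: for n = 100 the divisor is close to 5, so the SD estimate is roughly range/5 rather than the fixed range/4 of earlier rules of thumb.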

  19. In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size

    Directory of Open Access Journals (Sweden)

    Stefano Schiavon

    2010-01-01

    In vitro degradability with DaisyII (D) equipment is commonly performed with 0.5 g of feed sample in each filter bag. The literature reports that reducing the ratio of sample size to bag surface could facilitate the release of soluble or fine particulate matter. A reduction of sample size to 0.25 g could therefore improve the correlation between the measurements provided by D and by the conventional batch culture (BC). This hypothesis was screened by analysing the results of 2 trials. In trial 1, 7 feeds were incubated for 48 h with rumen fluid (3 runs x 4 replications) with both D (0.5 g/bag) and BC; the regressions between the mean values provided for the various feeds in each run by the 2 methods, for both NDF degradability (NDFd) and in vitro true DM degradability (IVTDMD), had R2 of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs x 8 replications) with D (0.25 g/bag) and BC; the corresponding regressions for NDFd and IVTDMD showed R2 of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.

  20. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in a binary probit regression model under the MLE method and under Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both are assessed by simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method for small sample sizes is higher than under Firth's approach. For larger sample sizes, the probability decreases and is similar between the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
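    Complete separation, the condition described above, is easy to detect for a single predictor: some threshold on x puts all y = 0 observations on one side and all y = 1 observations on the other. A minimal check (illustrative, not from the paper; quasi-complete separation with ties at the boundary would need non-strict comparisons):

```python
def is_separated(x, y):
    """True if a threshold on the single predictor x completely separates
    the two categories of the binary response y."""
    x0 = [xi for xi, yi in zip(x, y) if yi == 0]
    x1 = [xi for xi, yi in zip(x, y) if yi == 1]
    if not x0 or not x1:
        return True  # degenerate: only one response category observed
    # complete separation: the two groups' x-ranges do not overlap
    return max(x0) < min(x1) or max(x1) < min(x0)
```

    When this returns True, the probit (or logit) likelihood has no finite maximum and the MLE diverges, which is exactly the situation Firth's penalized likelihood is designed to handle.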

  1. A sample lesson plan for the course English Composition II

    Directory of Open Access Journals (Sweden)

    Córdoba Cubillo, Patricia

    2005-03-01

    The goal of this article is to present a lesson plan and a series of sample tasks to help instructors of the course English Composition II, at the School of Modern Languages of the University of Costa Rica, guide students in writing an essay that integrates the four skills: listening, speaking, reading, and writing. These activities will be a source of comprehensible input for the learners that will hopefully result in a good piece of writing.

  2. Tank 241-SX-105 rotary mode core sampling and analysis plan

    International Nuclear Information System (INIS)

    Simpson, B.C.

    1998-01-01

    This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for rotary mode core samples from tank 241-SX-105 (SX-105). It is written in accordance with Tank Safety Screening Data Quality Objective (Dukelow et al. 1995) and Memorandum of Understanding for the Organic Complexant Safety Issue Data Requirements (Schreiber 1997a). Vapor screening issues apply as well, but are outside the scope of this SAP. A physical profile prediction based on waste fill history and previous sampling information is provided in Appendix A. Prior to core sampling, the dome space (below the riser) shall be measured for the presence of flammable gases. The measurement shall be taken from within the dome space and the data reported as a percentage of the lower flammability limit (LFL). The results shall be transmitted to the tank coordinator within ten working days of the sampling event (Schreiber 1997b). If the results are above 25 percent of the LFL when analyzing by gas chromatography/mass spectrometry or gas-specific monitoring gauges or above 10% of the LFL when analyzing with a combustible gas meter, the necessity for recurring sampling for flammable gas concentration and the frequency of such sampling will be determined by the Flammable Gas Safety Project. Any additional vapor sampling is not within the scope of this SAP

  3. Sympathy for the Devil: Detailing the Effects of Planning-Unit Size, Thematic Resolution of Reef Classes, and Socioeconomic Costs on Spatial Priorities for Marine Conservation.

    Science.gov (United States)

    Cheok, Jessica; Pressey, Robert L; Weeks, Rebecca; Andréfouët, Serge; Moloney, James

    2016-01-01

    Spatial data characteristics have the potential to influence various aspects of prioritising biodiversity areas for systematic conservation planning. There has been some exploration of the combined effects of size of planning units and level of classification of physical environments on the pattern and extent of priority areas. However, these data characteristics have yet to be explicitly investigated in terms of their interaction with different socioeconomic cost data during the spatial prioritisation process. We quantify the individual and interacting effects of three factors (planning-unit size, thematic resolution of reef classes, and spatial variability of socioeconomic costs) on spatial priorities for marine conservation, in typical marine planning exercises that use reef classification maps as a proxy for biodiversity. We assess these factors by creating 20 unique prioritisation scenarios involving combinations of different levels of each factor. Because output data from these scenarios are analogous to ecological data, we applied ecological statistics to determine spatial similarities between reserve designs. All three factors influenced prioritisations to different extents, with cost variability having the largest influence, followed by planning-unit size and thematic resolution of reef classes. The effect of thematic resolution on spatial design depended on the variability of the cost data used. In terms of incidental representation of conservation objectives derived from finer-resolution data, scenarios prioritised with uniform cost outperformed those prioritised with variable cost. Following our analyses, we make recommendations to help maximise the spatial and cost efficiency and potential effectiveness of future marine conservation plans in similar planning scenarios. We recommend that planners: employ the smallest planning-unit size practical; invest in data at the highest possible resolution; and, when planning across regional extents with the intention

  4. Sampling plan for the analysis of aflatoxin in peanuts and corn: an update Sistema de amostragem para análise de aflatoxinas em amendoim e milho: uma atualização

    Directory of Open Access Journals (Sweden)

    Homero Fonseca

    2002-06-01

    The aim of this paper was to update the sampling plan for the analysis of mycotoxins in grains formerly published by the author. The proposed alterations were based on experience acquired in its application and on FAO recommendations. This update restricts the scope of the former plan and establishes a sampling plan for the analysis of aflatoxin in peanuts and corn, giving, by means of modified formulas, the minimum number of sacks or points (when in bulk) from which incremental samples should be drawn to make a bulk sample. Fractional exponents (square roots) in the formulas proportionally decrease the number of sacks/points to be sampled as the lot size increases. Operating Characteristic (OC) curves developed for in-shell and shelled peanuts and for corn, as well as trend curves of the coefficient of variation for different sample sizes (weights), are presented.

  5. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    ... and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity ..., thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium ...

  6. Mind over platter: pre-meal planning and the control of meal size in humans.

    Science.gov (United States)

    Brunstrom, J M

    2014-07-01

    It is widely accepted that meal size is governed by psychological and physiological processes that generate fullness towards the end of a meal. However, observations of natural eating behaviour suggest that this preoccupation with within-meal events may be misplaced and that the role of immediate post-ingestive feedback (for example, gastric stretch) has been overstated. This review considers the proposition that the locus of control is more likely to be expressed in decisions about portion size, before a meal begins. Consistent with this idea, we have discovered that people are extremely adept at estimating the 'expected satiety' and 'expected satiation' of different foods. These expectations are learned over time and they are highly correlated with the number of calories that end up on our plate. Indeed, across a range of foods, the large variation in expected satiety/satiation may be a more important determinant of meal size than relatively subtle differences in palatability. Building on related advances, it would also appear that memory for portion size has an important role in generating satiety after a meal has been consumed. Together, these findings expose the importance of planning and episodic memory in the control of appetite and food intake in humans.

  7. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify in the data series sub-periods with a similar structure. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first one corresponding to seven simulated series with autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions.
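    A common rule for the size of an equivalent (effective) independent sample under a first-order autoregressive model, sketched here to illustrate the concept the abstract describes (not the authors' exact estimator), is n_eff = n(1 - ρ)/(1 + ρ), where ρ is the lag-1 autocorrelation:

```python
def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def equivalent_sample_size(x):
    """n_eff = n * (1 - rho) / (1 + rho) for an AR(1)-like series."""
    rho = lag1_autocorrelation(x)
    n = len(x)
    return n * (1 - rho) / (1 + rho)
```

    A strongly persistent series carries far fewer independent observations than its length suggests, while a negatively autocorrelated series can carry more.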

  8. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach.

    Science.gov (United States)

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-12-26

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So
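    The optimizer described is a simulated annealing loop. The following is a minimal generic sketch of that technique, with a toy two-waypoint "path" objective standing in for the Aη uncertainty criterion; the function names and the objective are illustrative assumptions, not the SoDDS implementation:

```python
import math
import random

def simulated_annealing(objective, initial, neighbor, t0=1.0, cooling=0.995,
                        steps=2000, seed=0):
    """Generic SA minimizer: always accept improving moves, accept worse
    moves with probability exp(-delta / T), and cool T geometrically."""
    rng = random.Random(seed)
    state, cost = initial, objective(initial)
    best, best_cost = state, cost
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = objective(cand) - cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            state, cost = cand, cost + delta
            if cost < best_cost:
                best, best_cost = state, cost
        t *= cooling
    return best, best_cost

# Toy stand-in for the uncertainty criterion: distance of the final
# waypoint from a target sampling location, plus a path-length penalty.
def toy_objective(path):
    (x1, y1), (x2, y2) = path
    return (x2 - 3.0) ** 2 + (y2 - 4.0) ** 2 + 0.01 * math.hypot(x2 - x1, y2 - y1)

def toy_neighbor(path, rng):
    return [(x + rng.uniform(-0.2, 0.2), y + rng.uniform(-0.2, 0.2))
            for x, y in path]
```

    A real planner would replace the toy objective with the uncertainty map evaluation and constrain neighbors to feasible glider motions under the ambient currents.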

  9. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach

    Directory of Open Access Journals (Sweden)

    Gabriele Ferri

    2015-12-01

    Full Text Available This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided

  10. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach

    Science.gov (United States)

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-01-01

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So

  11. Sampling and Analysis Plan for N-Springs ERA pump-and-treat waste media

    International Nuclear Information System (INIS)

    Stankovich, M.T.

    1996-07-01

    This Sampling and Analysis Plan details the administrative procedures to be used to conduct sampling activities for characterization of spent ion-exchange resin, clinoptilolite, generated from the N-Springs pump-and-treat expedited response action. N-Springs (riverbank seeps) is located in the 100-N Area of the Hanford Site. Groundwater contained in the 100-NR-2 Operable Unit is contaminated with various radionuclides derived from wastewater disposal practices and spills associated with 100-N Reactor Operations

  12. THE PREMISES OF STRATEGIC MARKETING PLANNING IMPLEMENTATION WITHIN SMALL AND MEDIUM SIZED ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Popescu Andrei

    2011-07-01

    Full Text Available The main purpose of the present paper is to identify the framework and the necessary conditions for small and medium sized enterprises (SMEs) to be able to adopt strategic marketing planning. The paper also aims to underline the importance of strategic marketing planning and the manner in which SMEs can adopt, implement and operationalize strategic marketing planning instruments, whose correct understanding and usage ensure the capacity to generate competitive advantage, the key element both from the perspective of fierce competition and from the perspective of the future development of SMEs. Within SMEs the implementation of marketing becomes an evident requirement, mostly due to the relationship that these have with the market, thus leading towards market orientation of activities, a new approach developed by the marketing vision on managing the activities of these types of organizations. Viewed from the marketing perspective, the activities of SMEs, especially the marketing activities, cannot take place randomly. Resource allocation, a characteristic of these types of organizations, and the objectives regarding superior satisfaction of customer needs and maximization of economic efficiency, demand thorough planning and deployment of activities in a sequence that represents the implementation of a previously assumed strategy. Within this framework, strategic marketing planning appears as a complex process employing all scientific instruments that comprise segmentation, positioning and the marketing mix. Utilizing strategic marketing planning within SMEs depends to a large extent on marketing integration, a process directly related to a series of factors such as the nature of the market, development stage, product type, management quality and the influence of the marketing department of the SME. The implications for the marketing activities of SMEs are reflected upon each strategic

  13. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Science.gov (United States)

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately
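    The standard two-sample normal-approximation formula underlying this kind of planning, sketched here as a generic illustration (not the paper's exact expressions), sizes each group from the residual SD of the transformed AUC and the mean difference to detect on that transformed scale:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sample comparison of means on the
    (transformed) scale: n = 2 * (sd * (z_{1-a/2} + z_power) / delta)^2."""
    z = NormalDist().inv_cdf
    za = z(1 - alpha / 2)
    zb = z(power)
    return ceil(2 * (sd * (za + zb) / delta) ** 2)
```

    For example, with a residual SD of 1.0 and a standardized difference of 0.5 at 90% power, the formula gives 85 subjects per group; halving the detectable difference roughly quadruples the requirement, which is why the larger residual variation reported for 13-17 year olds drives up their required sample size.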

  14. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to

  15. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
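    A simplified per-group sample-size sketch for a noninferiority test of a negative binomial rate ratio, in the general spirit of such formulae but not the paper's exact derivation (in particular, it uses a single common follow-up time t, the very simplification the abstract warns can underpower a study when follow-up varies). All parameter values below are hypothetical:

```python
from math import ceil, log
from statistics import NormalDist

def nb_n_per_group(r0, r1, t, k0, k1, margin, alpha=0.025, power=0.80):
    """Approximate per-group n for a noninferiority test of the rate
    ratio r1/r0 against `margin`, with follow-up time t and dispersion
    parameters k0, k1 (Var = mu + k * mu^2, allowed to differ by arm)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    # Per-subject variance contributions to the log rate-ratio estimate.
    v = (1 / (r0 * t) + k0) + (1 / (r1 * t) + k1)
    delta = log(r1 / r0) - log(margin)
    return ceil((za + zb) ** 2 * v / delta ** 2)
```

    Because the dispersion terms enter the variance directly, increasing k0 and k1 inflates the required size, mirroring the paper's point that the dispersion parameter may vary by treatment group.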

  16. Surgical planning of total hip arthroplasty: accuracy of computer-assisted EndoMap software in predicting component size

    International Nuclear Information System (INIS)

    Davila, Jesse A.; Kransdorf, Mark J.; Duffy, Gavan P.

    2006-01-01

    The purpose of our study was to assess the accuracy of computer-assisted templating in the surgical planning of patients undergoing total hip arthroplasty utilizing EndoMap software (Siemens AG, Medical Solutions, Erlangen, Germany). EndoMap software is an electronic program that uses DICOM images to analyze standard anteroposterior radiographs for determination of optimal prosthesis component size. We retrospectively reviewed the preoperative radiographs of 36 patients undergoing uncomplicated primary total hip arthroplasty, utilizing EndoMap software, Version VA20. DICOM anteroposterior radiographs were analyzed using standard manufacturer-supplied electronic templates to determine acetabular and femoral component sizes. No additional clinical information was reviewed. Acetabular and femoral component sizes were assessed by an orthopedic surgeon and two radiologists. The mean estimated component size was compared with the component size documented in operative reports. The mean estimated acetabular component size was 53 mm (range 48-60 mm), 1 mm larger than the mean implanted size of 52 mm (range 48-62 mm). Thirty-one of 36 acetabular component sizes (86%) were accurate within one size. The mean calculated femoral component size was 4 (range 2-7), 1 size smaller than the actual mean component size of 5 (range 2-9). Twenty-six of 36 femoral component sizes (72%) were accurate within one size, and accurate within two sizes in all but four cases (94%). EndoMap software predicted femoral component size well, with 72% within one component size of that used, and 94% within two sizes. Acetabular component size was predicted slightly better, with 86% within one component size and 94% within two component sizes. (orig.)

  17. The potential distributions, and estimated spatial requirements and population sizes, of the medium to large-sized mammals in the planning domain of the Greater Addo Elephant National Park project

    Directory of Open Access Journals (Sweden)

    A.F. Boshoff

    2002-12-01

    Full Text Available The Greater Addo Elephant National Park project (GAENP) involves the establishment of a mega biodiversity reserve in the Eastern Cape, South Africa. Conservation planning in the GAENP planning domain requires systematic information on the potential distributions and estimated spatial requirements, and population sizes of the medium to large-sized mammals. The potential distribution of each species is based on a combination of literature survey, a review of their ecological requirements, and consultation with conservation scientists and managers. Spatial requirements were estimated within 21 Mammal Habitat Classes derived from 43 Land Classes delineated by expert-based vegetation and river mapping procedures. These estimates were derived from spreadsheet models based on forage availability estimates and the metabolic requirements of the respective mammal species, and that incorporate modifications of the agriculture-based Large Stock Unit approach. The potential population size of each species was calculated by multiplying its density estimate with the area of suitable habitat. Population sizes were calculated for pristine, or near pristine, habitats alone, and then for these habitats together with potentially restorable habitats for two park planning domain scenarios. These data will enable (a) the measurement of the effectiveness of the GAENP in achieving predetermined demographic, genetic and evolutionary targets for mammals that can potentially occur in selected park sizes and configurations, (b) decisions regarding acquisition of additional land to achieve these targets to be informed, (c) the identification of species for which targets can only be met through metapopulation management, (d) park managers to be guided regarding the re-introduction of appropriate species, and (e) the application of realistic stocking rates. Where possible, the model predictions were tested by comparison with empirical data, which in general corroborated the
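    The density-times-area aggregation described above can be sketched in a few lines; the habitat classes, areas, and densities below are entirely hypothetical placeholders, not values from the GAENP models:

```python
# Hypothetical habitat areas (km^2) and per-species density estimates
# (individuals per km^2); all numbers are illustrative only.
habitat_area = {"succulent thicket": 420.0, "grassland": 150.0, "riparian": 30.0}

density = {
    "kudu": {"succulent thicket": 2.1, "grassland": 0.8},
    "buffalo": {"grassland": 1.5, "riparian": 3.0},
}

def potential_population(species):
    """Sum density x area over the habitat classes a species can use."""
    return sum(d * habitat_area[h] for h, d in density[species].items())
```

    Restorable habitat scenarios are handled simply by adding those areas into `habitat_area` before re-running the calculation.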

  18. Planning Ahead: Influence of Figure Orientation on Size of Head in Children's Drawings of a Man.

    Science.gov (United States)

    Willatts, Peter; Dougal, Shonagh

    In an investigation of causes of the disproportionate relation between head and body in children's drawings of the human figure, 160 children of 3-10 years of age produced drawings of a man viewed from the front and from the back. It was expected that if planning to include facial features increased the size of the head children drew, then heads…

  19. Sampling and analysis plan (SAP) for WESF drains and TK-100 sump

    International Nuclear Information System (INIS)

    Simmons, F.M.

    1998-01-01

    The intent of this project is to determine whether the Waste Encapsulation and Storage Facility (WESF) floor drain piping and the TK-100 sump are free from contamination. TK-100 is currently used as a catch tank to transfer low level liquid waste from WESF to Tank Farms via B Plant. This system is being modified as part of the WESF decoupling since B Plant is being deactivated. As a result of the 1,1,1-trichloroethane (TCA) discovery in TK-100, the associated WESF floor drains and the pit sump need to be sampled. Breakdown constituents have been reviewed and found to be non-hazardous. There are 29 floor drains that tie into a common header leading into the tank. To prevent high exposure during sampling of the drains, TK-100 will be removed into the B Plant canyon and a new tank will be placed in the pit before any floor drain samples are taken. The sump will be sampled prior to TK-100 removal. A sample of the sludge and any liquid in the sump will be taken and analyzed for TCA and polychlorinated biphenyl (PCB). After the sump has been sampled, the vault floor will be flushed. The flush will be transferred from the sump into TK-100. TK-100 will be moved into B Plant. The vault will then be cleaned of debris and visually inspected. If there is no visual indication of TCA or PCB staining, the vault will be painted and a new tank installed. If there is an indication of TCA or PCB from laboratory analysis or staining, further negotiations will be required to determine a path forward. A total of 8 sets of three 40ml samples will be required for all of the floor drains and sump. The sump set will include one 125ml solid sample. The only analysis required will be for TCA in liquids. PCBs will be checked in sump solids only. The Sampling and Analysis Plan (SAP) is written to provide direction for the sampling and analytical activities of the 29 WESF floor drains and the TK-100 sump. The intent of this plan is to define the responsibilities of the various organizations

  20. Sampling and Analysis Plan for Tank 241-AP-108 Waste in Support of Evaporator Campaign 2000-1

    International Nuclear Information System (INIS)

    RASMUSSEN, J.H.

    2000-01-01

    This Tank Sampling and Analysis Plan (TSAP) identifies sample collection, laboratory analysis, quality assurance/quality control (QA/QC), and reporting objectives for the characterization of tank 241-AP-108 waste. Technical bases for these objectives are specified in the 242-A Evaporator Data Quality Objectives (Bowman 2000 and Von Bargen 1998) and 108-AP Tank Sampling Requirements in Support of Evaporator Campaign 2000-1 (Le 2000). Evaporator campaign 2000-1 will process waste from tank 241-AP-108 in addition to that from tank 241-AP-107. Characterization results will be used to support the evaporator campaign currently planned for early fiscal year 2000. No other needs (or issues) requiring data for this tank waste apply to this sampling event

  1. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
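    Two of the estimators compared above have simple closed forms: the classic Chao1, S_obs + f1²/(2·f2), where f1 and f2 count species seen exactly once and twice, and the first-order jackknife (Jack1), S_obs + Q1·(m-1)/m, where Q1 counts species found in exactly one of m plots. A minimal sketch of both (textbook forms, not the study's software):

```python
def chao1(abundances):
    """Classic Chao1 from per-species abundances; falls back to the
    bias-corrected form f1*(f1-1)/2 when no doubletons are observed."""
    counts = [a for a in abundances if a > 0]
    s_obs = len(counts)
    f1 = sum(1 for a in counts if a == 1)
    f2 = sum(1 for a in counts if a == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

def jack1(incidence):
    """First-order jackknife from incidence data: a list of per-plot
    species sets. Q1 = number of species occurring in exactly one plot."""
    m = len(incidence)
    occurrences = {}
    for plot in incidence:
        for sp in plot:
            occurrences[sp] = occurrences.get(sp, 0) + 1
    q1 = sum(1 for v in occurrences.values() if v == 1)
    return len(occurrences) + q1 * (m - 1) / m
```

    For example, abundances [1, 1, 2, 3, 5] give S_obs = 5, f1 = 2, f2 = 1, so Chao1 estimates 7 species.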

  2. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
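    For a continuous parameter trace, the standard ESS the abstract builds on is n / (1 + 2·Σρ_k), summing positive-lag autocorrelations; the paper's contribution is extending this idea to tree topologies (e.g., via distances between sampled trees). A sketch of the scalar version, truncating the sum at the first non-positive autocorrelation estimate:

```python
def autocorr(x, lag):
    """Sample autocorrelation of a trace at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

def effective_sample_size(x):
    """ESS = n / (1 + 2 * sum of autocorrelations), truncated at the
    first non-positive estimate to keep the sum stable."""
    n = len(x)
    s = 0.0
    for lag in range(1, n):
        rho = autocorr(x, lag)
        if rho <= 0:
            break
        s += rho
    return n / (1 + 2 * s)
```

    A well-mixed chain yields an ESS near the number of samples, while a slowly mixing one can fall far below the rule-of-thumb threshold of 200 even with thousands of samples.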

  3. Sampling and analysis plan for assessment of beryllium in soils surrounding TA-40 building 15

    Energy Technology Data Exchange (ETDEWEB)

    Ruedig, Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-19

    Technical Area (TA) 40 Building 15 (40-15) is an active firing site at Los Alamos National Laboratory. The weapons facility operations (WFO) group plans to build an enclosure over the site in 2017, so that test shots may be conducted year-round. The enclosure project is described in PRID 16P-0209. 40-15 is listed on LANL OSH-ISH’s beryllium inventory, which reflects the potential for beryllium in/on soils and building surfaces at 40-15. Some areas in and around 40-15 have previously been sampled for beryllium, but past sampling efforts did not achieve complete spatial coverage of the area. This Sampling and Analysis Plan (SAP) investigates the area surrounding 40-15 via 9 deep (≥1-ft.) soil samples and 11 shallow (6-in.) soil samples. These samples will fill the spatial data gaps for beryllium at 40-15, and will be used to support OSH-ISH’s final determination of 40-15’s beryllium registry status. This SAP has been prepared by the Environmental Health Physics program in consultation with the Industrial Hygiene program. Industrial Hygiene is the owner of LANL’s beryllium program, and will make a final determination with regard to the regulatory status of beryllium at 40-15.

  4. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
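    Two of the effect sizes compared above are easy to sketch: Cohen's d with a pooled SD, and a probability-of-superiority estimate in the spirit of the common-language/A-type measures, P(X > Y) + 0.5·P(X = Y) over all cross-group pairs. These are textbook forms, not the exact estimators of the cited papers:

```python
from math import sqrt

def cohens_d(x, y):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

def prob_superiority(x, y):
    """P(X > Y) + 0.5 * P(X == Y), counted over all cross-group pairs."""
    wins = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    return wins / (len(x) * len(y))
```

    Because `prob_superiority` depends only on pairwise orderings, it is unaffected by monotone transformations of the data, which is one intuition for the robustness of A-type measures to nonnormality.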

  5. Accuracy and Effort of Interpolation and Sampling: Can GIS Help Lower Field Costs?

    Directory of Open Access Journals (Sweden)

    Greg Simpson

    2014-12-01

    Full Text Available Sedimentation is a problem for all reservoirs in the Black Hills of South Dakota. Before working on sediment removal, a survey on the extent and distribution of the sediment is needed. Two sample lakes were used to determine which of three interpolation methods gave the most accurate volume results. A secondary goal was to see if fewer samples could be taken while still providing similar results. The smaller samples would mean less field time and thus lower costs. Subsamples of 50%, 33% and 25% were taken from the total samples and evaluated for the lowest Root Mean Squared Error values. Throughout the trials, the larger sample sizes generally showed better accuracy than smaller samples. Graphing the sediment volume estimates of the full sample, 50%, 33% and 25% showed little improvement after a sample of approximately 40%–50% when comparing the asymptote of the separate samples. When we used smaller subsamples the predicted sediment volumes were normally greater than the full sample volumes. It is suggested that when planning future sediment surveys, workers plan on gathering data approximately every 5.21 meters. These sample sizes can be cut in half and still retain relative accuracy if time savings are needed. Volume estimates may slightly suffer with these reduced sample sizes, but the field work savings can be of benefit. Results from these surveys are used in prioritization of available funds for reclamation efforts.
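    The subsample-and-score workflow described above can be illustrated in one dimension; the study used GIS interpolation over lake bathymetry, so treat this piecewise-linear transect version as a simplified stand-in:

```python
import random
from math import sqrt

def interp(xs, ys, x):
    """Piecewise-linear interpolation; xs must be sorted ascending.
    Values outside the sampled range are clamped to the nearest endpoint."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] * (1 - w) + ys[i] * w

def subsample_rmse(xs, ys, fraction, seed=0):
    """RMSE of predictions built from a random subsample, scored
    against the full set of observations."""
    rng = random.Random(seed)
    k = max(2, int(fraction * len(xs)))
    idx = sorted(rng.sample(range(len(xs)), k))
    sub_x = [xs[i] for i in idx]
    sub_y = [ys[i] for i in idx]
    errs = [interp(sub_x, sub_y, x) - y for x, y in zip(xs, ys)]
    return sqrt(sum(e * e for e in errs) / len(errs))
```

    Running this across fractions (e.g., 0.25, 0.33, 0.5, 1.0) reproduces the kind of RMSE-versus-sample-size curve the survey used to pick an acceptable field spacing.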

  6. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Whereas TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group: those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in lower rates of false positives and false negatives, as well as better accuracy and sensitivity for classifying SNPs, when compared with TDT. By using SPRT, data with small sample sizes become usable for an accurate association analysis.
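
    The three-way decision the abstract describes is the core of Wald's SPRT: a running log-likelihood ratio compared against two thresholds. The sketch below uses a simple binomial transmission model; the p0/p1 values and error rates are assumptions for illustration, not the paper's exact model:

```python
from math import log

def sprt_decision(transmissions, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT on a stream of 0/1 observations (1 = risk allele
    transmitted). H0: transmission probability p0; H1: p1."""
    upper = log((1 - beta) / alpha)   # crossing it: accept H1
    lower = log(beta / (1 - alpha))   # crossing it: accept H0
    llr = 0.0
    for x in transmissions:
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "associated"
        if llr <= lower:
            return "not associated"
    return "keep sampling"            # the SPRT's third group
```

    SNPs left in the "keep sampling" group are exactly those for which the evidence is still inconclusive at the chosen error rates.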

  7. Sampling and Analysis Plan for Supplemental Environmental Project: Aquatic Life Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Berryhill, Jesse Tobias [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gaukler, Shannon Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    As part of a settlement agreement for nuclear waste incidents in 2014, several supplemental environment projects (SEPs) were initiated at Los Alamos National Laboratory (LANL or the Laboratory) between the U.S. Department of Energy and the state of New Mexico. One SEP from this agreement consists of performing aquatic life surveys and will be used to assess the applicability of generic ambient water-quality criteria (AWQC) for aquatic life. AWQC are generic criteria developed by the U.S. Environmental Protection Agency (EPA) to cover a broad range of aquatic species; they are not unique to a specific region or state. AWQC are established from compilations of toxicity data, called species sensitivity distributions (SSDs), which are determined by LC50 (lethal concentration for 50% of the organisms studied) acute toxicity experiments for chemicals of interest. It is of interest to determine whether aquatic species inhabiting waters on the Pajarito Plateau are adequately protected by the current generic AWQC. This study will determine which aquatic species are present in ephemeral, intermittent, and perennial waters within LANL boundaries and in reference waters adjacent to LANL. If the species identified in these waters do not generally represent the species used in the SSDs, then the SSDs may need to be modified and the AWQC may need to be updated. This sampling and analysis plan details the sampling methodology, surveillance locations, temporal scheduling, and analytical approaches that will be used to complete the aquatic life surveys. A significant portion of this sampling and analysis plan was formalized by reference to Appendix E: SEP Aquatic Life Surveys DQO (Data Quality Objectives).

  8. ANALYSING ACCEPTANCE SAMPLING PLANS BY MARKOV CHAINS

    Directory of Open Access Journals (Sweden)

    Mohammad Mirabi

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this research, a Markov analysis of acceptance sampling plans in a single stage and in two stages is proposed, based on the quality of the items inspected. At each stage of this policy, if the number of defective items in a sample of inspected items is more than the upper threshold, the batch is rejected. However, the batch is accepted if the number of defective items is less than the lower threshold. When the number of defective items falls between the upper and lower thresholds, the decision-making process continues: further samples are collected and inspected. The primary objective is to determine the optimal values of the upper and lower thresholds using a Markov process, so as to minimise the total cost associated with the batch acceptance policy. A solution method is presented, along with a numerical demonstration of the application of the proposed methodology.

    AFRIKAANSE OPSOMMING (Afrikaans abstract, translated): In this research, a Markov analysis is performed of acceptance sampling plans carried out in a single stage or in two stages, depending on the quality of the inspected items. If the first sample shows that the number of defective items exceeds an upper limit, the batch is rejected. If the first sample shows that the number of defective items is below a lower limit, the batch is accepted. If the first sample shows that the number of defective items lies between the upper and lower limits, the decision-making process continues and further samples are taken. The primary goal is to determine the optimal values of the upper and lower limits using a Markov process, so that the total cost associated with the process can be minimised. A solution is then presented, together with a numerical example of the application of the proposed solution.
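
    The accept/reject/continue split at each stage can be made concrete with a binomial model. The function below is an illustrative sketch of one stage's outcome probabilities, not the paper's Markov-chain cost formulation:

```python
from math import comb

def stage_outcome_probs(n, p, lower, upper):
    """For a sample of n items with per-item defect probability p, return
    the probabilities that the batch is accepted (defectives < lower),
    rejected (defectives > upper), or passed on for further sampling
    (defectives between the thresholds, inclusive)."""
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    accept = sum(pmf[:lower])        # k = 0 .. lower-1
    reject = sum(pmf[upper + 1:])    # k = upper+1 .. n
    return accept, reject, 1.0 - accept - reject
```

    In the Markov formulation, the "continue" probability is what links one inspection stage to the next; the thresholds are then chosen to minimise expected total cost.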

  9. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null-hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful for avoiding such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  10. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
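
    The single-drug optimization the abstract describes can be sketched as a grid search over the per-arm sample size, trading statistical power against trial cost. All the economic numbers below, and the use of a fixed known effect instead of a Bayesian prior, are simplifying assumptions for illustration:

```python
from statistics import NormalDist

def expected_profit(n_per_arm, delta=0.3, sigma=1.0, revenue=500e6,
                    cost_per_subject=20e3, alpha=0.025):
    """Illustrative expected profit of a two-arm Phase III trial: revenue
    is earned only on a significant result (one-sided alpha), with power
    from the normal approximation at a *known* standardized effect delta.
    The paper's model instead integrates power over a Bayesian prior."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    power = NormalDist().cdf((delta / sigma) * (n_per_arm / 2) ** 0.5 - z_alpha)
    return power * revenue - cost_per_subject * 2 * n_per_arm

# grid search for the profit-maximizing per-arm sample size
best_n = max(range(10, 2001, 10), key=expected_profit)
```

    Beyond the optimum, extra subjects buy almost no additional power, so marginal cost overtakes marginal expected revenue; this is the economic trade-off that classical power calculations ignore.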

  11. Strategic Planning and Business Performance of Micro, Small and Medium-Sized Enterprises

    Directory of Open Access Journals (Sweden)

    Skokan Karel

    2013-12-01

    Full Text Available

    This paper deals with issues of strategic management, particularly strategic planning and its beneficial effect on the overall performance of businesses. It is based on empirical results of the original research study "Adaptability of Enterprises to Contemporary Economic Conditions in Years 2007-2012", performed via a questionnaire survey in three rounds during the years 2011, 2012, and 2013. The analyses presented in the paper were conducted on the second-round sample of 677 organizations operating mostly in the SME sector in the Czech and Slovak Republics. The interdependence between the level of strategic planning (existence of a strategy in the form of a written document, and its extent) and enterprise performance criteria (turnover, costs, profit, EVA, investments, period of arranged contracts) is examined with the use of four hypotheses. The results are discussed; the outcome is an apparent positive impact of a full strategic document on the performance criteria of the businesses.

  12. FIRM: Sampling-based feedback motion-planning under motion uncertainty and imperfect measurements

    KAUST Repository

    Agha-mohammadi, A.-a.; Chakravorty, S.; Amato, N. M.

    2013-01-01

    In this paper we present feedback-based information roadmap (FIRM), a multi-query approach for planning under uncertainty which is a belief-space variant of probabilistic roadmap methods. The crucial feature of FIRM is that the costs associated with the edges are independent of each other, and in this sense it is the first method that generates a graph in belief space that preserves the optimal substructure property. From a practical point of view, FIRM is a robust and reliable planning framework. It is robust since the solution is a feedback and there is no need for expensive replanning. It is reliable because accurate collision probabilities can be computed along the edges. In addition, FIRM is a scalable framework, where the complexity of planning with FIRM is a constant multiplier of the complexity of planning with PRM. In this paper, FIRM is introduced as an abstract framework. As a concrete instantiation of FIRM, we adopt stationary linear quadratic Gaussian (SLQG) controllers as belief stabilizers and introduce the so-called SLQG-FIRM. In SLQG-FIRM we focus on kinematic systems and then extend to dynamical systems by sampling in the equilibrium space. We investigate the performance of SLQG-FIRM in different scenarios. © The Author(s) 2013.

  13. FIRM: Sampling-based feedback motion-planning under motion uncertainty and imperfect measurements

    KAUST Repository

    Agha-mohammadi, A.-a.

    2013-11-15

    In this paper we present feedback-based information roadmap (FIRM), a multi-query approach for planning under uncertainty which is a belief-space variant of probabilistic roadmap methods. The crucial feature of FIRM is that the costs associated with the edges are independent of each other, and in this sense it is the first method that generates a graph in belief space that preserves the optimal substructure property. From a practical point of view, FIRM is a robust and reliable planning framework. It is robust since the solution is a feedback and there is no need for expensive replanning. It is reliable because accurate collision probabilities can be computed along the edges. In addition, FIRM is a scalable framework, where the complexity of planning with FIRM is a constant multiplier of the complexity of planning with PRM. In this paper, FIRM is introduced as an abstract framework. As a concrete instantiation of FIRM, we adopt stationary linear quadratic Gaussian (SLQG) controllers as belief stabilizers and introduce the so-called SLQG-FIRM. In SLQG-FIRM we focus on kinematic systems and then extend to dynamical systems by sampling in the equilibrium space. We investigate the performance of SLQG-FIRM in different scenarios. © The Author(s) 2013.

  14. Sampling and Analysis Plan for the 105-N Basin Water

    International Nuclear Information System (INIS)

    R.O. Mahood

    1997-01-01

    This sampling and analysis plan defines the strategy and the field and laboratory methods that will be used to characterize 105-N Basin water. The water will be shipped to the 200 Area Effluent Treatment Facility (ETF) for treatment and disposal as part of N Reactor deactivation. These analyses are necessary to ensure that the water will meet the acceptance criteria of the ETF, as established in the Memorandum of Understanding for storage and treatment of water from N-Basin (Appendix A), and the characterization requirements for 100-N Area water provided in a letter from ETF personnel (Appendix B).

  15. The role of strategic information systems planning in a typical Small or Medium-sized Enterprise

    Directory of Open Access Journals (Sweden)

    Nicky Meyer

    2013-12-01

    Full Text Available

    Little is known about how Strategic Information Systems Planning (SISP) and small and medium-sized enterprises (SMEs) are linked in a developing country. SISP has also been a concern for many in the Information Technology (IT) industry and in IT-based businesses as a whole. This research seeks to address this shortcoming by exploring what constitutes a typical SME, what role Information Systems (ISs) play in SMEs, and what role SISP plays in SMEs. Consequently, a Delphi panel comprising a questionnaire in the first phase and an interview in the second phase was employed. Some correlation was found with the literature, with the exception of the role of IS in SMEs, whether SISP is an on-going activity, and the fact that SISP can be outsourced. Some new facts were discovered, especially on the topic of outsourcing. Keywords: company strategy; strategic information systems planning; small and medium-sized enterprises; SME sustainability; stakeholders and management; Viewpoint Training and Consulting

  16. DETERMINATION OF THE INFLUENCING FACTORS TO MENU PLANNING IN BIG SIZED FOOD&BEVERAGE FIRMS (SAMPLE OF TURKEY)

    OpenAIRE

    SARIOĞLAN, Mehmet

    2016-01-01

    Abstract: Individuals who spend time in destinations far from home for reasons such as business, education, health, vacation, and meetings have developed the habit of eating out. As the rate at which individuals eat out increases day by day, big-sized Food&Beverage firms have emerged. Big-sized Food&Beverage firms play a supportive role in the production factors of public, private or corporate ente...

  17. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

  18. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    International Nuclear Information System (INIS)

    Haugboel, Steven; Pinborg, Lars H.; Arfan, Haroon M.; Froekjaer, Vibe M.; Svarer, Claus; Knudsen, Gitte M.; Madsen, Jacob; Dyrby, Tim B.

    2007-01-01

    To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach; further, to estimate the sample size needed to detect regional differences between two groups; and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. In cortical brain regions with a high density of 5-HT2A receptors, the outcome parameter (binding potential, BP1) showed high reproducibility, with a median difference between the two measurements of 6% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high receptor density is approximately 27, whereas for regions with low receptor binding the required sample size is substantially higher. This study demonstrates that [18F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT2A receptor density. Moreover, partial volume correction considerably reduces the sample size required to detect regional changes between groups. (orig.)
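
    The link between measurement variability and required sample size in studies like this follows the standard two-sample normal-approximation formula. A sketch (the coefficient-of-variation inputs are assumptions for illustration, not the paper's PET estimates):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(cv, rel_diff=0.20, alpha=0.05, power=0.80):
    """Subjects per group needed to detect a relative group difference
    rel_diff when between-subject variability has coefficient of
    variation cv, via the two-sample normal-approximation formula
    n = 2 * ((z_{1-alpha/2} + z_{power}) * cv / rel_diff)^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * cv / rel_diff) ** 2)

# doubling the variability (e.g. low-binding regions) roughly
# quadruples the requirement
low_var, high_var = n_per_group(0.20), n_per_group(0.40)
```

    This quadratic dependence on variability is why partial volume correction, by shrinking the between-subject spread, cuts the required sample size so sharply.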

  19. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  20. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
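
    The L-moment fit of the GPD mentioned above can be sketched in a few lines. This uses the two-parameter GPD for threshold excesses in Hosking's k/sigma parametrisation; the function names and test data are illustrative, not the study's:

```python
def gpd_fit_lmoments(excesses):
    """Fit a two-parameter GPD, F(x) = 1 - (1 - k*x/sigma)**(1/k), to
    threshold excesses by matching the first two sample L-moments:
    k = l1/l2 - 2 and sigma = (1 + k) * l1."""
    x = sorted(excesses)
    n = len(x)
    b0 = sum(x) / n
    # unbiased estimate of the second probability-weighted moment
    b1 = sum(i * v for i, v in enumerate(x)) / (n * (n - 1))
    l1, l2 = b0, 2 * b1 - b0
    k = l1 / l2 - 2
    return k, (1 + k) * l1

def gpd_quantile(prob, k, sigma):
    """GPD quantile at non-exceedance probability prob (assumes k != 0;
    the k -> 0 limit is the exponential distribution)."""
    return (sigma / k) * (1 - (1 - prob) ** k)
```

    Unlike plotting-position estimates, the fitted quantile function can be evaluated at return periods beyond the largest observation, which is what removes the small-sample underestimation.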

  1. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    Science.gov (United States)

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

    Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, comparing the socio-economic status (SES) and risk behaviors of the samples to each other, to known AIDS cases, and to the general population. RDS produced a sample with wider inclusion of lower-SES men than snowball sampling or TLS, a finding of public health significance given that the majority of AIDS cases reported among MSM in the state were of low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  2. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as the source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations and a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  3. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    International Nuclear Information System (INIS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-01-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as the source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations and a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  4. Social class and family size as determinants of attributed machismo, femininity, and family planning: a field study in two South American communities.

    Science.gov (United States)

    Nicassio, P M

    1977-12-01

    A study was conducted to determine the way in which stereotypes of machismo and femininity are associated with family size and perceptions of family planning. A total of 144 adults, male and female, from a lower class and an upper middle class urban area in Colombia were asked to respond to photographs of Colombian families varying in size and state of completeness. The study illustrated the critical role of sex-role identity and sex-role organization as variables having an effect on fertility. The lower-class respondents described parents in the photographs as significantly more macho or feminine because of their children than the upper-middle-class subjects did. Future research should attempt to measure when this drive to sex-role identity is strongest, i.e., when men and women are most driven to reproduce in order to "prove" themselves. Both lower- and upper-middle-class male groups considered male dominance in marriage to be directly linked with family size. Perceptions of the use of family planning decreased linearly with family size for both social groups, although the lower-class females attributed more family planning to spouses of large families than upper-middle-class females. It is suggested that further research deal with the ways in which constructs of machismo and male dominance vary between the sexes and among socioeconomic groups and the ways in which they impact on fertility.

  5. Overview of the Mars Sample Return Earth Entry Vehicle

    Science.gov (United States)

    Dillman, Robert; Corliss, James

    2008-01-01

    NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.

  6. A Decision Support Tool for Planning the Sampling of Tank 19

    International Nuclear Information System (INIS)

    Edwards, T.B.

    2002-01-01

    The Statistical Consulting Section (SCS) of the Savannah River Technology Center (SRTC) was asked to develop a decision support system (DSS) to aid in planning the process to characterize the heel of Tank 19. This characterization is to be based on samples of the tank heel, and the DSS is to be used to help determine the number of samples that might be needed to provide a meaningful characterization, based upon assumptions for the relevant variations and volumes. The objective of this report is to describe the framework used to develop the DSS and to document its calculations. The DSS was developed as a Microsoft Excel spreadsheet. Detailed input and output for two test cases are provided in the appendix as part of the documentation of this spreadsheet.
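
    A DSS of this kind typically answers one core question: how many samples bring the estimated mean within a target margin of error? A minimal sketch of that calculation (the normal-approximation formula is generic, not taken from the Tank 19 spreadsheet, and the inputs are hypothetical):

```python
from math import ceil
from statistics import NormalDist

def samples_needed(rel_std_dev, rel_error, confidence=0.95):
    """Number of samples so that the sample mean falls within rel_error
    (relative) of the true value with the given confidence, for an
    assumed relative standard deviation rel_std_dev between samples."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * rel_std_dev / rel_error) ** 2)
```

    Varying the assumed variability and the acceptable error in such a function reproduces the what-if behavior a planning spreadsheet provides.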

  7. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  8. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that permutation testing with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences; 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity, and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%) but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
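
    The sample-size/PPV link in the abstract follows directly from Bayes' rule: PPV = power * prior / (power * prior + alpha * (1 - prior)). A sketch, where the prior probability that a tested effect is real is an assumed input rather than anything estimated in the paper:

```python
def ppv(power, alpha=0.05, prior=0.2):
    """Positive predictive value of a significant finding: the probability
    the effect is real, given the test's power, its alpha level, and an
    assumed prior probability that a tested effect exists."""
    return power * prior / (power * prior + alpha * (1 - prior))
```

    With power near 2%, as reported for the smallest samples, most "significant" findings are false positives even under a generous prior; raising power by enlarging the sample is what pulls the PPV up.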

  9. Sampling and analysis plan for the 107-N Basin recirculation building liquid/sediment

    International Nuclear Information System (INIS)

    Thompson, W.S.

    1997-02-01

    This sampling and analysis plan (SAP) defines the strategy and the field and laboratory methods that will be used to characterize the liquids and sediment within the following four 100-N buildings/areas: 1310-N valve pit area inside the Radioactive Chemical Waste Treatment Pump House (silo); 1314-N Waste Pump (Overflow) Tank at the Liquid Waste Disposal Station; 105-N Lift Station pump well and valve pit areas inside the 105-N Reactor Building; and 107-N Basin Recirculation Building. This characterization activity is being performed in support of the work scope identified in the N Reactor Deactivation Program Plan and the sampling strategy in the DQO. The characterization data will be used for comparison to decontamination and decommissioning (D and D) endpoint acceptance criteria in preparation for turnover of the above-listed facilities to the D and D project. The data will also be used for waste acceptance criteria evaluation in the selection of the appropriate disposition option for the liquid and sediment currently contained in the facilities.

  10. Factors that influence planning for physical activity among workers in small- and medium-sized enterprises.

    Science.gov (United States)

    Kawahara, Sawako; Tadaka, Etsuko; Okochi, Ayako

    2018-06-01

    Physical activity (PA) is necessary for improving the health of workers in small- to medium-sized enterprises (SMEs). However, behavioral changes conducive to PA are often difficult to achieve despite intentions. Because intention to perform PA does not always translate to action, proper planning may be critical for achieving PA. In this study, we aimed to identify factors related to planning for PA among workers in SMEs because this is one population that has been identified as being at higher risk for lifestyle-related diseases in Japan. Participants completed a series of validated questionnaires. Of 353 valid responses, 226 individuals (149 men; aged 47.5 ± 8.7 years) stated their intention to perform PA. Multiple regression analysis indicated that a higher PA planning score was significantly associated with higher self-efficacy for PA (p < 0.001), higher risk perception regarding inactivity (p = 0.012), and greater knowledge of information about PA community services (p = 0.019). We therefore recommend that self-efficacy, risk perception, and knowledge of PA community services be enhanced in the daily working lives of workers at their workplaces. In this way, workers can plan health behavioral changes in a supportive environment, drawing upon available services, supports, and other resources.

  11. Seeps and springs sampling and analysis plan for the environmental monitoring plan for Waste Area Grouping 6 at Oak Ridge National Laboratory, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    This Sampling and Analysis Plan addresses the monitoring, sampling, and analysis activities that will be conducted at seeps and springs and at two french drain outlets in support of the Environmental Monitoring Plan for Waste Area Grouping (WAG) 6. WAG 6 is a shallow-land-burial disposal facility for low-level radioactive waste at Oak Ridge National Laboratory, a research facility owned by the US Department of Energy and operated by Martin Marietta Energy Systems, Inc. Initially, sampling will be conducted at as many as 15 locations within WAG 6 (as many as 13 seeps and 2 french drain outlets). After evaluating the results obtained and reviewing the observations made by field personnel during the first round of sampling, several seeps and springs will be chosen as permanent monitoring points, together with the two french drain outlets. Baseline sampling of these points will then be conducted quarterly for 1 year (i.e., four rounds of sampling after the initial round). The samples will be analyzed for various geochemical, organic, inorganic, and radiological parameters. Permanent sampling points having suitable flow rates and conditions may be outfitted with automatic flow-monitoring equipment. The results of the sampling and flow-monitoring efforts will help to quantify flux moving across the ungauged perimeter of the site and will help to identify changes in releases from the contaminant sources.

  12. Seeps and springs sampling and analysis plan for the environmental monitoring plan for Waste Area Grouping 6 at Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    1994-08-01

    This Sampling and Analysis Plan addresses the monitoring, sampling, and analysis activities that will be conducted at seeps and springs and at two french drain outlets in support of the Environmental Monitoring Plan for Waste Area Grouping (WAG) 6. WAG 6 is a shallow-land-burial disposal facility for low-level radioactive waste at Oak Ridge National Laboratory, a research facility owned by the US Department of Energy and operated by Martin Marietta Energy Systems, Inc. Initially, sampling will be conducted at as many as 15 locations within WAG 6 (as many as 13 seeps and 2 french drain outlets). After evaluating the results obtained and reviewing the observations made by field personnel during the first round of sampling, several seeps and springs will be chosen as permanent monitoring points, together with the two french drain outlets. Baseline sampling of these points will then be conducted quarterly for 1 year (i.e., four rounds of sampling after the initial round). The samples will be analyzed for various geochemical, organic, inorganic, and radiological parameters. Permanent sampling points having suitable flow rates and conditions may be outfitted with automatic flow-monitoring equipment. The results of the sampling and flow-monitoring efforts will help to quantify flux moving across the ungauged perimeter of the site and will help to identify changes in releases from the contaminant sources.

  13. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background: Phenotype error causes a reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results: Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large, while the cost of misclassifying an affected individual as a control approaches 0. Conclusion: Our work enables researchers to quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
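
    The power computation described above, evaluating a noncentral chi-square distribution at the central critical value, can be sketched in pure Python for the df = 2 case of a 2 × 3 case/control-by-genotype table (the noncentrality values below are illustrative, not taken from the paper):

```python
import math

def central_chi2_cdf_even(x, df):
    # Central chi-square CDF for even df (df = 2m):
    # P(X <= x) = 1 - exp(-x/2) * sum_{i=0}^{m-1} (x/2)^i / i!
    term, s = 1.0, 0.0
    for i in range(df // 2):
        s += term
        term *= (x / 2) / (i + 1)
    return 1.0 - math.exp(-x / 2) * s

def noncentral_chi2_cdf(x, df, ncp, terms=200):
    # Poisson(ncp/2)-weighted mixture of central chi-square CDFs.
    total = 0.0
    weight = math.exp(-ncp / 2)        # Poisson pmf at j = 0
    for j in range(terms):
        total += weight * central_chi2_cdf_even(x, df + 2 * j)
        weight *= (ncp / 2) / (j + 1)  # advance Poisson pmf to j + 1
    return total

def asymptotic_power(ncp, alpha=0.05):
    # df = 2 for a 2x3 genotype table; for df = 2 the central
    # critical value is -2*ln(alpha) exactly, since the CDF is 1 - exp(-x/2).
    crit = -2.0 * math.log(alpha)
    return 1.0 - noncentral_chi2_cdf(crit, df=2, ncp=ncp)

# Phenotype misclassification shrinks the noncentrality parameter,
# so power falls; compare an ncp of 10 with an attenuated ncp of 5.
print(asymptotic_power(10.0), asymptotic_power(5.0))
```

    At ncp = 0 the function returns exactly alpha, which is a useful sanity check; in practice one would compute ncp from the genotype frequencies, prevalence, and misclassification rates as the paper derives.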

  14. Groundwater Quality Sampling and Analysis Plan for Environmental Monitoring Waste Area Grouping 6 at Oak Ridge National Laboratory. Environmental Restoration Program

    International Nuclear Information System (INIS)

    1995-09-01

    This Sampling and Analysis Plan addresses groundwater quality sampling and analysis activities that will be conducted in support of the Environmental Monitoring Plan for Waste Area Grouping (WAG) 6. WAG 6 is a shallow-land-burial disposal facility for low-level radioactive waste at the Oak Ridge National Laboratory, a research facility owned by the US Department of Energy and managed by Martin Marietta Energy Systems, Inc. (Energy Systems). Groundwater sampling will be conducted by Energy Systems at 45 wells within WAG 6. The samples will be analyzed for various organic, inorganic, and radiological parameters. The information derived from the groundwater quality monitoring, sampling, and analysis will aid in evaluating relative risk associated with contaminants migrating off-WAG, and also will fulfill Resource Conservation and Recovery Act (RCRA) interim permit monitoring requirements. The sampling steps described in this plan are consistent with the steps that have previously been followed by Energy Systems when conducting RCRA sampling.

  15. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    Science.gov (United States)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying sizes was investigated. Square samples, with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples in relation to their sizes and impact energy were analyzed. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, accounting for impact energy and sample size.
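
    One-dimensional Fickian moisture uptake in a plate, the model family the authors fit, has a standard series solution. A minimal sketch, with purely illustrative diffusivity and thickness values rather than the paper's measured parameters:

```python
import math

def fickian_uptake(t, D, h, terms=200):
    """Relative moisture uptake M(t)/M_inf for a plate (Fick's second law).

    t: time (s); D: diffusion coefficient (mm^2/s); h: plate thickness (mm).
    Series: 1 - (8/pi^2) * sum_n exp(-(2n+1)^2 pi^2 D t / h^2) / (2n+1)^2
    """
    s = sum(
        math.exp(-((2 * n + 1) ** 2) * math.pi ** 2 * D * t / h ** 2)
        / (2 * n + 1) ** 2
        for n in range(terms)
    )
    return 1.0 - (8.0 / math.pi ** 2) * s

# Illustrative values only: D = 1e-7 mm^2/s, h = 2 mm plate.
D, h = 1e-7, 2.0
for days in (0, 10, 100, 1000):
    print(days, fickian_uptake(days * 86400.0, D, h))
```

    Impact damage would enter such a model through an effective (larger) D and a higher saturation level, which is where the paper's impact-energy and sample-size dependence comes in.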

  16. SU-F-T-528: Relationship Between Tumor Size and Plan Quality Using FFF and Non-FFF Modes in Rapidarc

    Energy Technology Data Exchange (ETDEWEB)

    Chen, F [Koo Foundation Sun Yat-Sen Cancer Center, Taipei, Taiwan (China)]

    2016-06-15

    Purpose: For a given PTV dose, beam-on time is shorter in the FFF mode than in the non-FFF mode because of the higher MU/min. Larger tumors usually require more complex intensity modulation, which might affect plan quality and total MU. We investigated the relationship between PTV size and plan quality using FFF and non-FFF modes. Methods: Two different PTV volumes (PTV and PTV + 1 cm margin) were drawn in brain, lung, and liver. Plans with 3 full to 7 partial arcs (RapidArc) at 6 MV and 1400 MU/min were studied. Plan quality was evaluated by: (a) DVH for PTV and normal tissues, (b) total MU and beam-on time, and (c) passing rate for IMRT plan QA. Results: For the same PTV coverage, the DVH for normal tissue was the same or slightly lower in the FFF mode compared with non-FFF. Total MU was 13% higher in FFF than non-FFF in the 3-arc, 7 Gy treatment, but the difference became smaller when the arc number increased to 6–7 for 10–24 Gy. Larger PTVs did not affect the difference in total MU. FFF required a shorter beam-on time; the FFF/non-FFF beam-on time ratio was 0.34 to 0.88 for the 7- and 3-arc plans, respectively. For larger PTVs, the ratio increased to 0.45–0.90. The ratio of total MU for large PTVs was 3–8% lower in the non-FFF plans. Despite the small difference in MU, beam-on time was 1.1 to 1.6 times longer in the 3- and 7-arc non-FFF plans. Plan verification showed similar gamma index passing rates. Conclusion: While total MU was similar for FFF and non-FFF modes, beam-on time was shorter in the FFF treatment. The advantage of FFF was greater in treatments with high dose per fraction using more arcs. For doses less than 10 Gy, tumor size did not affect the relationship between total MU and beam-on time for FFF and non-FFF modes.
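
    The beam-on time comparison follows directly from beam-on time = MU / dose rate. A minimal sketch, using the abstract's 1400 MU/min FFF rate, a typical 600 MU/min non-FFF rate (an assumption; the abstract does not state the non-FFF rate), and the reported ~13% FFF MU overhead with a hypothetical plan MU:

```python
def beam_on_time_min(total_mu, dose_rate_mu_per_min):
    # Beam-on time in minutes, ignoring gantry and control overhead.
    return total_mu / dose_rate_mu_per_min

# Hypothetical plan: 2000 MU non-FFF; FFF needs ~13% more MU
# but delivers at the higher dose rate.
mu_nonfff = 2000.0
mu_fff = 1.13 * mu_nonfff
t_nonfff = beam_on_time_min(mu_nonfff, 600.0)   # assumed non-FFF dose rate
t_fff = beam_on_time_min(mu_fff, 1400.0)        # FFF dose rate from the abstract
print(round(t_fff / t_nonfff, 2))  # prints 0.48
```

    The resulting ratio of about 0.48 sits inside the 0.34–0.88 range the authors report, which is why the FFF time advantage persists even when FFF plans need somewhat more MU.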

  17. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    Science.gov (United States)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how distinctively different photochemical activity and different emission sources during the day and the night affect the chemical composition of the PM size ranges and, subsequently, how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5, PM2.5-1, PM1-0.2, and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried, and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle, and genotoxicity were measured after exposure of the cells to particulate samples. Particles were characterized for their chemical composition, including ions, elements, and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. Chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic, and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity in the PM

  18. Comprehensive work plan and health and safety plan for the 7500 Area Contamination Site sampling at Oak Ridge National Laboratory, Oak Ridge, Tennessee. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Burman, S.N.; Landguth, D.C.; Uziel, M.S.; Hatmaker, T.L.; Tiner, P.F.

    1992-05-01

    As part of the Environmental Restoration Program sponsored by the US Department of Energy's Office of Environmental Restoration and Waste Management, this plan has been developed for the environmental sampling efforts at the 7500 Area Contamination Site, Oak Ridge National Laboratory (ORNL), Oak Ridge, Tennessee. This plan was developed by the Measurement Applications and Development Group (MAD) of the Health and Safety Research Division of ORNL and will be implemented by ORNL/MAD. Major components of the plan include (1) a quality assurance project plan that describes the scope and objectives of ORNL/MAD activities at the 7500 Area Contamination Site, assigns responsibilities, and provides emergency information for contingencies that may arise during field operations; (2) sampling and analysis sections; (3) a site-specific health and safety section that describes general site hazards, hazards associated with specific tasks, personnel protection requirements, and mandatory safety procedures; (4) procedures and requirements for equipment decontamination and responsibilities for generated wastes, waste management, and contamination control; and (5) a discussion of form completion and reporting required to document activities at the 7500 Area Contamination Site.

  19. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in the cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or, equivalently, in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC) and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero, there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
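
    The correlation structure described above feeds a variance-inflation (design-effect) calculation. One commonly used form for the two-period cross-sectional CRXO design, relative to an individually randomised trial, is 1 + (m − 1)·WPC − m·BPC; this expression comes from the wider CRXO sample-size literature and is not quoted from the abstract, and the parameter values below are illustrative:

```python
import math

def crxo_design_effect(m, wpc, bpc):
    """Design effect for a two-period cross-sectional CRXO trial.

    m:   number of individuals per cluster-period
    wpc: within-cluster within-period correlation
    bpc: within-cluster between-period correlation (0 <= bpc <= wpc)
    """
    return 1.0 + (m - 1) * wpc - m * bpc

def crxo_sample_size(n_individual, m, wpc, bpc):
    # Inflate (or deflate) the individually randomised sample size;
    # round before taking the ceiling to avoid floating-point noise.
    return math.ceil(round(n_individual * crxo_design_effect(m, wpc, bpc), 6))

# Illustrative: 50 patients per cluster-period, WPC = 0.05.
# bpc = 0 reproduces the parallel cluster-trial inflation 1 + (m-1)*wpc;
# bpc approaching wpc shows the gain from the crossover aspect.
print(crxo_design_effect(50, 0.05, 0.0), crxo_design_effect(50, 0.05, 0.05))
```

    Setting BPC to zero recovers the familiar parallel cluster-trial design effect, matching the abstract's observation that a CRXO then offers no advantage, while a BPC close to the WPC drives the design effect below 1.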

  20. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
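
    PED itself is not available in standard libraries, but the general pattern the abstract describes, ranking genes by an importance score and then thresholding against a simulation (permutation) null instead of cross-validating, can be sketched with a simple mean-difference score standing in for the PED-based ranking. Everything below, including the synthetic data and all names, is illustrative:

```python
import random

def mean_diff_score(values, labels):
    # Importance score for one gene: |mean(group 1) - mean(group 0)|.
    g0 = [v for v, l in zip(values, labels) if l == 0]
    g1 = [v for v, l in zip(values, labels) if l == 1]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

def select_genes(data, labels, n_perm=500, quantile=0.95, rng=None):
    """Keep genes whose score exceeds a quantile of the permutation-null maxima."""
    rng = rng or random.Random(0)
    observed = [mean_diff_score(gene, labels) for gene in data]
    null_max = []
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)  # break the gene-label association
        null_max.append(max(mean_diff_score(gene, perm) for gene in data))
    null_max.sort()
    threshold = null_max[int(quantile * n_perm)]  # empirical null quantile
    return [i for i, s in enumerate(observed) if s > threshold]

# Synthetic example: 20 genes x 20 samples; genes 0 and 1 truly shifted.
rng = random.Random(42)
labels = [0] * 10 + [1] * 10
data = [[rng.gauss(5.0 if (g < 2 and l == 1) else 0.0, 1.0) for l in labels]
        for g in range(20)]
selected = select_genes(data, labels, rng=random.Random(1))
print(selected)
```

    Using the maximum score across genes for the null controls the family-wise error rate rather than the FDR, so this is only an analogy to the paper's additive list-building procedure, not a reimplementation of it.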