WorldWideScience

Sample records for calculated sample size

  1. Sample Size Calculations

    OpenAIRE

    Noordzij, Marlies; Dekker, Friedo W.; Zoccali, Carmine; Jager, Kitty J.

    2011-01-01

    The sample size is the number of patients or other experimental units that need to be included in a study to answer the research question. Pre-study calculation of the sample size is important; if a sample size is too small, one will not be able to detect an effect, while a sample that is too large may be a waste of time and money. Methods to calculate the sample size are explained in statistical textbooks, but because there are many different formulas available, it can be difficult for inves...

  2. Sample size calculation in medical studies

    OpenAIRE

    Pourhoseingholi, Mohamad Amin; Vahedi, Mohsen; Rahimzadeh, Mitra

    2013-01-01

    Optimum sample size is an essential component of any research. The main purpose of the sample size calculation is to determine the number of samples needed to detect significant changes in clinical parameters, treatment effects or associations after data gathering. It is not uncommon for studies to be underpowered and thereby fail to detect the existing treatment effects due to inadequate sample size. In this paper, we explain briefly the basic principles of sample size calculations in medica...

  3. How to Calculate Sample Size and Why

    OpenAIRE

    Kim, Jeehyoung; Seo, Bong Soo

    2013-01-01

    Why: Calculating the sample size is essential to reduce the cost of a study and to prove the hypothesis effectively. How: Referring to pilot studies and previous research studies, we can choose a proper hypothesis and simplify the studies by using a website or Microsoft Excel sheet that contains formulas for calculating sample size in the beginning stage of the study. More: There are numerous formulas for calculating the sample size for complicated statistics and studies, but most studies can us...

  4. Statistics review 4: Sample size calculations

    OpenAIRE

    Whitley, Elise; Ball, Jonathan

    2002-01-01

    The present review introduces the notion of statistical power and the hazard of under-powered studies. The problem of how to calculate an ideal sample size is also discussed within the context of factors that affect power, and specific methods for the calculation of sample size are presented for two common scenarios, along with extensions to the simplest case.
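
    The two common scenarios such reviews typically cover are comparisons of two means and of two proportions. The sketch below is a minimal illustration using the conventional normal-approximation formulas with arbitrary example inputs, not code taken from the review itself:

```python
# Minimal sketch (not from the review itself): per-group sample sizes for
# the two common scenarios, using the usual normal-approximation formulas.
from math import ceil
from scipy.stats import norm

def n_two_means(delta, sd, alpha=0.05, power=0.80):
    """Per-group n to detect a difference `delta` between two means with
    common standard deviation `sd`, two-sided test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n to detect a difference between proportions p1 and p2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_two_means(delta=5, sd=10))          # 63 per group for these inputs
print(n_two_proportions(p1=0.30, p2=0.20))  # 291 per group for these inputs
```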

  5. Algorithms to calculate sample sizes for inspection sampling plans

    International Nuclear Information System (INIS)

    The problem is to determine inspection sample sizes for a given stratum. The sample sizes are based on applying the verification data in an attributes mode such that detection consists of identifying one or more defects in the sample. The sample sizes are such that the probability of detection is no less than the design value, 1-β, for all of the values of the defect size when, in fact, it is possible to achieve this detection probability without an unreasonable number of verification samples. A computing algorithm is developed to address the problem. Up to three measurement methods, or measuring instruments, are accommodated by the algorithm. The algorithm is optimal in the sense that an initial set of sample sizes is found to ensure a detection probability of 1-β at those defect sizes that result in the smallest numbers of samples for the more precise measurement methods. The detection probability is then calculated for a range of defect sizes covering the entire range of possibilities, and an iterative procedure is applied until the detection probability is no less than 1-β (if possible) at its maximum value. The algorithm, while not difficult in concept, realistically requires a personal computer (PC) to implement. For those instances when a PC may not be available, approximation formulas are developed which permit sample size calculations using only a pocket calculator. (author). Refs and tabs
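
    The attributes logic described above (detection means finding at least one defect in the sample) can be illustrated for a single stratum and a single measurement method. The sketch below is a simplification under that assumption, not the authors' multi-instrument algorithm or their approximation formulas: it simply searches for the smallest sample size whose non-detection probability under a hypergeometric model is at most β.

```python
# Illustrative single-method sketch (not the authors' algorithm): find the
# smallest sample size n for which the probability of seeing no defects in a
# sample drawn without replacement from a stratum of N items containing D
# defects is at most beta, i.e. the detection probability is at least 1-beta.
from scipy.stats import hypergeom

def attributes_sample_size(N, D, beta=0.05):
    for n in range(1, N + 1):
        p_miss = hypergeom(N, D, n).pmf(0)  # P(zero defects in the sample)
        if p_miss <= beta:
            return n
    return N  # full verification needed if no smaller n suffices

print(attributes_sample_size(N=500, D=10, beta=0.05))  # smallest n giving >= 95% detection
```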

  6. How to Calculate Sample Size in Randomized Controlled Trial?

    OpenAIRE

    ZHONG, Baoliang

    2009-01-01

    When designing clinical trials, efficiency, ethics, cost effectiveness, research duration and sample size calculation are the key things to keep in mind. This review highlights the statistical issues involved in estimating the sample size requirement. It elaborates the theory, methods and steps for sample size calculation in randomized controlled trials. It also emphasizes that researchers should consider the study design first and then choose an appropriate sample size calculation method.

  7. Reporting of sample size calculation in randomised controlled trials: review

    OpenAIRE

    Charles, Pierre; Giraudeau, Bruno; Dechartres, Agnes; Baron, Gabriel; Ravaud, Philippe

    2009-01-01

    Objectives: To assess quality of reporting of sample size calculation, ascertain accuracy of calculations, and determine the relevance of assumptions made when calculating sample size in randomised controlled trials. Design: Review. Data sources: We searched MEDLINE for all primary reports of two arm parallel group randomised controlled trials of superiority with a single primary outcome published in six high impact factor general medical journals between 1 January 2005 and 31 December 2006. All...

  8. How to calculate sample size in animal studies?

    OpenAIRE

    Jaykaran Charan; N D Kantharia

    2013-01-01

    Calculation of sample size is one of the important components of the design of any research, including animal studies. If a researcher selects too few animals, it may lead to missing a significant difference even if it exists in the population, and if too many animals are selected, it may lead to unnecessary wastage of resources and may raise ethical issues. In this article, on the basis of a review of the literature, we suggest a few methods of sample size calculation for animal s...

  9. Simple nomograms to calculate sample size in diagnostic studies

    OpenAIRE

    Carley, S; Dosman, S; Jones, S; Harrison, M

    2005-01-01

    Objectives: To produce an easily understood and accessible tool for use by researchers in diagnostic studies. Diagnostic studies should have sample size calculations performed, but in practice, they are performed infrequently. This may be due to a reluctance on the part of researchers to use mathematical formulae.

  10. GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices

    OpenAIRE

    Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.

    2014-01-01

    Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical soft...

  11. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  12. Clinical audit for occupational therapy intervention for children with autism spectrum disorder: sampling steps and sample size calculation

    OpenAIRE

    Weeks, Scott; Atlas, Alvin

    2015-01-01

    A priori sample size calculations are used to determine the adequate sample size to estimate the prevalence of the target population with good precision. However, published audits rarely report a priori calculations for their sample size. This article discusses a process in health services delivery mapping to generate a comprehensive sampling frame, which was used to calculate an a priori sample size for a targeted clinical record audit. We describe how we approached methodological and defini...

  13. Sample Size Calculation for Time-Averaged Differences in the Presence of Missing Data

    OpenAIRE

    Zhang, Song; Ahn, Chul

    2012-01-01

    Sample size calculations based on two-sample comparisons of slopes in repeated measurements have been reported by many investigators. In contrast, the literature has paid relatively little attention to the sample size calculations for time-averaged differences in the presence of missing data in repeated measurements studies. Diggle et al. (2002) provided a sample size formula comparing time-averaged differences for continuous outcomes in repeated measurement studies assuming no missing data a...

  14. patients. Calculated sample size (target population): 1000 patients

    DEFF Research Database (Denmark)

    Jensen, Jens-Ulrik; Lundgren, Bettina; Hein, Lars;

    2008-01-01

    and signs may present atypically. The established biological markers of inflammation (leucocytes, C-reactive protein) may often be influenced by other parameters than infection, and may be unacceptably slowly released after progression of an infection. At the same time, lack of a relevant...... hypertriglyceridaemia, 2) Likely that safety is compromised by blood sampling, 3) Pregnant or breast feeding.Computerized Randomisation: Two arms (1:1), n = 500 per arm: Arm 1: standard of care. Arm 2: standard of care and Procalcitonin guided diagnostics and treatment of infection.Primary Trial Objective: To address......-guided strategy compared to the best standard of care, is conducted in an Intensive care setting. Results will, with a high statistical power answer the question: Can the survival of critically ill patients be improved by actively using biomarker procalcitonin in the treatment of infections? 700 critically ill...

  15. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Calculating Sample Size for NYTD Follow-Up... REQUIREMENTS APPLICABLE TO TITLE IV-E Pt. 1356, App. C Appendix C to Part 1356—Calculating Sample Size for NYTD... applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...

  16. Sample Size Calculation for Clustered Binary Data with Sign Tests Using Different Weighting Schemes

    OpenAIRE

    Ahn, Chul; Hu, Fan; Schucany, William R.

    2011-01-01

    We propose a sample size calculation approach for testing a proportion using the weighted sign test when binary observations are dependent within a cluster. Sample size formulas are derived with nonparametric methods using three weighting schemes: equal weights to observations, equal weights to clusters, and optimal weights that minimize the variance of the estimator. Sample size formulas are derived incorporating intracluster correlation and the variability in cluster sizes. Simulation studi...

  17. Clinical audit for occupational therapy intervention for children with autism spectrum disorder: sampling steps and sample size calculation.

    Science.gov (United States)

    Weeks, Scott; Atlas, Alvin

    2015-01-01

    A priori sample size calculations are used to determine the adequate sample size to estimate the prevalence of the target population with good precision. However, published audits rarely report a priori calculations for their sample size. This article discusses a process in health services delivery mapping to generate a comprehensive sampling frame, which was used to calculate an a priori sample size for a targeted clinical record audit. We describe how we approached methodological and definitional issues in the following steps: (1) target population definition, (2) sampling frame construction, and (3) a priori sample size calculation. We recommend this process for clinicians, researchers, or policy makers when detailed information on a reference population is unavailable. PMID:26122044

  18. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    Science.gov (United States)

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  19. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. PMID:25019136

  20. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

    Science.gov (United States)

    Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

    2013-08-01

    Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size. PMID:23934314
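
    As a rough illustration of the stated proportionalities (a sketch under the assumption of a continuous outcome and a standardized exposure, not the authors' exact formula), the required sample size can be approximated as follows:

```python
# Rough sketch based on the proportionalities stated in the abstract (not the
# authors' exact formula): approximate sample size for a Mendelian
# randomization study with one instrument, a continuous outcome and a
# standardized exposure.
from math import ceil
from scipy.stats import norm

def mr_sample_size(beta_xy, rho_gx, var_residual, alpha=0.05, power=0.80):
    """beta_xy: causal effect of the exposure on the outcome (per SD of exposure);
    rho_gx: correlation between the genetic instrument and the exposure;
    var_residual: outcome variance remaining after removing the exposure effect."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z ** 2 * var_residual / (beta_xy ** 2 * rho_gx ** 2))

# A weak instrument (rho = 0.1) and a modest effect imply a very large study.
print(mr_sample_size(beta_xy=0.2, rho_gx=0.1, var_residual=1.0))
```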

  1. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and people pay more attention to the post-marketing research of Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have a desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there is no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, sample size calculation methods for evaluating the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in the areas of Chinese medicine research. PMID:22292397

  2. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  3. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  4. Finding Alternatives to the Dogma of Power Based Sample Size Calculation: Is a Fixed Sample Size Prospective Meta-Experiment a Potential Alternative?

    Science.gov (United States)

    Tavernier, Elsa; Trinquart, Ludovic; Giraudeau, Bruno

    2016-01-01

    Sample sizes for randomized controlled trials are typically based on power calculations. They require us to specify values for parameters such as the treatment effect, which is often difficult because we lack sufficient prior information. The objective of this paper is to provide an alternative design which circumvents the need for sample size calculation. In a simulation study, we compared a meta-experiment approach to the classical approach to assess treatment efficacy. The meta-experiment approach involves use of meta-analyzed results from 3 randomized trials of fixed sample size, 100 subjects. The classical approach involves a single randomized trial with the sample size calculated on the basis of an a priori-formulated hypothesis. For the sample size calculation in the classical approach, we used observed articles to characterize errors made on the formulated hypothesis. A prospective meta-analysis of data from trials of fixed sample size provided the same precision, power and type I error rate, on average, as the classical approach. The meta-experiment approach may provide an alternative design which does not require a sample size calculation and addresses the essential need for study replication; results may have greater external validity. PMID:27362939

  5. Sample size calculation for microarray experiments with blocked one-way design

    Directory of Open Access Journals (Sweden)

    Jung Sin-Ho

    2009-05-01

    Full Text Available Abstract Background One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess the treatment effects in microarray experiments. Results In this paper, we consider discovery of the genes that are differentially expressed among K (> 2) treatments when each set of K arrays consists of a block. In this case, the array data among K treatments tend to be correlated because of the block effect. We propose to use the blocked one-way ANOVA F-statistic to test if each gene is differentially expressed among K treatments. The marginal p-values are calculated using a permutation method accounting for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design. With the FDR level and effect sizes of genes specified, our formula provides a sample size for a given number of true discoveries. Conclusion The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.

  6. Sample size and power calculations for correlations between bivariate longitudinal data

    OpenAIRE

    Comulada, W. Scott; Weiss, Robert E.

    2010-01-01

    The analysis of a baseline predictor with a longitudinally measured outcome is well established and sample size calculations are reasonably well understood. Analysis of bivariate longitudinally measured outcomes is gaining in popularity and methods to address design issues are required. The focus in a random effects model for bivariate longitudinal outcomes is on the correlations that arise between the random effects and between the bivariate residuals. In the bivariate random effects model, ...

  7. Sample size calculation for treatment effects in randomized trials with fixed cluster sizes and heterogeneous intraclass correlations and variances.

    Science.gov (United States)

    Candel, Math J J M; van Breukelen, Gerard J P

    2015-10-01

    When comparing two different kinds of group therapy or two individual treatments where patients within each arm are nested within care providers, clustering of observations may occur in both arms. The arms may differ in terms of (a) the intraclass correlation, (b) the outcome variance, (c) the cluster size, and (d) the number of clusters, and there may be some ideal group size or ideal caseload in case of care providers, fixing the cluster size. For this case, optimal cluster numbers are derived for a linear mixed model analysis of the treatment effect under cost constraints as well as under power constraints. To account for uncertain prior knowledge on relevant model parameters, maximin sample sizes are also given. Formulas for sample size calculation are derived, based on the standard normal as the asymptotic distribution of the test statistic. For small sample sizes, an extensive numerical evaluation shows that in a two-tailed test employing restricted maximum likelihood estimation, a safe correction for both 80% and 90% power is to add three clusters to each arm for a 5% type I error rate and four clusters to each arm for a 1% type I error rate. PMID:25519890

  8. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  9. Sample size calculations for micro-randomized trials in mHealth.

    Science.gov (United States)

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A

    2016-05-30

    The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26707831

  10. Confidence intervals and sample size calculations for studies of film-reading performance

    International Nuclear Information System (INIS)

    The relaxation of restrictions on the type of professions that can report films has resulted in radiographers and other healthcare professionals becoming increasingly involved in image interpretation in areas such as mammography, ultrasound and plain-film radiography. Little attention, however, has been given to sample size determinations concerning film-reading performance characteristics such as sensitivity, specificity and accuracy. Illustrated with hypothetical examples, this paper begins by considering standard errors and confidence intervals for performance characteristics and then discusses methods for determining sample size for studies of film-reading performance. Used appropriately, these approaches should result in studies that produce estimates of film-reading performance with adequate precision and enable investigators to optimize the sample size in their studies for the question they seek to answer. Scally, A. J. and Brealey S. (2003). Clinical Radiology 58, 238-246
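
    One widely used precision-based approach of the kind discussed here sizes the study so that the confidence interval around sensitivity (or specificity) has a chosen half-width, then inflates the total for disease prevalence. The sketch below is a generic illustration of that idea with made-up inputs, not the paper's own worked examples:

```python
# Generic precision-based sketch (not the paper's own examples): number of
# films needed so that the 95% CI for sensitivity has half-width d, inflated
# because only a fraction `prevalence` of the films show the condition.
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sens, d, prevalence, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / d ** 2  # abnormal cases required
    return ceil(n_diseased / prevalence)              # total films to be read

print(n_for_sensitivity(sens=0.90, d=0.05, prevalence=0.20))  # 692 for these inputs
```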

  11. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.;

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source Protocols and journal publications of published randomised parallel group trials initially approved in 1...

  12. Phylogenetic effective sample size

    OpenAIRE

    Bartoszek, Krzysztof

    2015-01-01

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes - the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find...

  13. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    OpenAIRE

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-01-01

    Background: Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rw...

  14. Empirical power and sample size calculations for cluster-randomized and cluster-randomized crossover studies.

    Directory of Open Access Journals (Sweden)

    Nicholas G Reich

    Full Text Available In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting and implementation have not been established for this emergent design. We address one particular aspect of cluster-randomized and cluster-randomized crossover trial design: estimating statistical power. We present a general framework for estimating power via simulation in cluster-randomized studies with or without one or more crossover periods. We have implemented this framework in the clusterPower software package for R, freely available online from the Comprehensive R Archive Network. Our simulation framework is easy to implement and users may customize the methods used for data analysis. We give four examples of using the software in practice. The clusterPower package could play an important role in the design of future cluster-randomized and cluster-randomized crossover studies. This work is the first to establish a universal method for calculating power for both cluster-randomized and cluster-randomized crossover clinical trials. More research is needed to develop standardized and recommended methodology for cluster-randomized crossover studies.

  15. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor Series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
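
    The final step of the asymptotic power calculation described above, evaluating the power of the Pearson chi-square test from its non-centrality parameter, can be sketched generically. The function below assumes the non-centrality parameter is already available (in the paper it is derived from the sample sizes, genotype frequencies, prevalence and misclassification probabilities, which is not reproduced here):

```python
# Generic final step of the asymptotic power calculation: power of a Pearson
# chi-square test given its degrees of freedom and a non-centrality parameter
# (the ncp is assumed to be supplied; in the paper it depends on sample sizes,
# genotype frequencies, prevalence and misclassification probabilities).
from scipy.stats import chi2, ncx2

def chi2_power(ncp, df=2, alpha=0.05):
    critical = chi2.ppf(1 - alpha, df)      # rejection threshold under H0
    return 1 - ncx2.cdf(critical, df, ncp)  # P(reject) under the alternative

print(chi2_power(ncp=10.0, df=2))  # power for an illustrative non-centrality value
```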

  16. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  17. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    Science.gov (United States)

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…

  18. Determination of Sample Size

    OpenAIRE

    Naing, Nyi Nyi

    2003-01-01

    It is particularly important to determine the basic minimum required sample size 'n' needed to recognize a particular measurement of a particular population. This article highlights the determination of an appropriate sample size to estimate population parameters.

  19. Design, statistical analysis and sample size calculation of a phase IIb/III study of linagliptin versus voglibose and placebo

    Directory of Open Access Journals (Sweden)

    Hayashi Naoyuki

    2009-09-01

    Full Text Available Abstract Background Many patients with diabetes mellitus (DM) require a combination of antidiabetic drugs with complementary mechanisms of action to lower their hemoglobin A1c levels to achieve therapeutic targets and reduce the risk of cardiovascular complications. Linagliptin is a novel member of the dipeptidyl peptidase-4 (DPP-4) inhibitor class of antidiabetic drugs. DPP-4 inhibitors increase incretin (glucagon-like peptide-1 and gastric inhibitory polypeptide) levels, inhibit glucagon release and, more importantly, increase insulin secretion and inhibit gastric emptying. Currently, phase III clinical studies with linagliptin are underway to evaluate its clinical efficacy and safety. Linagliptin is expected to be one of the most appropriate therapies for Japanese patients with DM, as deficient insulin secretion is a greater concern than insulin resistance in this population. The number of patients with DM in Japan is increasing and this trend is predicted to continue. Several antidiabetic drugs are currently marketed in Japan; however there is no information describing the effective dose of linagliptin for Japanese patients with DM. Methods This prospective, randomized, double-blind study will compare linagliptin with placebo over a 12-week period. The study has also been designed to evaluate the safety and efficacy of linagliptin by comparing it with another antidiabetic, voglibose, over a 26-week treatment period. Four treatment groups have been established for these comparisons. A phase IIb/III combined study design has been utilized for this purpose and the approach for calculating sample size is described. Discussion This is the first phase IIb/III study to examine the long-term safety and efficacy of linagliptin in diabetes patients in the Japanese population. Trial registration Clinicaltrials.gov (NCT00654381).

  20. Calculating Optimal Inventory Size

    Directory of Open Access Journals (Sweden)

    Ruby Perez

    2010-01-01

    Full Text Available The purpose of the project is to find the optimal value for the Economic Order Quantity Model and then use a lean manufacturing Kanban equation to find a numeric value that will minimize the total cost and the inventory size.

  1. Sample size: from formulae to concepts - II

    Directory of Open Access Journals (Sweden)

    Rakesh R. Pathak

    2013-02-01

    Full Text Available Sample size formulae need some input data, or, to put it otherwise, we need some parameters to calculate sample size. This second part of the formula explanation gives an idea of Z, population size, precision of error, standard deviation, contingency etc., which influence sample size. [Int J Basic Clin Pharmacol 2013; 2(1): 94-95]

  2. Sample size determination for clinical research

    OpenAIRE

    Wang, Duolao; Bakhai, Ameet; Del Buono, Angelo; Maffulli, Nicola

    2013-01-01

    Calculating the sample size is a most important determinant of statistical power of a study. A study with inadequate power, unless being conducted as a safety and feasibility study, is unethical. However, sample size calculation is not an exact science, and therefore it is important to make realistic and well researched assumptions before choosing an appropriate sample size accounting for dropouts and also including a plan for interim analyses during the study to amend the final sample size.
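
    The dropout allowance mentioned above is commonly handled by inflating the calculated sample size so that the expected number of completers still meets the requirement; the following is a minimal sketch of that common convention, not a formula quoted from the article:

```python
# Minimal sketch of the common dropout adjustment (a convention, not a formula
# quoted from this article): inflate the calculated sample size so that the
# expected number of completers still meets the original requirement.
from math import ceil

def inflate_for_dropout(n_required, dropout_rate):
    return ceil(n_required / (1 - dropout_rate))

print(inflate_for_dropout(n_required=200, dropout_rate=0.15))  # 236 to be enrolled
```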

  3. Sample size estimation in epidemiologic studies

    OpenAIRE

    Hajian-Tilaki, Karimollah

    2011-01-01

    This review basically provided a conceptual framework for sample size calculation in epidemiologic studies with various designs and outcomes. The formula requirement of sample size was drawn based on statistical principles for both descriptive and comparative studies. The required sample size was estimated and presented graphically with different effect sizes and power of statistical test at 95% confidence level. This would help the clinicians to decide and ascertain a suitable sample size in...

  4. Sample size matters: A guide for urologists

    OpenAIRE

    Hinh, Peter; Canfield, Steven E

    2011-01-01

    Understanding sample size calculation is vitally important for planning and conducting clinical research, and critically appraising literature. The purpose of this paper is to present basic statistical concepts and tenets of study design pertaining to calculation of requisite sample size. This paper also discusses the significance of sample size calculation in the context of ethical considerations. Scenarios applicable to urology are utilized in presenting concepts.

  5. Basic Statistical Concepts for Sample Size Estimation

    Directory of Open Access Journals (Sweden)

    Vithal K Dhulkhed

    2008-01-01

    Full Text Available For grant proposals the investigator has to include an estimation of sample size. The size of the sample should be adequate so that there are sufficient data to reliably answer the research question being addressed by the study. At the very planning stage of the study, the investigator has to involve the statistician. To have a meaningful dialogue with the statistician, every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations to help the reader assimilate the fundamentals.

  6. Calculating body frame size (image)

    Science.gov (United States)

    Body frame size is determined by a person's wrist circumference in relation to his height. For example, a man ... would fall into the small-boned category. Determining frame size: To determine the body frame size, measure ...

  7. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, the authors have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed, and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments

  8. Sample Size Dependent Species Models

    OpenAIRE

    Zhou, Mingyuan; Walker, Stephen G.

    2014-01-01

    Motivated by the fundamental problem of measuring species diversity, this paper introduces the concept of a cluster structure to define an exchangeable cluster probability function that governs the joint distribution of a random count and its exchangeable random partitions. A cluster structure, naturally arising from a completely random measure mixed Poisson process, allows the probability distribution of the random partitions of a subset of a sample to be dependent on the sample size, a dist...

  9. Patient acceptability of larval therapy for leg ulcer treatment: a randomised survey to inform the sample size calculation of a randomised trial

    Directory of Open Access Journals (Sweden)

    Iglesias CP

    2006-09-01

    evidence of widespread resistance to the utilisation of larval therapy from patients regardless of the method of larval therapy containment. These methods have the potential to inform sample size calculations where there are concerns of patient acceptability.

  10. A comparative analysis of calculating sample size for carrying out household surveys when constructing a passenger origin-destiny matrix between a sample design and applying a population percentage

    Directory of Open Access Journals (Sweden)

    Carlos Fabián Flórez Valero

    2010-04-01

    Full Text Available Using a percentage of a city's households is a common practice in transport engineering for learning the inhabitants' journey pattern. The procedure theoretically consists of calculating the sample based on the statistical parameters of the population variable which one wishes to measure. This requires carrying out a pilot survey, which often cannot be done in countries having few resources because of the costs involved in knowing the value of such population parameters; resources are sometimes exclusively destined to an estimated sample according to a pre-established percentage. Percentages between 3% and 6% are usually used in Colombian cities, depending on population size. The city of Manizales (located 300 km to the west of Colombia's capital) carried out two household surveys in less than four years; when the second survey was carried out the values of the estimator parameters were thus already known. The Manizales mayor's office made an agreement with the Universidad Nacional de Colombia for drawing up the new origin-destiny matrix, where it was possible to calculate the sample based on the pertinent statistical variables. The article makes a comparative analysis of both methodologies, concluding that when statistically estimating the sample it is possible to greatly reduce the number of surveys to be carried out while obtaining practically equal results.
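
    A hedged sketch of the statistically based alternative the article compares against the fixed-percentage rule: size the survey to estimate a proportion with a chosen absolute error at 95% confidence, applying a finite-population correction. The inputs below are illustrative placeholders, not the Manizales figures:

```python
# Illustrative sketch (placeholder inputs, not the Manizales figures): sample
# size for estimating a proportion p in a city of N households with absolute
# error d at 95% confidence, compared with a fixed 4% rule.
from math import ceil
from scipy.stats import norm

def n_finite_population(N, p=0.5, d=0.05, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n0 = z ** 2 * p * (1 - p) / d ** 2    # infinite-population sample size
    return ceil(n0 / (1 + (n0 - 1) / N))  # finite-population correction

N = 100_000                    # hypothetical number of households
print(n_finite_population(N))  # 383 households for these inputs
print(ceil(0.04 * N))          # fixed-percentage rule: 4000 households
```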

  11. Sample sizes for confidence limits for reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L.

    2010-02-01

    We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) under-estimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
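
    The hypergeometric reasoning described above can be illustrated with a short sketch (a generic acceptance-sampling style calculation, not the SNL techniques or the Mathematica program): after observing zero defects in a sample drawn without replacement, one can state the confidence that fewer than a given number of units in the population are defective.

```python
# Illustrative hypergeometric sketch (not the SNL techniques or the Mathematica
# program): having observed zero defects in a sample of n units drawn without
# replacement from a population of N units, the confidence that fewer than D
# units are defective is 1 - P(zero found | exactly D defective).
from scipy.stats import hypergeom

def confidence_fewer_than(D, n, N):
    p_miss = hypergeom(N, D, n).pmf(0)  # chance of seeing no defects if D exist
    return 1 - p_miss

# e.g. a population of 100 units, 30 tested, none failed
print(confidence_fewer_than(D=5, n=30, N=100))  # about 0.84 for these inputs
```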

  12. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    Full Text Available The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (e.g., number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed and confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for the semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (e.g., number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
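
    The sample size procedure described, choosing n so that the semiamplitude of the confidence interval equals a chosen percentage of the estimated mean, can be sketched as below. The trait mean and standard deviation are placeholders rather than the pigeonpea data, and the normal quantile is used for simplicity:

```python
# Sketch of the described procedure with placeholder numbers (not the trial
# data): n such that the semiamplitude of the 95% CI equals a chosen fraction
# E of the mean, i.e. n = (z * s / (E * mean))^2.
from math import ceil
from scipy.stats import norm

def n_for_relative_error(mean, sd, rel_error, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    return ceil((z * sd / (rel_error * mean)) ** 2)

# hypothetical trait: mean 50 cm, sd 15 cm, 6% allowed estimation error
print(n_for_relative_error(mean=50, sd=15, rel_error=0.06))  # 97 plants
```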

  13. Monte Carlo small-sample perturbation calculations

    International Nuclear Information System (INIS)

    Two different Monte Carlo methods have been developed for benchmark computations of small-sample worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulette and splitting. One finds, however, that two variance reduction methods are required to make this sort of perturbation calculation feasible. First, neutrons that have passed through the sample must be exempted from roulette. Second, neutrons must be forced to undergo scattering collisions in the sample. Even when such methods are invoked, however, it is still necessary to exaggerate the volume fraction of the sample by drastically reducing the size of the core. The benchmark calculations are then used to test more approximate methods, and not directly to analyze experiments. In the second method the flux at the surface of the sample is assumed to be known. Neutrons entering the sample are drawn from this known flux and tracked by Monte Carlo. The effect of the sample on the fission rate is then inferred from the histories of these neutrons. The characteristics of both of these methods are explored empirically

  14. Sample-size requirements for evaluating population size structure

    Science.gov (United States)

    Vokoun, J.C.; Rabeni, C.F.; Stanovick, J.S.

    2001-01-01

    A method with an accompanying computer program is described to estimate the number of individuals needed to construct a sample length-frequency with a given accuracy and precision. First, a reference length-frequency assumed to be accurate for a particular sampling gear and collection strategy was constructed. Bootstrap procedures created length-frequencies with increasing sample size that were randomly chosen from the reference data and then were compared with the reference length-frequency by calculating the mean squared difference. Outputs from two species collected with different gears and an artificial even length-frequency are used to describe the characteristics of the method. The relations between the number of individuals used to construct a length-frequency and the similarity to the reference length-frequency followed a negative exponential distribution and showed the importance of using 300-400 individuals whenever possible.
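
    The resampling procedure described can be sketched as follows; the reference length data here are simulated purely for illustration, not taken from the study:

```python
# Sketch of the described bootstrap procedure using simulated data: resample a
# reference set of fish lengths at increasing sample sizes and track the mean
# squared difference between each bootstrap length-frequency and the reference.
import numpy as np

rng = np.random.default_rng(1)
reference_lengths = rng.gamma(shape=9.0, scale=25.0, size=2000)  # fake lengths (mm)
bins = np.arange(0, 625, 25)                                     # 25 mm length bins
ref_freq, _ = np.histogram(reference_lengths, bins=bins, density=True)

for n in (50, 100, 200, 300, 400):
    msd = np.mean([
        np.mean((np.histogram(rng.choice(reference_lengths, size=n, replace=True),
                              bins=bins, density=True)[0] - ref_freq) ** 2)
        for _ in range(500)                                      # bootstrap replicates
    ])
    print(n, msd)  # the difference shrinks with n, levelling off as n grows
```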

  15. At what sample size do correlations stabilize?

    OpenAIRE

    Schönbrodt, Felix D.; Perugini, Marco

    2013-01-01

    Sample correlations converge to the population value with increasing sample size, but the estimates are often inaccurate in small samples. In this report we use Monte-Carlo simulations to determine the critical sample size from which on the magnitude of a correlation can be expected to be stable. The necessary sample size to achieve stable estimates for correlations depends on the effect size, the width of the corridor of stability (i.e., a corridor around the true value where deviations are ...
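
    A minimal Monte-Carlo sketch in the spirit of the report (not the authors' code): simulate bivariate normal data with a known correlation and record how often the sample correlation falls outside a corridor of ±0.1 around the true value at different sample sizes.

```python
# Minimal Monte-Carlo sketch (not the authors' code): how often does the sample
# correlation stray more than 0.1 from a true correlation of 0.3 at various n?
import numpy as np

rng = np.random.default_rng(0)
rho, half_width, n_sim = 0.3, 0.1, 2000
cov = np.array([[1.0, rho], [rho, 1.0]])

for n in (20, 50, 100, 250, 500):
    outside = 0
    for _ in range(n_sim):
        x = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
        r = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
        outside += abs(r - rho) > half_width
    print(n, outside / n_sim)  # share of estimates outside the +/-0.1 corridor
```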

  16. Sample size and power analysis in medical research

    Directory of Open Access Journals (Sweden)

    Zodpey Sanjay

    2004-03-01

    Full Text Available Among the questions that a researcher should ask when planning a study is "How large a sample do I need?" If the sample size is too small, even a well conducted study may fail to answer its research question, may fail to detect important effects or associations, or may estimate those effects or associations too imprecisely. Similarly, if the sample size is too large, the study will be more difficult and costly, and may even lead to a loss in accuracy. Hence, optimum sample size is an essential component of any research. When the estimated sample size can not be included in a study, post-hoc power analysis should be carried out. Approaches for estimating sample size and performing power analysis depend primarily on the study design and the main outcome measure of the study. There are distinct approaches for calculating sample size for different study designs and different outcome measures. Additionally, there are also different procedures for calculating sample size for two approaches of drawing statistical inference from the study results, i.e. confidence interval approach and test of significance approach. This article describes some commonly used terms, which need to be specified for a formal sample size calculation. Examples for four procedures (use of formulae, readymade tables, nomograms, and computer software), which are conventionally used for calculating sample size, are also given.

  17. Sample Size Estimation in Clinical Trial

    OpenAIRE

    Tushar Vijay Sakpal

    2010-01-01

    Every clinical trial should be planned. This plan should include the objective of the trial, primary and secondary end-points, method of collecting data, sample to be included, sample size with scientific justification, method of handling data, statistical methods and assumptions. This plan is termed the clinical trial protocol. One of the key aspects of this protocol is sample size estimation. The aim of this article is to discuss how important sample size estimation is for a clinical trial, and a...

  18. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  19. Theoretical Mass Size Distribution of Wet Particles Calculated from Ambient Aerosol Sampled upon Dry Conditions during Summer and Winter Campaign 2008

    Czech Academy of Sciences Publication Activity Database

    Štefancová, Lucia; Schwarz, Jaroslav; Maenhaut, W.; Smolík, Jiří

    Praha: Česká aerosolová společnost, 2008, pp. 29-30. ISBN 978-80-86186-17-7. [9th Conference of the Czech Aerosol Society, Praha (CZ), 04.12.2008] R&D Projects: GA MŠk OC 106; GA MŠk ME 941 Institutional research plan: CEZ:AV0Z40720504 Keywords: mass size distribution * urban aerosol * cascade impactor Subject RIV: CF - Physical; Theoretical Chemistry http://cas.icpf.cas.cz/download/Sbornik_VKCAS_2008.pdf

  20. A mixture model approach to sample size estimation in two-sample comparative microarray experiments

    OpenAIRE

    Bones Atle M; Midelfart Herman; Jørstad Tommy S

    2008-01-01

    Abstract Background Choosing the appropriate sample size is an important step in the design of a microarray experiment, and recently methods have been proposed that estimate sample sizes for control of the False Discovery Rate (FDR). Many of these methods require knowledge of the distribution of effect sizes among the differentially expressed genes. If this distribution can be determined then accurate sample size requirements can be calculated. Results We present a mixture model approach to e...

  1. Angular Size-Redshift: Experiment and Calculation

    CERN Document Server

    Amirkhanyan, V R

    2015-01-01

    In this paper a further attempt is made to clarify the nature of the Euclidean behavior of the boundary in the angular size-redshift cosmological test. It is shown experimentally that this can be explained by selection determined by the anisotropic morphology and anisotropic radiation of extended radio sources. A catalogue of extended radio sources with minimal flux densities of about 0.01 Jy at 1.4 GHz was compiled for conducting the test. Without the assumption of their size evolution, agreement between the experiment and the calculation was obtained both in the Lambda CDM model (Omega_m = 0.27, Omega_v = 0.73) and the Friedman model (Omega = 0.1).

  2. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    OpenAIRE

    R. Eric Heidel

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of t...

  3. How to Show that Sample Size Matters

    Science.gov (United States)

    Kozak, Marcin

    2009-01-01

    This article suggests how to explain a problem of small sample size when considering correlation between two Normal variables. Two techniques are shown: one based on graphs and the other on simulation. (Contains 3 figures and 1 table.)

  4. Experimental determination of size distributions: analyzing proper sample sizes

    Science.gov (United States)

    Buffo, A.; Alopaeus, V.

    2016-04-01

    The measurement of various particle size distributions is a crucial aspect of many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer and of reaction rates, which depend on the interfacial area between the different phases, or to the assessment of yield stresses of polycrystalline metal/alloy samples. The experimental determination of such distributions often involves laborious sampling procedures, and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of a size distribution, according to specific requirements defined a priori. The methodology can be adopted regardless of the measurement technique used.

  5. Sample size for estimating average productive traits of pigeon pea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2016-04-01

    ABSTRACT: The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of pigeon pea, and to determine whether the required sample size varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, leaves, shoots, and the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. The required sample size varied between the productive traits and between crop years of the pigeon pea crop. To estimate the averages of the productive traits with a 20% maximum estimation error and a 95% confidence level, 70 plants are sufficient.
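
    The 20% maximum estimation error and 95% confidence level quoted above correspond to the standard t-based formula for estimating a mean with a given relative precision, n = (t·CV/r)², solved iteratively because t depends on n. The sketch below is a minimal illustration of that calculation; the coefficient of variation used is a made-up value, not one of the paper's estimates.

    ```python
    from math import ceil
    from scipy import stats

    def sample_size_for_mean(cv_percent, rel_error_percent, confidence=0.95, n_start=360):
        """Iteratively solve n = (t * CV / r)^2, with CV and r expressed in percent."""
        n = n_start  # start from the uniformity-trial size and iterate until stable
        for _ in range(100):
            t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
            n_new = ceil((t * cv_percent / rel_error_percent) ** 2)
            if n_new == n:
                break
            n = n_new
        return n

    # With a hypothetical CV of 70% for a fresh-mass trait, about 50 plants are needed
    print(sample_size_for_mean(cv_percent=70, rel_error_percent=20))
    ```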

  6. Finite sample size effects in transformation kinetics

    Science.gov (United States)

    Weinberg, M. C.

    1985-01-01

    The effect of finite sample size on the kinetic law of phase transformations is considered. The case where the second phase develops by a nucleation and growth mechanism is treated under the assumption of isothermal conditions and constant and uniform nucleation rate. It is demonstrated that for spherical particle growth, a thin sample transformation formula given previously is an approximate version of a more general transformation law. The thin sample approximation is shown to be reliable when a certain dimensionless thickness is small. The latter quantity, rather than the actual sample thickness, determines when the usual law of transformation kinetics valid for bulk (large dimension) samples must be modified.

  7. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size.

    Science.gov (United States)

    Heidel, R Eric

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power. PMID:27073717

  8. Sizing the Sample (without Laundering the Data).

    Science.gov (United States)

    Elliot, Brownlee

    1980-01-01

    This article presents a fairly quick, easy way to decide what size sample is needed for a survey, given a desired level of confidence and degree of accuracy. The method proposed will work on any multiple response instrument. A technical mathematical explanation is also included. (GK)

  9. Exploratory Factor Analysis with Small Sample Sizes

    Science.gov (United States)

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  10. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Abstract. Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control, we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. The results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine the annotation sample size for supervised machine learning.
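
    The approach described above can be illustrated with a short sketch: fit an inverse power law learning curve to a few observed (sample size, performance) points by weighted nonlinear least squares and extrapolate to larger annotation budgets. The functional form y = a − b·x^(−c), the weighting scheme, and the data points below are illustrative assumptions, not the authors' exact implementation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def inv_power_law(x, a, b, c):
        """Learning-curve model: performance approaches the plateau 'a' as x grows."""
        return a - b * np.power(x, -c)

    # Hypothetical learning-curve points: (annotated sample size, observed accuracy)
    x = np.array([50, 100, 200, 400, 800], dtype=float)
    y = np.array([0.71, 0.76, 0.80, 0.83, 0.85])

    # Give later (larger-sample) points more weight by assigning them smaller sigma
    sigma = 1.0 / np.sqrt(x)

    params, _ = curve_fit(inv_power_law, x, y, p0=[0.9, 1.0, 0.5], sigma=sigma)
    print("fitted parameters:", np.round(params, 3))

    # Predict classifier performance for larger annotation budgets
    for n in (1600, 3200):
        print(n, round(inv_power_law(n, *params), 3))
    ```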

  11. Statistical Analysis Techniques for Small Sample Sizes

    Science.gov (United States)

    Navard, S. E.

    1984-01-01

    The small-sample-size problem encountered when dealing with the analysis of space-flight data is examined. Because only a limited amount of data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests that can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.

  12. DNBR limit calculation by sampling statistical method

    International Nuclear Information System (INIS)

    The parametric uncertainties of DNBR and exit quality were calculated using a sampling statistical method based on the Wilks formula and the VIPRE-W code. The DNBR design limit and the exit quality limit were then obtained by combining these uncertainties with the uncertainties of the models and of the DNB correlation. A comparison of the two methods shows that this approach gains more DNBR margin than the RTDP methodology developed by Westinghouse. (authors)
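
    The abstract does not state which order of the Wilks formula or which coverage/confidence levels were used. As a minimal sketch, the first-order, one-sided criterion that such sampling statistical methods commonly rely on chooses the smallest number of code runs N with 1 − β^N ≥ γ for coverage β and confidence γ; the classic 95%/95% case gives N = 59.

    ```python
    from math import ceil, log

    def wilks_first_order_one_sided(coverage=0.95, confidence=0.95):
        """Smallest N of random code runs such that the sample maximum bounds the
        'coverage' quantile of the output with probability 'confidence'."""
        return ceil(log(1.0 - confidence) / log(coverage))

    print(wilks_first_order_one_sided())  # 59 for the 95%/95% criterion
    ```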

  13. Sample size re-estimation in a breast cancer trial

    Science.gov (United States)

    Hade, Erinn; Jarjoura, David; Wei, Lai

    2016-01-01

    Background: During the recruitment phase of a randomized breast cancer trial investigating the time to recurrence, we found evidence that the failure probabilities used at the design stage were too high. Since most of the methodological research involving sample size re-estimation has focused on normal or binary outcomes, we developed a method that preserves blinding to re-estimate sample size in our time-to-event trial. Purpose: A mistakenly high estimate of the failure rate at the design stage may reduce the power unacceptably for a clinically important hazard ratio. We describe an ongoing trial and an application of a sample size re-estimation method that combines current trial data with prior trial data, or assumes a parametric model, to re-estimate failure probabilities in a blinded fashion. Methods: Using our current blinded trial data and additional information from prior studies, we re-estimate the failure probabilities to be used in the sample size re-calculation. We employ bootstrap resampling to quantify uncertainty in the re-estimated sample sizes. Results: At the time of re-estimation, data from 278 patients were available, averaging 1.2 years of follow-up. Using either method, we estimated an increase of 0 in sample size for the hazard ratio proposed at the design stage. We show that our method of blinded sample size re-estimation preserves the Type I error rate. We show that when the initial guesses of the failure probabilities are correct, the median increase in sample size is zero. Limitations: Either some prior knowledge of an appropriate survival distribution shape or prior data is needed for re-estimation. Conclusions: In trials where the accrual period is lengthy, blinded sample size re-estimation near the end of the planned accrual period should be considered. In our examples, when assumptions about failure probabilities and hazard ratios are correct, the methods usually do not increase the sample size, or increase it by very little. PMID:20392786

  14. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of error) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can then be generalized to the target population.
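
    For the categorical-outcome case mentioned above, the usual sample size formula for estimating a proportion is n = z²·p(1 − p)/d², optionally reduced by a finite population correction. The sketch below is a generic illustration with made-up numbers, not a calculation from the article.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_proportion(p, margin, confidence=0.95, population=None):
        """Sample size to estimate a proportion p to within +/- margin."""
        z = norm.ppf(1 - (1 - confidence) / 2)
        n = (z ** 2) * p * (1 - p) / margin ** 2
        if population is not None:  # finite population correction
            n = n / (1 + (n - 1) / population)
        return ceil(n)

    # Expected prevalence 30%, +/-5% precision, 95% confidence
    print(n_for_proportion(0.30, 0.05))                   # about 323
    print(n_for_proportion(0.30, 0.05, population=2000))  # about 278 with the correction
    ```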

  15. Public Opinion Polls, Chicken Soup and Sample Size

    Science.gov (United States)

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
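
    The point can be seen directly from the margin of error of a sample proportion with a finite population correction: once the population is much larger than the sample, the population size has almost no effect on precision. A small numerical illustration, with made-up figures, follows.

    ```python
    from math import sqrt

    def margin_of_error(n, p=0.5, population=None, z=1.96):
        """Approximate 95% margin of error for a sample proportion."""
        se = sqrt(p * (1 - p) / n)
        if population is not None:  # finite population correction
            se *= sqrt((population - n) / (population - 1))
        return z * se

    # The same n = 1000 gives nearly the same precision for very different "pots"
    for population in (10_000, 1_000_000, 100_000_000):
        print(population, round(margin_of_error(1000, population=population), 4))
    ```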

  16. 7 CFR 201.43 - Size of sample.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Size of sample. 201.43 Section 201.43 Agriculture... REGULATIONS Sampling in the Administration of the Act § 201.43 Size of sample. The following are minimum sizes..., ryegrass, bromegrass, millet, flax, rape, or seeds of similar size. (c) One pound (454 grams) of...

  17. 7 CFR 52.803 - Sample unit size.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample unit size. 52.803 Section 52.803 Agriculture... United States Standards for Grades of Frozen Red Tart Pitted Cherries Sample Unit Size § 52.803 Sample unit size. Compliance with requirements for size and the various quality factors is based on...

  18. 7 CFR 52.3757 - Standard sample unit size.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Standard sample unit size. 52.3757 Section 52.3757..., Types, Styles, and Grades § 52.3757 Standard sample unit size. Compliance with requirements for the various quality factors except “size designation” is based on the following standard sample unit size...

  19. RNAseqPS: A Web Tool for Estimating Sample Size and Power for RNAseq Experiment

    OpenAIRE

    Yan Guo; Shilin Zhao; Chung-I Li; Quanhu Sheng; Yu Shyr

    2014-01-01

    Sample size and power determination is the first step in the experimental design of a successful study. Sample size and power calculation is required for applications for National Institutes of Health (NIH) funding. Sample size and power calculation is well established for traditional biological studies such as mouse model, genome wide association study (GWAS), and microarray studies. Recent developments in high-throughput sequencing technology have allowed RNAseq to replace microarray as the...

  20. Limit theorems for extremes with random sample size

    OpenAIRE

    Silvestrov, Dmitrii S.; Teugels, Jozef L.

    1998-01-01

    This paper is devoted to the investigation of limit theorems for extremes with random sample size under general dependence-independence conditions for samples and random sample size indexes. Limit theorems of weak convergence type are obtained as well as functional limit theorems for extremal processes with random sample size indexes.

  1. Sample size for detecting differentially expressed genes in microarray experiments

    Directory of Open Access Journals (Sweden)

    Li Jiangning

    2004-11-01

    Abstract. Background: Microarray experiments are often performed with a small number of biological replicates, resulting in low statistical power for detecting differentially expressed genes and concomitant high false positive rates. While increasing the sample size can increase statistical power and decrease error rates, with too many samples valuable resources are not used efficiently. The issue of how many replicates are required in a typical experimental system needs to be addressed. Of particular interest is the difference in required sample sizes for similar experiments in inbred vs. outbred populations (e.g. mouse and rat vs. human). Results: We hypothesize that if all other factors (assay protocol, microarray platform, data pre-processing) were equal, fewer individuals would be needed for the same statistical power using inbred animals as opposed to unrelated human subjects, as genetic effects on gene expression will be removed in the inbred populations. We apply the same normalization algorithm and estimate the variance of gene expression for a variety of cDNA data sets (humans, inbred mice and rats) comparing two conditions. Using one-sample, paired-sample or two independent-sample t-tests, we calculate the sample sizes required to detect a 1.5-, 2-, and 4-fold change in expression level as a function of false positive rate, power, and the percentage of genes that have a standard deviation below a given percentile. Conclusions: Factors that affect power and sample size calculations include the variability of the population, the desired detectable differences, the power to detect the differences, and an acceptable error rate. In addition, experimental design, technical variability and data pre-processing play a role in the power of the statistical tests in microarrays. We show that the number of samples required for detecting a 2-fold change with 90% probability and a p-value of 0.01 in humans is much larger than the number of samples commonly used in
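
    A minimal sketch of the kind of per-group calculation described above, for a two-sided two-sample comparison of log2 expression values, uses the normal-approximation formula n = 2(z_{1-α/2} + z_{1-β})²σ²/δ². The standard deviations below are made-up values chosen only to contrast an inbred-like and an outbred-like population; they are not estimates from the paper.

    ```python
    from math import ceil, log2
    from scipy.stats import norm

    def n_per_group(sd, fold_change, alpha=0.01, power=0.90):
        """Samples per group to detect a given fold change on the log2 scale."""
        delta = log2(fold_change)           # effect size in log2 units
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

    # Hypothetical per-gene SD of log2 expression: 0.4 (inbred) vs 0.8 (outbred)
    print(n_per_group(sd=0.4, fold_change=2))  # few samples per group
    print(n_per_group(sd=0.8, fold_change=2))  # roughly four times as many
    ```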

  2. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by ''attributes'' involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications, are clearly described, and the process is put in a form that allows systematic generalization
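
    For the attributes-sampling stage described above, a commonly quoted textbook approximation relates sample size to detection probability: to find at least one defect with probability 1 − β when D of the N items in a stratum are defective, n ≈ N(1 − β^(1/D)). The sketch below only illustrates this generic relation; it is not the specific integrated algorithm proposed in the paper, and the numbers are hypothetical.

    ```python
    from math import ceil

    def attributes_sample_size(N, D, beta=0.05):
        """Approximate n so that P(sample contains at least one defect) >= 1 - beta,
        given D defective items among the N items in the stratum."""
        return ceil(N * (1.0 - beta ** (1.0 / D)))

    # Stratum of 300 items; assume 15 would need to be falsified to divert a goal amount
    print(attributes_sample_size(N=300, D=15, beta=0.05))  # about 55 items
    ```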

  3. Hand calculations for transport of radioactive aerosols through sampling systems.

    Science.gov (United States)

    Hogue, Mark; Thompson, Martha; Farfan, Eduardo; Hadlock, Dennis

    2014-05-01

    Workplace air monitoring programs for sampling radioactive aerosols in nuclear facilities sometimes must rely on sampling systems to move the air to a sample filter in a safe and convenient location. These systems may consist of probes, straight tubing, bends, contractions and other components. Evaluation of these systems for potential loss of radioactive aerosols is important because significant losses can occur. However, it can be very difficult to find fully described equations to model a system manually for a single particle size and even more difficult to evaluate total system efficiency for a polydispersed particle distribution. Some software methods are available, but they may not be directly applicable to the components being evaluated and they may not be completely documented or validated per current software quality assurance requirements. This paper offers a method to model radioactive aerosol transport in sampling systems that is transparent and easily updated with the most applicable models. Calculations are shown with the R Programming Language, but the method is adaptable to other scripting languages. The method has the advantage of transparency and easy verifiability. This paper shows how a set of equations from published aerosol science models may be applied to aspiration and transport efficiency of aerosols in common air sampling system components. An example application using R calculation scripts is demonstrated. The R scripts are provided as electronic attachments. PMID:24667389
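
    The overall bookkeeping the paper describes, multiplying the aspiration and transport efficiencies of the individual components and then weighting over the particle size distribution, can be sketched as below. The per-component efficiency functions here are hypothetical placeholders, not published aerosol-physics correlations; in practice each would be replaced by the appropriate model for the probe, bend, or tube in question.

    ```python
    import numpy as np

    def system_efficiency(diameters_um, activity_weights, component_models):
        """Size-resolved system efficiency = product of component efficiencies,
        activity-weighted over the particle size distribution."""
        eff = np.ones_like(diameters_um, dtype=float)
        for model in component_models:      # each model maps diameter (um) -> efficiency
            eff *= model(diameters_um)
        return float(np.sum(eff * activity_weights) / np.sum(activity_weights))

    # Hypothetical placeholder component models (NOT published correlations)
    probe_aspiration = lambda d: np.clip(1.0 - 0.02 * d, 0.0, 1.0)
    bend_penetration = lambda d: np.exp(-0.01 * d ** 2)
    tube_penetration = lambda d: np.exp(-0.005 * d)

    # Discretized activity-weighted particle size distribution (aerodynamic diameter, um)
    d_bins = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
    weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

    print(system_efficiency(d_bins, weights,
                            [probe_aspiration, bend_penetration, tube_penetration]))
    ```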

  4. Sample size for estimating the ratio of two means

    OpenAIRE

    Galeone, A; Pollastri, A

    2009-01-01

    The decision about the sample size is particularly difficult when we have to estimate the ratio of two means. This abstract presents a procedure for determining the sample size in this situation.

  5. Effect size estimates: current use, calculations, and interpretation.

    Science.gov (United States)

    Fritz, Catherine O; Morris, Peter E; Richler, Jennifer J

    2012-02-01

    The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis. PMID:21823805
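
    As a small companion to the guide described above, the sketch below computes two of the estimates the survey found most often: Cohen's d for a two-group comparison (pooled-SD version) and partial eta squared from ANOVA sums of squares. These are the standard textbook formulas with made-up example data, not code or data from the article.

    ```python
    import numpy as np

    def cohens_d(x, y):
        """Cohen's d using the pooled standard deviation."""
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

    def partial_eta_squared(ss_effect, ss_error):
        """Partial eta^2 = SS_effect / (SS_effect + SS_error)."""
        return ss_effect / (ss_effect + ss_error)

    x = np.array([5.1, 6.2, 5.8, 6.6, 5.9])
    y = np.array([4.8, 5.0, 5.4, 4.6, 5.2])
    print(round(cohens_d(x, y), 2))
    print(round(partial_eta_squared(ss_effect=24.5, ss_error=101.3), 3))
    ```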

  6. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz;

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires that the...... that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological...... models presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample sizes.

  7. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Sample size guidelines. 80.127 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements § 80.127 Sample size guidelines. In performing the... population; and (b) Sample size shall be determined using one of the following options: (1) Option...

  8. CALCULATION OF PARTICLE SIZE OF TITANIUM DIOXIDE HYDROSOL

    Directory of Open Access Journals (Sweden)

    L. M. Sliapniova

    2014-01-01

    One of the problems facing chemists who prepare disperse systems with micro- and nanoscale particles of the disperse phase is evaluating the size of the obtained particles. Formation of a hydrosol is one of the stages of obtaining nanopowders by the sol-gel method. We obtained a titanium dioxide hydrosol by hydrolysis of titanium tetrachloride in the presence of an organic solvent, with the aim of producing titanium dioxide powder. It was necessary to evaluate the size of the titanium dioxide hydrosol particles because the particle dimensions of the disperse hydrosol phase are directly related to the dispersity of the resulting powder. The size of the disperse-phase particles of the titanium dioxide hydrosol was calculated using the Rayleigh equation, and the calculated results correspond to the atomic force microscopy and X-ray crystal analysis data for the powder obtained from the hydrosol. To calculate the particle size in a disperse system, the Rayleigh equation can be used if the particle size is no more than 1/10 of the wavelength of the incident light, or the Heller equation for systems containing particles with diameters smaller than the wavelength of the incident light but larger than 1/10 of its value. A titanium dioxide hydrosol was obtained and the wavelength exponent in the Heller equation was calculated. The obtained value testifies to the high dispersity of the system and to the applicability of the Rayleigh equation for calculating the particle size of the disperse phase. The calculated disperse-phase particle size of the titanium dioxide hydrosol corresponds to the atomic force microscopy and X-ray crystal analysis data for the powder obtained from the system.

  9. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    Science.gov (United States)

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  10. Strategies for Field Sampling When Large Sample Sizes are Required

    Science.gov (United States)

    Estimates of prevalence or incidence of infection with a pathogen endemic in a fish population can be valuable information for development and evaluation of aquatic animal health management strategies. However, hundreds of unbiased samples may be required in order to accurately estimate these parame...

  11. Optimal flexible sample size design with robust power.

    Science.gov (United States)

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385

  12. Sample size and power for comparing two or more treatment groups in clinical trials.

    OpenAIRE

    Day, S. J.; Graham, D F

    1989-01-01

    Methods for determining sample size and power when comparing two groups in clinical trials are widely available. Studies comparing three or more treatments are not uncommon but are more difficult to analyse. A linear nomogram was devised to help calculate the sample size required when comparing up to five parallel groups. It may also be used retrospectively to determine the power of a study of given sample size. In two worked examples the nomogram was efficient. Although the nomogram offers o...

  13. On bootstrap sample size in extreme value theory

    NARCIS (Netherlands)

    J.L. Geluk (Jaap); L.F.M. de Haan (Laurens)

    2002-01-01

    It has been known for a long time that for bootstrapping the probability distribution of the maximum of a sample consistently, the bootstrap sample size needs to be of smaller order than the original sample size. See Jun Shao and Dongsheng Tu (1995), Ex. 3.9, p. 123. We show that the same

  14. 7 CFR 51.2341 - Sample size for grade determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample size for grade determination. 51.2341 Section..., AND STANDARDS) United States Standards for Grades of Kiwifruit § 51.2341 Sample size for grade determination. For fruit place-packed in tray pack containers, the sample shall consist of the contents of...

  15. Neutron batch size optimisation methodology for Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Highlights: • A method is suggested for improving efficiency of MC criticality calculations. • The method optimises the number of neutrons simulated per cycle. • The optimal number of neutrons per cycle depends on allocated computing time. - Abstract: We present a methodology that improves the efficiency of conventional power iteration based Monte Carlo criticality calculations by optimising the number of neutron histories simulated per criticality cycle (the so-called neutron batch size). The chosen neutron batch size affects both the rate of convergence (in computing time) and magnitude of bias in the fission source. Setting a small neutron batch size ensures a rapid simulation of criticality cycles, allowing the fission source to converge fast to its stationary state; however, at the same time, the small neutron batch size introduces a large systematic bias in the fission source. It follows that for a given allocated computing time, there is an optimal neutron batch size that balances these two effects. We approach this problem by studying the error in the cumulative fission source, i.e. the fission source combined over all simulated cycles, as all results are commonly combined over the simulated cycles. We have deduced a simplified formula for the error in the cumulative fission source, taking into account the neutron batch size, the dominance ratio of the system, the error in the initial fission source and the allocated computing time (in the form of the total number of simulated neutron histories). Knowing how the neutron batch size affects the error in the cumulative fission source allows us to find its optimal value. We demonstrate the benefits of the method on a number of numerical test calculations

  16. Parasite prevalence and sample size: misconceptions and solutions

    OpenAIRE

    Jovani, Roger; Tella, José Luis

    2006-01-01

    Parasite prevalence (the proportion of infected hosts) is a common measure used to describe parasitaemias and to unravel ecological and evolutionary factors that influence host–parasite relationships. Prevalence estimates are often based on small sample sizes because of either low abundance of the hosts or logistical problems associated with their capture or laboratory analysis. Because the accuracy of prevalence estimates is lower with small sample sizes, addressing sample size h...

  17. A computer program for sample size computations for banding studies

    Science.gov (United States)

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.

  18. Estimating optimal sampling unit sizes for satellite surveys

    Science.gov (United States)

    Hallum, C. R.; Perry, C. R., Jr.

    1984-01-01

    This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.

  19. A review of software for sample size determination.

    Science.gov (United States)

    Dattalo, Patrick

    2009-09-01

    The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities. PMID:19696082

  20. 40 CFR 761.286 - Sample size and procedure for collecting a sample.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample size and procedure for collecting a sample. 761.286 Section 761.286 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...) § 761.286 Sample size and procedure for collecting a sample. At each selected sampling location for...

  1. Sample Sizes when Using Multiple Linear Regression for Prediction

    Science.gov (United States)

    Knofczynski, Gregory T.; Mundfrom, Daniel

    2008-01-01

    When using multiple regression for prediction purposes, the issue of minimum required sample size often needs to be addressed. Using a Monte Carlo simulation, models with varying numbers of independent variables were examined and minimum sample sizes were determined for multiple scenarios at each number of independent variables. The scenarios…

  2. Minimum Sample Size Recommendations for Conducting Factor Analyses

    Science.gov (United States)

    Mundfrom, Daniel J.; Shaw, Dale G.; Ke, Tian Lu

    2005-01-01

    There is no shortage of recommendations regarding the appropriate sample size to use when conducting a factor analysis. Suggested minimums for sample size include from 3 to 20 times the number of variables and absolute ranges from 100 to over 1,000. For the most part, there is little empirical evidence to support these recommendations. This…

  3. 7 CFR 52.775 - Sample unit size.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample unit size. 52.775 Section 52.775 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... United States Standards for Grades of Canned Red Tart Pitted Cherries 1 Sample Unit Size § 52.775...

  4. Testing samples size effect on notch toughness of structural steels

    Directory of Open Access Journals (Sweden)

    B. Strnadel

    2009-10-01

    In this paper, the notch toughness of full-scale (FS) test samples is assessed from the upper-bound toughness of sub-sized (SS) samples of structural carbon-manganese steels. The relations proposed by Schindler (2000) are in good agreement with experimental data. The empirical proportionality constant q* = 0.54 between the notch toughness of full-scale and sub-sized samples of the studied structural steels agrees well with the theoretically estimated constant q* = 0.50–0.54. More precise knowledge of the effect of test sample size on the temperature dependence of notch toughness requires an analysis of the scatter in the experimental data.

  5. The study on differentiated particle size sampling technology of aerosols

    International Nuclear Information System (INIS)

    This article introduces the basic principle of differentiated particle-size sampling of aerosols. The technique was used in an experimental study of the particle size distributions of uranium aerosols and of radon and its daughters. The results showed that 76.4% of the radon-daughter aerosol particles were smaller than 0.43 μm, and 96.3% were smaller than 1 μm. Under the specific conditions studied, 94% of the uranium aerosol particles were larger than 4.7 μm and 72% were larger than 10 μm. Based on these results, we designed new sampling equipment with a cut size of 1 μm to collect aerosol samples, and used it in separation-efficiency experiments with 241Am aerosols. The experiments showed that the separation efficiency for 241Am aerosols reaches 94.2%. Thus, using differentiated particle-size sampling to collect plutonium aerosol samples can reduce the effect of natural-background aerosols during sampling. (authors)

  6. Sample size in qualitative interview studies: guided by information power

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies like in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower the number of participants needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning...

  7. Comparison of Bayesian Sample Size Criteria: ACC, ALC, and WOC

    OpenAIRE

    Cao, Jing; Lee, J. Jack; Alber, Susan

    2009-01-01

    A challenge for implementing performance based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC) which controls the coverage rate of fixed length credible intervals over the predictive distribution of the data, the average length criterion (ALC) which controls the length of credible intervals with a fixed coverage rate, and the worst outcome criterion (WOC) which ensures the des...

  8. The Sample Size Needed for the Trimmed "t" Test when One Group Size Is Fixed

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2009-01-01

    The sample size determination is an important issue for planning research. However, limitations in size have seldom been discussed in the literature. Thus, how to allocate participants into different treatment groups to achieve the desired power is a practical issue that still needs to be addressed when one group size is fixed. The authors focused…

  9. Sample Size Determination: A Comparison of Attribute, Continuous Variable, and Cell Size Methods.

    Science.gov (United States)

    Clark, Philip M.

    1984-01-01

    Describes three methods of sample size determination, each having its use in the investigation of social science problems: the Attribute method, the Continuous Variable method, and Galtung's Cell Size method. Statistical generalization, benefits of the cell size method (ease of use, trivariate analysis, and trichotomized variables), and choice of method are…

  10. An adaptive sampling scheme for deep-penetration calculation

    International Nuclear Information System (INIS)

    As is well known, the deep-penetration problem has been one of the important and difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive Monte Carlo method that uses the emission point as a sampling station for shielding calculations is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and may, to some degree, overcome the under-estimation problem that easily arises in deep-penetration calculations.

  11. Optimal and maximin sample sizes for multicentre cost-effectiveness trials.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2015-10-01

    This paper deals with the optimal sample sizes for a multicentre trial in which the cost-effectiveness of two treatments in terms of net monetary benefit is studied. A bivariate random-effects model, with the treatment-by-centre interaction effect being random and the main effect of centres fixed or random, is assumed to describe both costs and effects. The optimal sample sizes concern the number of centres and the number of individuals per centre in each of the treatment conditions. These numbers maximize the efficiency or power for given research costs or minimize the research costs at a desired level of efficiency or power. Information on model parameters and sampling costs is required to calculate these optimal sample sizes. In the case of limited information on relevant model parameters, sample size formulas are derived for so-called maximin sample sizes, which guarantee a power level at the lowest study costs. Four different maximin sample sizes are derived based on the signs of the lower bounds of two model parameters, with one case being the worst of the four. We numerically evaluate the efficiency of this worst case rather than of the others. Finally, an expression is derived for calculating optimal and maximin sample sizes that yield sufficient power to test the cost-effectiveness of two treatments. PMID:25656551

  12. THE INFLUENCE OF SAMPLE SIZE AND SELECTION OF FINANCIAL RATIOS IN BANKRUPTCY MODEL ACCURACY

    OpenAIRE

    Yusuf Ali Al-Hroot

    2015-01-01

    This paper aims to clarify the influence of changing both the sample size and the selection of financial ratios on the accuracy of bankruptcy models for companies listed in the industrial sector of Jordan. The study sample is divided into three sub-samples of 6, 10 and 14 companies respectively; each sample is composed of bankrupt companies and solvent ones during the period from 2000 to 2013. Financial ratios were calculated and categorized into two groups. The first group includes: liquidity,...

  13. SNS Sample Activation Calculator Flux Recommendations and Validation

    Energy Technology Data Exchange (ETDEWEB)

    McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)

    2015-02-01

    The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.

  14. Size variation in samples of fossil and recent murid teeth

    NARCIS (Netherlands)

    Freudenthal, M.; Martín Suárez, E.

    1990-01-01

    The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed and compared with those obtained for the Cricetidae.

  15. A sample size planning approach that considers both statistical significance and clinical significance

    OpenAIRE

    Jia, Bin; Lynn, Henry S

    2015-01-01

    Background The CONSORT statement requires clinical trials to report confidence intervals, which help to assess the precision and clinical importance of the treatment effect. Conventional sample size calculations for clinical trials, however, only consider issues of statistical significance (that is, significance level and power). Method A more consistent approach is proposed whereby sample size planning also incorporates information on clinical significance as indicated by the boundaries of t...

  16. EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION

    Directory of Open Access Journals (Sweden)

    André Carlos Silva

    2012-12-01

    Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, without any moving parts, and it is capable of separating granular material in pulp. The particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most widely used model for the hydrocyclone corrected cut size is the one proposed by Plitt. Over the years, many revisions and corrections to Plitt's model have been proposed. The present paper presents a modification of the constant in Plitt's model, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation equal to 88.2% between the experimental and calculated corrected cut sizes, while the correlation obtained using Plitt's model is 11.5%.
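
    The modified constant above was obtained by exponential regression of simulated corrected cut sizes for the three geometries. The sketch below only illustrates that regression step on made-up data; the functional form y = a·exp(b·x), the variables, and the numbers are placeholders, not Plitt's model or the paper's data set.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential_model(x, a, b):
        """Generic exponential regression model y = a * exp(b * x)."""
        return a * np.exp(b * x)

    # Hypothetical pairs of (dimensionless operating parameter, simulated corrected cut size in um)
    x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
    y = np.array([12.0, 17.5, 26.0, 38.5, 56.0])

    (a, b), _ = curve_fit(exponential_model, x, y, p0=[10.0, 0.5])
    print(f"fitted constants: a = {a:.2f}, b = {b:.2f}")
    ```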

  17. Two-stage chain sampling inspection plans with different sample sizes in the two stages

    Science.gov (United States)

    Stephens, K. S.; Dodge, H. F.

    1976-01-01

    A further generalization of the family of 'two-stage' chain sampling inspection plans is developed - viz, the use of different sample sizes in the two stages. Evaluation of the operating characteristics is accomplished by the Markov chain approach of the earlier work, modified to account for the different sample sizes. Markov chains for a number of plans are illustrated and several algebraic solutions are developed. Since these plans involve a variable amount of sampling, an evaluation of the average sampling number (ASN) is developed. A number of OC curves and ASN curves are presented. Some comparisons with plans having only one sample size are presented and indicate that improved discrimination is achieved by the two-sample-size plans.

  18. Aircraft studies of size-dependent aerosol sampling through inlets

    Science.gov (United States)

    Porter, J. N.; Clarke, A. D.; Ferry, G.; Pueschel, R. F.

    1992-01-01

    Representative measurement of aerosol from aircraft-aspirated systems requires special efforts in order to maintain near isokinetic sampling conditions, estimate aerosol losses in the sample system, and obtain a measurement of sufficient duration to be statistically significant for all sizes of interest. This last point is especially critical for aircraft measurements which typically require fast response times while sampling in clean remote regions. This paper presents size-resolved tests, intercomparisons, and analysis of aerosol inlet performance as determined by a custom laser optical particle counter. Measurements discussed here took place during the Global Backscatter Experiment (1988-1989) and the Central Pacific Atmospheric Chemistry Experiment (1988). System configurations are discussed including (1) nozzle design and performance, (2) system transmission efficiency, (3) nonadiabatic effects in the sample line and its effect on the sample-line relative humidity, and (4) the use and calibration of a virtual impactor.

  19. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  20. Sample size considerations for livestock movement network data.

    Science.gov (United States)

    Pfeiffer, Caitlin N; Firestone, Simon M; Campbell, Angus J D; Larsen, John W A; Stevenson, Mark A

    2015-12-01

    The movement of animals between farms contributes to infectious disease spread in production animal populations, and is increasingly investigated with social network analysis methods. Tangible outcomes of this work include the identification of high-risk premises for targeting surveillance or control programs. However, knowledge of the effect of sampling or incomplete network enumeration on these studies is limited. In this study, a simulation algorithm is presented that provides an estimate of required sampling proportions based on predicted network size, density and degree value distribution. The algorithm may be applied a priori to ensure network analyses based on sampled or incomplete data provide population estimates of known precision. Results demonstrate that, for network degree measures, sample size requirements vary with sampling method. The repeatability of the algorithm output under constant network and sampling criteria was found to be consistent for networks with at least 1000 nodes (in this case, farms). Where simulated networks can be constructed to closely mimic the true network in a target population, this algorithm provides a straightforward approach to determining sample size under a given sampling procedure for a network measure of interest. It can be used to tailor study designs of known precision, for investigating specific livestock movement networks and their impact on disease dissemination within populations. PMID:26276397

  1. 40 CFR 600.208-77 - Sample calculation.

    Science.gov (United States)

    2010-07-01

    ... 600.208-77 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.208-77 Sample...

  2. On an Approach to Bayesian Sample Sizing in Clinical Trials

    CERN Document Server

    Muirhead, Robb J

    2012-01-01

    This paper explores an approach to Bayesian sample size determination in clinical trials. The approach falls into the category of what is often called "proper Bayesian", in that it does not mix frequentist concepts with Bayesian ones. A criterion for a "successful trial" is defined in terms of a posterior probability, its probability is assessed using the marginal distribution of the data, and this probability forms the basis for choosing sample sizes. We illustrate with a standard problem in clinical trials, that of establishing superiority of a new drug over a control.
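
    The general recipe sketched above, picking n so that a "successful trial" (a posterior probability exceeding a threshold) is likely enough under the marginal distribution of the data, can be simulated directly. The sketch below assumes a normal mean with known variance and a conjugate normal prior; the prior, threshold, and success criterion are illustrative choices, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def prob_success(n, prior_mean=0.3, prior_sd=0.2, sigma=1.0,
                     post_threshold=0.975, n_sims=20_000):
        """Probability of a 'successful trial' under the marginal (prior-predictive)
        distribution of the data. Success: posterior P(effect > 0) > post_threshold."""
        theta = rng.normal(prior_mean, prior_sd, n_sims)     # true effects drawn from the prior
        xbar = rng.normal(theta, sigma / np.sqrt(n))         # observed mean effect per simulated trial
        post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)  # conjugate normal-normal update
        post_mean = post_var * (prior_mean / prior_sd**2 + n * xbar / sigma**2)
        p_effect_positive = 1.0 - norm.cdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
        return float(np.mean(p_effect_positive > post_threshold))

    # Choose the smallest n giving, say, an 80% chance of a successful trial
    for n in (50, 100, 200, 400):
        print(n, round(prob_success(n), 3))
    ```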

  3. Approximate sample sizes required to estimate length distributions

    Science.gov (United States)

    Miranda, L.E.

    2007-01-01

    The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. ?? Copyright by the American Fisheries Society 2007.
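
    A stripped-down version of the bootstrap logic described above finds the smallest sample size whose resampled mean length falls within 10% of the reference value in at least 80% of resamples. The reference length distribution below is simulated rather than taken from real fish data, and the gamma parameters are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Stand-in reference length distribution (mm); a real analysis would use survey data
    reference_lengths = rng.gamma(shape=2.0, scale=150.0, size=5000)
    true_mean = reference_lengths.mean()

    def required_n(target_error=0.10, confidence=0.80, n_boot=2000):
        """Smallest n whose bootstrap means fall within target_error of the
        reference mean with probability >= confidence."""
        for n in range(25, 2001, 25):
            boot_means = np.array([
                rng.choice(reference_lengths, size=n, replace=True).mean()
                for _ in range(n_boot)
            ])
            within = np.abs(boot_means - true_mean) / true_mean <= target_error
            if within.mean() >= confidence:
                return n
        return None

    print(required_n())
    ```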

  4. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO2-powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  5. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    Science.gov (United States)

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-11-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate from 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation.
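
    A hedged illustration of the sample-size sensitivity described above: fit a power-law allometry by ordinary least squares in log-log space to random subsets of synthetic height-crown data and watch the spread of the fitted exponent shrink with sample size. The functional form, parameter values, and lognormal scatter are assumptions for the sketch, not the paper's fitted values.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic "truth": height = a * crown_radius^b with lognormal scatter (illustrative values).
      a_true, b_true, sigma_log = 4.0, 0.8, 0.3
      crown = rng.uniform(0.5, 8.0, 200000)
      height = a_true * crown**b_true * rng.lognormal(0.0, sigma_log, crown.size)

      def fit_allometry(n, n_reps=500):
          """Spread of fitted exponents b for random samples of size n (OLS in log-log space)."""
          b_hat = []
          for _ in range(n_reps):
              idx = rng.choice(crown.size, size=n, replace=False)
              slope, intercept = np.polyfit(np.log(crown[idx]), np.log(height[idx]), 1)
              b_hat.append(slope)
          return np.mean(b_hat), np.std(b_hat)

      for n in (30, 100, 1000, 10000):
          mean_b, sd_b = fit_allometry(n)
          print(f"n={n:6d}  mean b={mean_b:.3f}  sd={sd_b:.3f}  (true b={b_true})")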

  6. Sample Size Bias in Judgments of Perceptual Averages

    Science.gov (United States)

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  7. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    Science.gov (United States)

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  8. CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc

    International Nuclear Information System (INIS)

    CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code CAVRZnrc and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor Pcel in high-energy photon and electron beams. Current dosimetry protocols base the value of Pcel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on Pcel, much lower than those previously published. The current values of Pcel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol

  9. Surprise Calculator: Estimating relative entropy and Surprise between samples

    Science.gov (United States)

    Seehars, Sebastian

    2016-05-01

    The Surprise is a measure for consistency between posterior distributions and operates in parameter space. It can be used to analyze either the compatibility of separately analyzed posteriors from two datasets, or the posteriors from a Bayesian update. The Surprise Calculator estimates relative entropy and Surprise between two samples, assuming they are Gaussian. The software requires the R package CompQuadForm to estimate the significance of the Surprise, and rpy2 to interface R with Python.

  10. Effects of sample size on KERNEL home range estimates

    Science.gov (United States)

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.

  11. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  12. An upgraded version of an importance sampling algorithm for large scale shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bianco, D; Andreozzi, F; Lo Iudice, N; Porrino, A [Universita di Napoli Federico II, Dipartimento Scienze Fisiche, Monte S. Angelo, via Cintia, 80126 Napoli (Italy); S, Dimitrova, E-mail: loiudice@na.infn.i [Institute of Nuclear Research and Nuclear Energy, Sofia (Bulgaria)

    2010-01-01

    An importance sampling iterative algorithm, developed a few years ago for generating exact eigensolutions of large matrices, is upgraded so as to allow large scale shell model calculations in the uncoupled m-scheme. By exploiting the sparsity properties of the Hamiltonian matrix and projecting out effectively the good angular momentum, the new importance sampling makes it possible to reduce drastically the sizes of the matrices while keeping full control of the accuracy of the eigensolutions. Illustrative numerical examples are presented.

  13. MCNPX calculation of the reactivity worth of actinides OSMOSE samples

    International Nuclear Information System (INIS)

    Highlights: • Improvement of neutronic predictions through reactivity worth calculations. • The MCNPX code with the nuclear data library ENDF/B-VII has been used for the calculations. • The results show a good agreement within a relative error of less than ±8.2%. - Abstract: Improving neutronic prediction is a very important step in designing advanced reactors and reactor fuel. There are three main critical reactor facilities at CEA Cadarache: EOLE, MINERVE and MASURCA. The MINERVE reactor is used within the framework of what is known as the OSMOSE project. The OSMOSE program aims at improving neutronic predictions of advanced nuclear fuels through measurements in the MINERVE reactor on samples containing separated actinides. In the present work, the reactivity worth of the OSMOSE samples has been calculated using the Monte Carlo transport code MCNPX with the recent nuclear cross-section data library ENDF/B-VII. The calculations are applied to the three core configurations R1-UO2, R2-UO2 and R1-MOX. The present work is performed to assess the degree of validity of the previously obtained results of the REBUS and DRAGON codes, and the experimental results are also included for the sake of comparison. The comparison between the previously calculated values using DRAGON and the present results for the effective multiplication factor keff shows a deviation of less than ±0.3% for the three core configurations. Furthermore, the reactivity worth results of the present work show a good agreement with the experimental results, within a relative error of less than ±8.2%

  14. Sample Size in Clinical Cardioprotection Trials Using Myocardial Salvage Index, Infarct Size, or Biochemical Markers as Endpoint

    DEFF Research Database (Denmark)

    Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan

    2016-01-01

    biochemical markers in clinical cardioprotection trials and how scan day affect sample size. METHODS AND RESULTS: Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB......) levels were assessed in CHILL-MI patients (n=50). Utilizing distribution of these variables, 100 000 clinical trials were simulated for calculation of sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using...... MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If average CMR scan day between treatment and control arms differed by 1 day, sample size needs to be increased by 54% (77 vs 50) to avoid...

  15. PLOT SIZE AND APPROPRIATE SAMPLE SIZE TO STUDY NATURAL REGENERATION IN AMAZONIAN FLOODPLAIN FOREST

    Directory of Open Access Journals (Sweden)

    João Ricardo Vasconcellos Gama

    2001-01-01

    ABSTRACT: The aim of this study was to determine the optimum plot size as well as the appropriate sample size in order to provide an accurate sampling of natural regeneration surveys in high floodplain forests, low floodplain forests and in floodplain forests without stratification in the Amazonian estuary. Data were obtained at Exportadora de Madeira do Pará Ltda. – EMAPA forestlands, located in Afuá County, State of Pará. Based on the results, the following plot sizes were recommended: 70 m2 for SC1 (0.3 m ≤ h < 1.5 m), 80 m2 for SC2 (h ≥ 1.5 m to DAP < 5.0 cm), 90 m2 for SC3 (5.0 cm ≤ DAP < 15.0 cm) and 70 m2 for ASP (h ≥ 0.3 m to DAP < 15.0 cm). Considering these optimum plot sizes, it is possible to obtain a representative sampling of the floristic composition when using 19 sub-plots in high floodplain, 14 sub-plots in low floodplain, and 19 sub-plots in the forest without stratification to survey the species of SC1 and the species of all sampled population (ASP), while 39 sub-plots are needed for sampling the natural regeneration species in SC2 and SC3.

  16. Propagation of uncertainty in system parameters of a LWR model by sampling MCNPX calculations - Burnup analysis

    International Nuclear Information System (INIS)

    For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best estimate calculation that has been replacing the conservative model calculations as the computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95. percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. It was showed the capacity of the sampling-based method for burnup when the calculations sample size is optimized and many parameter uncertainties are investigated together, in the same input. Particularly it was shown that during the burnup, the variances when considering all the parameters uncertainties is equivalent to the sum of variances if the parameter uncertainties are sampled separately
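
    The Wilks-formula step mentioned above is a standard order-statistics calculation; a short sketch (not the authors' code) that reproduces the commonly cited first-order sample sizes, 59 for a one-sided 95%/95% criterion and 93 for a two-sided 95%/95% tolerance interval, is given below.

      def wilks_one_sided(gamma=0.95, beta=0.95):
          """Smallest n such that the sample maximum bounds the gamma-quantile with confidence beta."""
          n = 1
          while 1.0 - gamma**n < beta:
              n += 1
          return n

      def wilks_two_sided(gamma=0.95, beta=0.95):
          """Smallest n such that (min, max) is a two-sided gamma-content tolerance interval at confidence beta."""
          n = 2
          while 1.0 - n * gamma**(n - 1) + (n - 1) * gamma**n < beta:
              n += 1
          return n

      print(wilks_one_sided())  # -> 59
      print(wilks_two_sided())  # -> 93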

  17. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Total uncertainty budget evaluation on the determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  18. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of chemical content of the initial sample, using the example of a whitebark pine population. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Projekat Ministarstva nauke Republike Srbije, br. OI-173011, br. TR-37002 i br. III-43007]
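
    The "how many trees for a given precision of the mean" question in this record is usually answered with the classical relation n ≥ (t·s/E)², iterated because the t quantile depends on n. The sketch below implements that iteration; the pilot standard deviation and allowable error are illustrative numbers, not the terpene data.

      import math
      from scipy import stats

      def sample_size_for_mean(sd_pilot, abs_error, conf=0.95, max_iter=100):
          """Sample size so the two-sided CI half-width is at most abs_error.
          Fixed-point iteration of n = ceil((t_{n-1} * s / E)^2), since t depends on n."""
          n = 30                                   # normal-approximation starting guess
          for _ in range(max_iter):
              t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
              n_new = max(2, math.ceil((t * sd_pilot / abs_error) ** 2))
              if n_new == n:
                  break
              n = n_new
          return n

      # Illustrative pilot statistics (not the paper's terpene data):
      print(sample_size_for_mean(sd_pilot=2.5, abs_error=1.0))   # -> about 27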

  19. The Effects of Sample Size on Expected Value, Variance and Fraser Efficiency for Nonparametric Independent Two Sample Tests

    Directory of Open Access Journals (Sweden)

    Ismet DOGAN

    2015-10-01

    Objective: Choosing the most efficient statistical test is one of the essential problems of statistics. Asymptotic relative efficiency is a notion which makes it possible to carry out, in large samples, a quantitative comparison of two different tests used for testing the same statistical hypothesis. The notion of the asymptotic efficiency of tests is more complicated than that of the asymptotic efficiency of estimates. This paper discusses the effect of sample size on the expected values and variances of non-parametric tests for two independent samples and determines the most effective test for different sample sizes using the Fraser efficiency value. Material and Methods: Since calculating the power value when comparing tests is often impractical, using the asymptotic relative efficiency value is favorable. Asymptotic relative efficiency is an indispensable technique for comparing and ordering statistical tests in large samples. It is especially useful in nonparametric statistics, where there exist numerous heuristic tests such as the linear rank tests. In this study, the sample size is set to 2 ≤ n ≤ 50. Results: In both balanced and unbalanced cases, it is found that, as the sample size increases, the expected values and variances of all the tests discussed in this paper increase as well. Additionally, considering the Fraser efficiency, the Mann-Whitney U test is found to be the most efficient test among the non-parametric tests used to compare two independent samples, regardless of their sizes. Conclusion: According to Fraser efficiency, the Mann-Whitney U test is the most efficient test.

  20. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields a statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision and 95% confidence level. Sample sizes at the 90% and 99% confidence levels can be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. The nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
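
    The nomogram is reported to be built from a standard published formula; a sketch of the usual precision-based calculation for sensitivity and specificity (a Buderer-type formula, assumed here rather than taken verbatim from the article) is shown below. It also checks that the quoted 0.70 and 1.75 multipliers are close to the ratios of squared normal quantiles for 90% and 99% versus 95% confidence (about 0.70 and 1.73).

      import math
      from scipy.stats import norm

      def n_for_sensitivity(se, precision, prevalence, conf=0.95):
          """Total subjects needed to estimate sensitivity se within +/- precision."""
          z = norm.ppf(1 - (1 - conf) / 2)
          return math.ceil(z**2 * se * (1 - se) / (precision**2 * prevalence))

      def n_for_specificity(sp, precision, prevalence, conf=0.95):
          """Total subjects needed to estimate specificity sp within +/- precision."""
          z = norm.ppf(1 - (1 - conf) / 2)
          return math.ceil(z**2 * sp * (1 - sp) / (precision**2 * (1 - prevalence)))

      # Illustrative inputs: anticipated Se = 0.90, absolute precision 0.05, prevalence 0.20.
      print(n_for_sensitivity(0.90, 0.05, 0.20))
      print(n_for_specificity(0.85, 0.05, 0.20))

      # Multipliers for 90% and 99% confidence relative to 95%: (z_90/z_95)^2 and (z_99/z_95)^2
      z95 = norm.ppf(0.975)
      print(round((norm.ppf(0.95) / z95) ** 2, 2), round((norm.ppf(0.995) / z95) ** 2, 2))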

  1. Study for particulate sampling, sizing and analysis for composition

    Energy Technology Data Exchange (ETDEWEB)

    King, A.M.; Jones, A.M. [IMC Technical Services Ltd., Burton-on-Trent (United Kingdom); Dorling, S.R. [University of East Anglia (United Kingdom); Merefield, J.R.; Stone, I.M. [Exeter Univ. (United Kingdom); Hall, K.; Garner, G.V.; Hall, P.A. [Hall Analytical Labs., Ltd. (United Kingdom); Stokes, B. [CRE Group Ltd. (United Kingdom)

    1999-07-01

    This report summarises the findings of a study investigating the origin of particulate matter by analysis of the size distribution and composition of particulates in rural, semi-rural and urban areas of the UK. Details are given of the sampling locations; the sampling, monitoring, and inorganic and organic analyses; and the review of archive material. The analysis carried out at St Margaret's/Stoke Ferry, comparisons of data with other locations, and the composition of ambient airborne matter are discussed, and recommendations are given. Results of PM2.5/PM10 samples collected at St Margaret's and Stoke Ferry in 1998, and back trajectories for five sites, are considered in appendices.

  2. MetSizeR: selecting the optimal sample size for metabolomic studies using an analysis based approach

    OpenAIRE

    Nyamundanda, Gift; Gormley, Isobel Claire; Fan, Yue; Gallagher, William M; Brennan, Lorraine

    2013-01-01

    Background: Determining sample sizes for metabolomic experiments is important but due to the complexity of these experiments, there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, currently existing sample size estimation approaches which rely on pilot data can not be applied. Results: In this article, an analysis based approach called MetSizeR is developed to estimate sample size for metabolomic experime...

  3. Multivariate methods and small sample size: combining with small effect size

    OpenAIRE

    Budaev, Dr. Sergey V.

    2010-01-01

    This manuscript is the author's response to: "Dochtermann, N.A. & Jenkins, S.H. Multivariate methods and small sample sizes, Ethology, 117, 95-101." and accompanies this paper: "Budaev, S. Using principal components and factor analysis in animal behaviour research: Caveats and guidelines. Ethology, 116, 472-480"

  4. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  5. It's in the Sample: The Effects of Sample Size and Sample Diversity on the Breadth of Inductive Generalization

    Science.gov (United States)

    Lawson, Chris A.; Fisher, Anna V.

    2011-01-01

    Developmental studies have provided mixed evidence with regard to the question of whether children consider sample size and sample diversity in their inductive generalizations. Results from four experiments with 105 undergraduates, 105 school-age children (M = 7.2 years), and 105 preschoolers (M = 4.9 years) showed that preschoolers made a higher…

  6. Análise do emprego do cálculo amostral e do erro do método em pesquisas científicas publicadas na literatura ortodôntica nacional e internacional Analysis of the use of sample size calculation and error of method in researches published in Brazilian and international orthodontic journals

    Directory of Open Access Journals (Sweden)

    David Normando

    2011-12-01

    INTRODUCTION: Adequate sizing of the study sample and appropriate analysis of the error of the method are important steps in validating the data obtained in a scientific study, in addition to the ethical and economic issues. OBJECTIVE: To evaluate, quantitatively, how often researchers in orthodontic science have employed sample size calculation and analysis of the error of the method in research published in Brazil and in the United States. METHODS: Two important journals, according to Capes (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior), were analyzed: the Revista Dental Press de Ortodontia e Ortopedia Facial (Dental Press) and the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO). Only articles published between 2005 and 2008 were analyzed. RESULTS: Most of the studies published in both journals employ some form of analysis of the error of the method when this methodology can be applied. However, only a very small number of the articles published in these journals present any description of how the studied samples were sized. This proportion, already small (21.1%) in the journal edited in the United States (AJO-DO), is significantly lower (p=0.008) in the journal edited in Brazil (Dental Press) (3.9%). CONCLUSION: Researchers and the editorial boards of both journals should devote greater attention to the errors inherent in the absence of such analyses in scientific research, in particular the errors inherent in inadequate sample sizing.

  7. Load calculations of radiant cooling systems for sizing the plant

    DEFF Research Database (Denmark)

    Bourdakis, Eleftherios; Kazanci, Ongun Berk; Olesen, Bjarne W.

    2015-01-01

    The aim of this study was, by using a building simulation software, to prove that a radiant cooling system should not be sized based on the maximum cooling load but at a lower value. For that reason six radiant cooling models were simulated with two control principles using 100%, 70% and 50% of t...

  8. Ultrasonic attenuation model for measuring particle size and inverse calculation of particle size distribution in mineral slurries

    Institute of Scientific and Technical Information of China (English)

    HE Gui-chun; NI Wen

    2006-01-01

    Based on various ultrasonic loss mechanisms, a formula for the cumulative mass percentage of minerals with different particle sizes was derived, through which the particle size distribution was integrated into an ultrasonic attenuation model. Correlations between the ultrasonic attenuation, the pulp density and the particle size were then obtained. The derived model was combined with experiments and analysis of the experimental data to determine the inverse model relating the ultrasonic attenuation coefficient to the size distribution. Finally, a genetic algorithm was applied as the optimization method for the inverse estimation of the particle size distribution. The results of the inverse calculation show that the measurement precision was high.

  9. Sample size for estimating average trunk diameter and plant height in eucalyptus hybrids

    Directory of Open Access Journals (Sweden)

    Alberto Cargnelutti Filho

    2016-01-01

    ABSTRACT: In eucalyptus crops, it is important to determine the number of plants that need to be evaluated for a reliable inference of growth. The aim of this study was to determine the sample size needed to estimate the average trunk diameter at breast height and plant height of inter-specific eucalyptus hybrids. Trunk diameter at breast height at three (DBH3) and seven years (DBH7) of age and tree height at seven years (H7) of age were evaluated in 6,694 plants of twelve inter-specific hybrids. The statistics minimum, maximum, mean, variance, standard deviation, standard error, and coefficient of variation were calculated. The hypothesis of variance homogeneity was tested. The sample size was determined by resampling with replacement, using 10,000 resamples. There was an increase in the required sample size from DBH3 to H7 and DBH7. A sample size of 16, 59 and 31 plants is adequate to estimate the DBH3, DBH7 and H7 means, respectively, of inter-specific eucalyptus hybrids, with a 95% confidence interval amplitude equal to 20% of the estimated mean.

  10. Nonparametric Sample Size Estimation for Sensitivity and Specificity with Multiple Observations per Subject

    OpenAIRE

    Hu, Fan; Schucany, William R.; Ahn, Chul

    2010-01-01

    We propose a sample size calculation approach for the estimation of sensitivity and specificity of diagnostic tests with multiple observations per subjects. Many diagnostic tests such as diagnostic imaging or periodontal tests are characterized by the presence of multiple observations for each subject. The number of observations frequently varies among subjects in diagnostic imaging experiments or periodontal studies. Nonparametric statistical methods for the analysis of clustered binary data...

  11. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    Science.gov (United States)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by

  12. Automated sampling assessment for molecular simulations using the effective sample size

    CERN Document Server

    Zhang, Xin; Zuckerman, Daniel M

    2010-01-01

    To quantify the progress in development of algorithms and forcefields used in molecular simulations, a method for the assessment of the sampling quality is needed. We propose a general method to assess the sampling quality through the estimation of the number of independent samples obtained from molecular simulations. This method is applicable to both dynamic and nondynamic methods and utilizes the variance in the populations of physical states to determine the effective sample size (ESS). We test the correctness and robustness of our procedure in a variety of systems: a two-state toy model, all-atom butane, coarse-grained calmodulin, all-atom dileucine and Met-enkephalin. We also introduce an automated procedure to obtain approximate physical states from dynamic trajectories: this procedure allows for sample-size estimation for systems for which physical states are not known in advance.

  13. 7 CFR 51.2838 - Samples for grade and size determination.

    Science.gov (United States)

    2010-01-01

    ... or Jumbo size or larger the package shall be the sample. When individual packages contain less than... 7 Agriculture 2 2010-01-01 2010-01-01 false Samples for grade and size determination. 51.2838... Creole Types) Samples for Grade and Size Determination § 51.2838 Samples for grade and size...

  14. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
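
    The simplest calculation of this type, sketched below under stated assumptions rather than reproducing the report's formulas, asks how many random samples are needed so that all-negative results give a chosen confidence that no more than a given fraction of locations is contaminated, allowing for a per-sample false negative rate.

      import math

      def n_random_samples(conf, frac_contaminated, fnr=0.0):
          """Random samples needed so that, if a fraction frac_contaminated of locations were
          contaminated, at least one positive would be observed with probability conf,
          allowing a per-sample false negative rate fnr."""
          p_detect_one = frac_contaminated * (1.0 - fnr)
          return math.ceil(math.log(1.0 - conf) / math.log(1.0 - p_detect_one))

      print(n_random_samples(0.95, 0.01))           # 1% contaminated, perfect assay -> 299
      print(n_random_samples(0.95, 0.01, fnr=0.1))  # same, with a 10% false negative rate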

  15. GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.

    2013-11-12

    This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1g cm-3). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than based on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.

  16. Expected Sample Size Savings from Curtailed Procedures for the $t$-Test and Hotelling's $T^2$

    OpenAIRE

    Herrmann, Nira; Szatrowski, Ted H.

    1980-01-01

    Brown, Cohen and Strawderman propose curtailed procedures for the $t$-test and Hotelling's $T^2$. In this paper we present the exact forms of these procedures and examine the expected sample size savings under the null hypothesis. The sample size savings can be bounded by a constant which is independent of the sample size. Tables are given for the expected sample size savings and maximum sample size saving under the null hypothesis for a range of significance levels $(\\alpha)$, dimensions $(p...

  17. Irradiation induced dimensional changes in graphite: The influence of sample size

    International Nuclear Information System (INIS)

    Highlights: ► Dimensional changes in irradiated anisotropic polycrystalline GR-280 graphite. ► We propose the model of anisotropic domains changing their shape under irradiation. ► Disorientation of domain structure explains observed dimensional changes. ► Macro-graphite deformation is related to shape-changes in finite size samples. - Abstract: Dimensional changes in irradiated anisotropic polycrystalline GR-280 graphite samples as measured in the parallel and perpendicular directions of extrusion revealed a mismatch between volume changes measured directly and those calculated using the generally accepted methodology based on length change measurements only. To explain this observation a model is proposed based on polycrystalline substructural elements – domains. Those domains are anisotropic, have different amplitudes of shape-changes with respect to the sample as a whole and are randomly orientated relative to the sample axes of symmetry. This domain model can explain the mismatch observed in experimental data. It is shown that the disoriented domain structure leads to the development of irradiation-induced stresses and to the dependence of the dimensional changes on the sizes of graphite samples chosen for the irradiation experiment. The authors derive the relationship between shape-changes in the finite size samples and the actual shape-changes observable on the macro-scale in irradiated graphite.

  18. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David

    2013-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4-8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and type-I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of 8 fish could detect an increase of ∼ 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of ∼ 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2 this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of ∼ 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated by increased precision of composites for estimating mean
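
    A hedged sketch of the parametric-bootstrap power calculation described above: draw gamma-distributed Se concentrations whose variance follows an assumed mean-to-variance relationship, apply a one-sided one-sample t-test against the management threshold, and count rejections. The mean-to-variance function and all numbers are placeholders, not the West Virginia data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      def power_above_threshold(true_mean, threshold, n_fish, alpha=0.05,
                                var_of_mean=lambda m: 0.25 * m**2, n_sim=5000):
          """Probability a one-sided t-test concludes mean Se > threshold, for gamma-distributed
          concentrations with an assumed (illustrative) mean-to-variance relationship."""
          var = var_of_mean(true_mean)
          shape, scale = true_mean**2 / var, var / true_mean     # gamma parameterization
          rejections = 0
          for _ in range(n_sim):
              x = rng.gamma(shape, scale, n_fish)
              t, p_two_sided = stats.ttest_1samp(x, popmean=threshold)
              if t > 0 and p_two_sided / 2 < alpha:              # one-sided decision
                  rejections += 1
          return rejections / n_sim

      for n in (4, 8, 16, 32):
          print(n, round(power_above_threshold(true_mean=5.0, threshold=4.0, n_fish=n), 3))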

  19. Calculated Grain Size-Dependent Vacancy Supersaturation and its Effect on Void Formation

    DEFF Research Database (Denmark)

    Singh, Bachu Narain; Foreman, A. J. E.

    1974-01-01

    In order to study the effect of grain size on void formation during high-energy electron irradiations, the steady-state point defect concentration and vacancy supersaturation profiles have been calculated for three-dimensional spherical grains up to three microns in size. In the calculations of...... vacancy supersaturation as a function of grain size, the effects of internal sink density and the dislocation preference for interstitial attraction have been included. The computations show that the level of vacancy supersaturation achieved in a grain decreases with decreasing grain size. The grain size...

  20. Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests

    Science.gov (United States)

    Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas ? 25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences of Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits

  1. 7 CFR 51.3200 - Samples for grade and size determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Samples for grade and size determination. 51.3200... Grade and Size Determination § 51.3200 Samples for grade and size determination. Individual samples.... When individual packages contain 20 pounds or more and the onions are packed for Large or Jumbo size...

  2. 40 CFR 761.243 - Standard wipe sample method and size.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Standard wipe sample method and size... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural...

  3. Calculating and Reporting Effect Sizes to Facilitate Cumulative Science: A Practical Primer for t-tests and ANOVAs

    Directory of Open Access Journals (Sweden)

    Daniel Lakens

    2013-11-01

    Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance to communicate the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or for examining effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
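
    As a small companion to this primer-style record, the sketch below computes two of the effect sizes it discusses: Cohen's d for a between-subjects comparison (pooled SD) and Cohen's d_z for a within-subjects comparison (SD of the differences). The simulated data are only for illustration, and the pairing in the d_z example is artificial.

      import numpy as np

      def cohens_d_between(x, y):
          """Cohen's d for two independent groups, using the pooled standard deviation."""
          nx, ny = len(x), len(y)
          sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
          return (np.mean(x) - np.mean(y)) / sp

      def cohens_dz_within(x, y):
          """Cohen's d_z for paired measurements: mean difference over SD of the differences."""
          diff = np.asarray(x) - np.asarray(y)
          return diff.mean() / diff.std(ddof=1)

      rng = np.random.default_rng(5)
      a = rng.normal(10.0, 2.0, 40)
      b = rng.normal(11.0, 2.0, 40)
      print(round(cohens_d_between(a, b), 2), round(cohens_dz_within(a, b), 2))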

  4. Practical guidelines for assessing power and false discovery rate for a fixed sample size in microarray experiments

    OpenAIRE

    Tong, Tiejun; Zhao, Hongyu

    2008-01-01

    One major goal in microarray studies is to identify genes having different expression levels across different classes/conditions. In order to achieve this goal, a study needs to have an adequate sample size to ensure the desired power. Due to the importance of this topic, a number of approaches to sample size calculation have been developed. However, due to the cost and/or experimental difficulties in obtaining sufficient biological materials, it might be difficult to attain the required samp...

  5. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten;

    2009-01-01

    /CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of...... metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced...

  6. 7 CFR 51.690 - Sample for grade or size determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.690 Section..., California, and Arizona) Sample for Grade Or Size Determination § 51.690 Sample for grade or size determination. Each sample shall consist of 50 oranges. When individual packages contain at least 50...

  7. 7 CFR 51.1406 - Sample for grade or size determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.1406..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans....

  8. 7 CFR 51.629 - Sample for grade or size determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.629 Section..., California, and Arizona) Sample for Grade Or Size Determination § 51.629 Sample for grade or size determination. Each sample shall consist of 33 grapefruit. When individual packages contain at least...

  9. 7 CFR 51.1548 - Samples for grade and size determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Samples for grade and size determination. 51.1548..., AND STANDARDS) United States Standards for Grades of Potatoes 1 Samples for Grade and Size Determination § 51.1548 Samples for grade and size determination. Individual samples shall consist of at...

  10. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
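
    The baseline calculation behind this record is the textbook normal-approximation sample size per group for a two-sample t-test, with the pilot variance plugged in as if it were known; the paper's point is precisely that this substitution affects the achieved power. A sketch with illustrative numbers:

      import math
      from scipy.stats import norm

      def n_per_group(pilot_sd, delta, alpha=0.05, power=0.80):
          """Normal-approximation sample size per group for a two-sample t-test,
          treating the pilot SD as if it were the known population SD."""
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return math.ceil(2 * (pilot_sd * (z_a + z_b) / delta) ** 2)

      print(n_per_group(pilot_sd=10.0, delta=5.0))   # -> 63 per group (illustrative numbers)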

  11. Alpha spectrometry for particle size determination of mineral sands dust samples

    International Nuclear Information System (INIS)

    A method is proposed for assessing the size distribution of the radioactive particles directly from the alpha spectrum of a dust sample. The residual range distribution of alpha particles emerging from a sphere containing a monoenergetic alpha emitter is simply a quadratic function of the diameter of the sphere. The residual range distribution from a typical dust particle closely approximates that of a sphere of the same mass. For mixtures of various size particles of similar density the (multiparticle) residual range distribution can thus readily be calculated for each of the alpha emitters contained in the particles. Measurement of the composite residual range distribution can be made in a vacuum alpha spectrometer provided the dust sample has no more than a monolayer of particles. The measured energy distribution is particularly sensitive to upper particle size distributions in the diameter region of 4 μm to 20 μm for particles of 5 g/cm3 density, i.e. 2 to 10 mg/cm2. For dust particles containing 212Po or known ratios of alpha emitters, a measured alpha spectrum can be unraveled to the underlying particle size distribution. Uncertainty in the size distribution has been listed as deserving research priority in the overall radiation protection program of the mineral sands industry. The proposed method has the potential of reducing this uncertainty, thus permitting more effective radiation protection control. 2 refs., 1 tab., 1 fig.

  12. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x = 224) for radiotracking data and 16-130 km2 (x = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy

  13. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can used the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) were also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a

  14. 40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Calculations II... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. II Appendix II to Part 600—Sample Fuel Economy Calculations (a) This sample fuel economy calculation is applicable...

  15. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the...

  16. Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
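
    For readers unfamiliar with the approach, the functional in question is usually written as follows in the variationally enhanced sampling literature (reconstructed here; the notation may differ in detail from the paper). Here beta is the inverse temperature, F(s) the free energy over the collective variables s, and p(s) a chosen normalized target distribution:

      \Omega[V] = \frac{1}{\beta}\,\ln \frac{\int \mathrm{d}s\; e^{-\beta\,[\,F(s)+V(s)\,]}}{\int \mathrm{d}s\; e^{-\beta F(s)}} \;+\; \int \mathrm{d}s\; p(s)\,V(s)

    The functional is convex, and at its minimum the bias reproduces the free energy surface up to a constant,

      V_{\min}(s) = -F(s) - \frac{1}{\beta}\,\ln p(s) + \mathrm{const},

    which is the simple relation between bias and free energy referred to in the abstract.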

  17. A Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Parrinello, Michele

    2015-03-01

    The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.

  18. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    Science.gov (United States)

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
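
    Although the abstract is truncated, the Monte Carlo logic it refers to is straightforward: posit the regression model and a plausible effect size, simulate many data sets at each candidate sample size, and take the smallest size whose empirical power reaches the target. The article works in R; the sketch below shows the same idea in Python, under the assumption of a simple linear regression with a standardized slope of 0.3 (both assumptions are illustrative, not taken from the article).

      # Monte Carlo sample-size determination for a simple regression slope.
      # Assumed model: y = 0.3*x + e, e ~ N(0, 1); alpha = 0.05.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def empirical_power(n, slope=0.3, n_sim=2000, alpha=0.05):
          hits = 0
          for _ in range(n_sim):
              x = rng.normal(size=n)
              y = slope * x + rng.normal(size=n)
              hits += stats.linregress(x, y).pvalue < alpha
          return hits / n_sim

      # Scan candidate sample sizes and keep the smallest one reaching ~0.80 power.
      for n in (40, 60, 80, 100, 120):
          print(n, round(empirical_power(n), 3))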

  19. A contemporary decennial global sample of changing agricultural field sizes

    Science.gov (United States)

    White, E.; Roy, D. P.

    2011-12-01

    In the last several hundred years agriculture has caused significant human-induced Land Cover Land Use Change (LCLUC), with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation in buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected, guided by a global map of agricultural yield and a literature review, to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries; the temporal changes in field size were then quantified and their causes discussed.

  20. Calculation method for particle mean diameter and particle size distribution function under dependent model algorithm

    Institute of Scientific and Technical Information of China (English)

    Hong Tang; Xiaogang Sun; Guibin Yuan

    2007-01-01

    In the total light scattering particle sizing technique, the relationship among the Sauter mean diameter D32, the mean extinction efficiency Q, and the particle size distribution function is studied in order to invert the mean diameter and particle size distribution simply. We propose a method which utilizes the ratio of mean extinction efficiencies at only two selected wavelengths to solve for D32 and then to invert the particle size distribution associated with the mean extinction efficiency and D32. Numerical simulation results show that the particle size distribution is inverted accurately with this method, and the number of wavelengths used is reduced to the greatest extent in the measurement range. The calculation method has the advantages of simplicity and rapidness.
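
    For reference, the Sauter mean diameter used here is the ratio of the third to the second moment of the size distribution n(D) (standard definition; the paper's own notation may differ):

      D_{32} = \frac{\int D^{3}\, n(D)\, \mathrm{d}D}{\int D^{2}\, n(D)\, \mathrm{d}D}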

  1. Uncertainty in Power Law Analysis: Influences of Sample Size, Measurement Error, and Analysis Methods

    Science.gov (United States)

    Hui, D.; Luo, Y.; Jackson, R. B.

    2005-12-01

    A power function, Y = Y0*M^β, can be used to describe the relationship of physiological variables with body size over a wide range of scales, typically many orders of magnitude. One of the key issues in the renewed power law debate is whether the allometric scaling exponent β equals 3/4 or 2/3. The analysis can be markedly affected by sample size, measurement error, and analysis methods, but these effects have not been explored systematically. We investigated the influences of these three factors based on a data set of 626 pairs of basal metabolic rate and mass in mammals with a calculated β = 0.711. The influence of sampling error was tested by re-sampling with different sample sizes using a Monte Carlo approach. Results showed that the estimated parameter b varied considerably from sample to sample. For example, when the sample size was n = 63, b varied from 0.582 to 0.776. Even though the original data set did not support either β = 3/4 or β = 2/3, we found that 39.0% of the samples supported β = 2/3 and 35.4% supported β = 3/4. The influence of measurement error on parameter estimation was also tested using Bayesian theory. Virtual data sets were created using the mass values in the above-mentioned data set, with given parameters α and β (β = 2/3 or β = 3/4) and a specified measurement error in basal metabolic rate and/or mass. Results showed that as measurement error increased, more estimates of b were found to be significantly different from the parameter β. When measurement error (i.e., standard deviation) was 20% and 40% of the measured mass and basal metabolic rate, 15.4% and 14.6% of the virtual data sets were found to be significantly different from the parameters β = 3/4 and β = 2/3, respectively. The influence of different analysis methods on parameter estimation was also demonstrated using the original data set, and the pros and cons of these methods were further discussed. We urge caution in interpreting power law analyses, especially from a small data sample, and
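
    The resampling experiment described above is easy to reproduce in outline: fit the exponent b by ordinary least squares on log-transformed data, then repeat the fit on random subsamples to see how much b moves. The sketch below uses synthetic data with a true exponent of 0.711; it is not the mammalian metabolic-rate data set, and the noise level is an assumption.

      # Sketch of the sampling-error experiment for the power law Y = Y0 * M**beta.
      # Synthetic data with beta = 0.711; OLS fit on the log-log scale, then subsampling.
      import numpy as np

      rng = np.random.default_rng(7)
      n_full = 626
      mass = 10 ** rng.uniform(0, 5, n_full)                     # body mass, arbitrary units
      bmr = 2.0 * mass ** 0.711 * np.exp(rng.normal(0, 0.3, n_full))

      def fit_beta(m, y):
          slope, intercept = np.polyfit(np.log10(m), np.log10(y), 1)
          return slope

      print("full-sample beta:", round(fit_beta(mass, bmr), 3))

      n_sub = 63
      betas = [fit_beta(*(lambda idx: (mass[idx], bmr[idx]))(
                   rng.choice(n_full, size=n_sub, replace=False)))
               for _ in range(1000)]
      print("subsample beta range:", round(min(betas), 3), "to", round(max(betas), 3))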

  2. PREDICTION OF THE GRAIN SIZE OF SUSPENDED SEDIMENT: IMPLICATIONS FOR CALCULATING SUSPENDED SEDIMENT CONCENTRATIONS USING SINGLE FREQUENCY ACOUSTIC BACKSCATTER

    Institute of Scientific and Technical Information of China (English)

    R. A. KUHNLE; D. G. WREN; J. P. CHAMBERS

    2007-01-01

    Collection of samples of suspended sediment transported by streams and rivers is difficult and expensive. Emerging technologies, such as acoustic backscatter, have promise to decrease costs and allow more thorough sampling of transported sediment in streams and rivers. Acoustic backscatter information may be used to calculate the concentration of suspended sand-sized sediment given the vertical distribution of sediment size. Therefore, procedures to accurately compute suspended sediment size distributions from easily obtained river data are badly needed. In this study, techniques to predict the size of suspended sand are examined and their application to measuring concentrations using acoustic backscatter data are explored. Three methods to predict the size of sediment in suspension using bed sediment, flow criteria, and a modified form of the Rouse equation yielded mean suspended sediment sizes that differed from means of measured data by 7 to 50 percent. When one sample near the bed was used as a reference, mean error was reduced to about 5 percent. These errors in size determination translate into errors of 7 to 156 percent in the prediction of sediment concentration using backscatter data from 1 MHz single frequency acoustics.

  3. Space resection model calculation based on Random Sample Consensus algorithm

    Science.gov (United States)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has long been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a direct linear transformation (DLT) model, which effectively avoids the difficulty of determining initial values that arises when the collinearity equations are used directly. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.

  4. X-Ray Dose and Spot Size Calculations for the DARHT-II Distributed Target

    International Nuclear Information System (INIS)

    The baseline DARHT-II converter target consists of foamed tantalum within a solid-density cylindrical tamper. The baseline design has been modified by D. Ho to further optimize the integrated line density of material in the course of multiple beam pulses. LASNEX simulations of the hydrodynamic expansion of the target have been performed by D. Ho (documented elsewhere). The resulting density profiles have been used as inputs in the MCNP radiation transport code to calculate the X-ray dose and spot size assuming an incoming Gaussian electron beam with σ = 0.65 mm, and a PIC-generated beam taking into account the "swept" spot emerging from the DARHT-II kicker system. A prerequisite to these calculations is the absorption spectrum of air. In order to obtain this, a separate series of MCNP runs was performed for a set of monoenergetic photon sources, tallying the energy deposited in a volume of air. The forced-collision feature was used to improve the statistics, since the photon mean free path in air is extremely long at the energies of interest. A sample input file is given below. The resulting data for the MCNP DE and DF cards are shown in the beam-pulse input files, one of which is listed below. Note that the DE and DF cards are entered in column format for easy reading.

  5. Limitations of mRNA amplification from small-size cell samples

    Directory of Open Access Journals (Sweden)

    Myklebost Ola

    2005-10-01

    Full Text Available Abstract Background Global mRNA amplification has become a widely used approach to obtain gene expression profiles from limited material. An important concern is the reliable reflection of the starting material in the results obtained. This is especially important with extremely low quantities of input RNA where stochastic effects due to template dilution may be present. This aspect remains under-documented in the literature, as quantitative measures of data reliability are most often lacking. To address this issue, we examined the sensitivity levels of each transcript in 3 different cell sample sizes. ANOVA analysis was used to estimate the overall effects of reduced input RNA in our experimental design. In order to estimate the validity of decreasing sample sizes, we examined the sensitivity levels of each transcript by applying a novel model-based method, TransCount. Results From expression data, TransCount provided estimates of absolute transcript concentrations in each examined sample. The results from TransCount were used to calculate the Pearson correlation coefficient between transcript concentrations for different sample sizes. The correlations were clearly transcript copy number dependent. A critical level was observed where stochastic fluctuations became significant. The analysis allowed us to pinpoint the gene specific number of transcript templates that defined the limit of reliability with respect to number of cells from that particular source. In the sample amplifying from 1000 cells, transcripts expressed with at least 121 transcripts/cell were statistically reliable and for 250 cells, the limit was 1806 transcripts/cell. Above these thresholds, correlation between our data sets was at acceptable values for reliable interpretation. Conclusion These results imply that the reliability of any amplification experiment must be validated empirically to justify that any gene exists in sufficient quantity in the input material. This

  6. Non-uniform sampled scalar diffraction calculation using non-uniform fast Fourier transform

    OpenAIRE

    Shimobaba, Tomoyoshi; Kakue, Takashi; Oikawa, Minoru; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi

    2013-01-01

    Scalar diffraction calculations such as the angular spectrum method (ASM) and Fresnel diffraction, are widely used in the research fields of optics, X-rays, electron beams, and ultrasonics. It is possible to accelerate the calculation using fast Fourier transform (FFT); unfortunately, acceleration of the calculation of non-uniform sampled planes is limited due to the property of the FFT that imposes uniform sampling. In addition, it gives rise to wasteful sampling data if we calculate a plane...

  7. Two Test Items to Explore High School Students' Beliefs of Sample Size When Sampling from Large Populations

    Science.gov (United States)

    Bill, Anthony; Henderson, Sally; Penman, John

    2010-01-01

    Two test items that examined high school students' beliefs of sample size for large populations using the context of opinion polls conducted prior to national and state elections were developed. A trial of the two items with 21 male and 33 female Year 9 students examined their naive understanding of sample size: over half of students chose a…

  8. Sample size and scene identification (cloud) - Effect on albedo

    Science.gov (United States)

    Vemury, S. K.; Stowe, L.; Jacobowitz, H.

    1984-01-01

    Scan channels on the Nimbus 7 Earth Radiation Budget instrument sample radiances from underlying earth scenes at a number of incident and scattering angles. A sampling excess toward measurements at large satellite zenith angles is noted. Also, at large satellite zenith angles, the present scheme for scene selection causes many observations to be classified as cloud, resulting in higher flux averages. Thus the combined effect of sampling bias and scene identification errors is to overestimate the computed albedo. It is shown, using a process of successive thresholding, that observations with satellite zenith angles greater than 50-60 deg lead to incorrect cloud identification. Elimination of these observations has reduced the albedo from 32.2 to 28.8 percent. This reduction is very nearly the same size as, and in the right direction to account for, the discrepancy between the albedos derived from the scanner and the wide-field-of-view channels.

  9. Utility of Inferential Norming with Smaller Sample Sizes

    Science.gov (United States)

    Zhu, Jianjun; Chen, Hsin-Yi

    2011-01-01

    We examined the utility of inferential norming using small samples drawn from the larger "Wechsler Intelligence Scales for Children-Fourth Edition" (WISC-IV) standardization data set. The quality of the norms was estimated with multiple indexes such as polynomial curve fit, percentage of cases receiving the same score, average absolute score…

  10. 10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle...

  11. Sample size reduction in groundwater surveys via sparse data assimilation

    KAUST Repository

    Hussain, Z.

    2013-04-01

    In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements needed to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.

  12. Enhanced Z-LDA for Small Sample Size Training in Brain-Computer Interface Systems

    Directory of Open Access Journals (Sweden)

    Dongrui Gao

    2015-01-01

    Full Text Available Background. Usually the training set of an online brain-computer interface (BCI) experiment is small. Such a small training set lacks enough information to train the classifier thoroughly, resulting in poor classification performance during online testing. Methods. In this paper, on the basis of Z-LDA, we further calculate the classification probability of Z-LDA and then use it to select reliable samples from the testing set to enlarge the training set, aiming to mine additional information from the testing set to adjust the biased classification boundary obtained from the small training set. The proposed approach is an extension of the previous Z-LDA and is named enhanced Z-LDA (EZ-LDA). Results. We evaluated the classification performance of LDA, Z-LDA, and EZ-LDA on simulated and real BCI datasets with different sizes of training samples, and the classification results showed that EZ-LDA achieved the best classification performance. Conclusions. EZ-LDA is promising for dealing with the small sample size training problem usually existing in online BCI systems.

  13. Calculational study on irradiation of americium fuel samples in the Petten High Flux Reactor

    International Nuclear Information System (INIS)

    A calculational study on the irradiation of americium samples in the Petten High Flux Reactor (HFR) has been performed. This has been done in the framework of the international EFTTRA cooperation. For several reasons the americium in the samples is supposed to be diluted with a neutron inert matrix, but the main reason is to limit the power density in the sample. The low americium nuclide density in the sample (10 weight % americium oxide) leads to a low radial dependence of the burnup. Three different calculational methods have been used to calculate the burnup in the americium sample: Two-dimensional calculations with WIMS-6, one-dimensional calculations with WIMS-6, and one-dimensional calculations with SCALE. The results of the different methods agree fairly well. It is concluded that the radiotoxicity of the americium sample can be reduced upon irradiation in our scenario. This is especially the case for the radiotoxicity between 100 and 1000 years after storage. (orig.)

  14. Sample Size in Differential Item Functioning: An Application of Hierarchical Linear Modeling

    Science.gov (United States)

    Acar, Tulin

    2011-01-01

    The purpose of this study is to examine the number of DIF items detected by HGLM at different sample sizes. Eight data files of different sizes were composed. The population of the study comprises the 798,307 students who took the 2006 OKS Examination; 10,727 of these students were chosen by random sampling as the sample of the study. Turkish,…

  15. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    Science.gov (United States)

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  16. Structured estimation - Sample size reduction for adaptive pattern classification

    Science.gov (United States)

    Morgera, S.; Cooper, D. B.

    1977-01-01

    The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.

  17. Efficiency of whole-body counter for various body size calculated by MCNP5 software

    International Nuclear Information System (INIS)

    The efficiency of a whole-body counter for 137Cs and 40K was calculated using the MCNP5 code. ORNL phantoms of the human body of different sizes were modelled in a sitting position in front of a detector. The aim was to investigate the dependence of the efficiency on body size (age) and on the detector position with respect to the body, and to estimate the accuracy of real measurements. The calculation work presented here relates to the NaI detector available at the Serbian whole-body counter facility at the Vinca Institute. (authors)

  18. Determination of the size of a radiation source by the method of calculation of diffraction patterns

    Science.gov (United States)

    Tilikin, I. N.; Shelkovenko, T. A.; Pikuz, S. A.; Hammer, D. A.

    2013-07-01

    In traditional X-ray radiography, which has been used for various purposes since the discovery of X-ray radiation, the shadow image of an object under study is constructed based on the difference in the absorption of the X-ray radiation by different parts of the object. The main method that ensures a high spatial resolution is the method of point projection X-ray radiography, i.e., radiography from a point and bright radiation source. For projection radiography, the small size of the source is the most important characteristic of the source, which mainly determines the spatial resolution of the method. In this work, as a point source of soft X-ray radiation for radiography with a high spatial and temporal resolution, radiation from a hot spot of X-pinches is used. The size of the radiation source in different setups and configurations can be different. For four different high-current generators, we have calculated the sizes of sources of soft X-ray radiation from X-ray patterns of corresponding objects using Fresnel-Kirchhoff integrals. Our calculations show that the size of the source is in the range 0.7-2.8 μm. The method of the determination of the size of a radiation source from calculations of Fresnel-Kirchhoff integrals makes it possible to determine the size with an accuracy that exceeds the diffraction limit, which frequently restricts the resolution of standard methods.

  19. Distance software: design and analysis of distance sampling surveys for estimating population size

    Science.gov (United States)

    Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon RB; Marques, Tiago A; Burnham, Kenneth P

    2010-01-01

    1.Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2.We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3.Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4.A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5.All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6.Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7.Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the

  20. Theory of Finite Size Effects for Electronic Quantum Monte Carlo Calculations of Liquids and Solids

    CERN Document Server

    Holzmann, Markus; Morales, Miguel A; Tubmann, Norm M; Ceperley, David M; Pierleoni, Carlo

    2016-01-01

    Concentrating on zero temperature Quantum Monte Carlo calculations of electronic systems, we give a general description of the theory of finite size extrapolations of energies to the thermodynamic limit based on one- and two-body correlation functions. We introduce new effective procedures, such as splitting the potential and wavefunction into long- and short-range functions to simplify the method, and we discuss how to treat backflow wavefunctions. We then explicitly test the accuracy of our method for correcting finite size errors on example hydrogen and helium many-body systems and show that the finite size bias can be drastically reduced even for small systems.

  1. Systematic study of finite-size effects in quantum Monte Carlo calculations of real metallic systems

    Energy Technology Data Exchange (ETDEWEB)

    Azadi, Sam, E-mail: s.azadi@imperial.ac.uk; Foulkes, W. M. C. [Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-09-14

    We present a systematic and comprehensive study of finite-size effects in diffusion quantum Monte Carlo calculations of metals. Several previously introduced schemes for correcting finite-size errors are compared for accuracy and efficiency, and practical improvements are introduced. In particular, we test a simple but efficient method of finite-size correction based on an accurate combination of twist averaging and density functional theory. Our diffusion quantum Monte Carlo results for lithium and aluminum, as examples of metallic systems, demonstrate excellent agreement between all of the approaches considered.

  2. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    OpenAIRE

    Malhotra Rajeev; Indrayan A

    2010-01-01

    Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size ...
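
    The calculation underlying such a nomogram is typically the normal-approximation formula (often attributed to Buderer): for an anticipated sensitivity Se, absolute precision d and disease prevalence P, the number of diseased subjects needed is z^2 Se(1-Se)/d^2, which is then inflated by the prevalence to give the total number to screen; the specificity case divides by (1-P) instead. The sketch below uses this standard formula with illustrative values; it is not taken from the article's nomogram.

      # Sample size for estimating sensitivity/specificity to a given absolute precision.
      # Standard normal-approximation formula; illustrative numbers only.
      from scipy.stats import norm

      def n_for_sensitivity(se, d, prevalence, conf=0.95):
          z = norm.ppf(1 - (1 - conf) / 2)
          n_cases = z**2 * se * (1 - se) / d**2   # diseased subjects needed
          return n_cases / prevalence             # total subjects to screen

      def n_for_specificity(sp, d, prevalence, conf=0.95):
          z = norm.ppf(1 - (1 - conf) / 2)
          n_controls = z**2 * sp * (1 - sp) / d**2
          return n_controls / (1 - prevalence)

      print(round(n_for_sensitivity(se=0.90, d=0.05, prevalence=0.20)))  # about 691
      print(round(n_for_specificity(sp=0.85, d=0.05, prevalence=0.20)))  # about 245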

  3. Evaluation of design flood estimates with respect to sample size

    Science.gov (United States)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a two-parameter distribution is recommended, and for more than 50 years of data, a three-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson II, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.

  4. Sample size considerations for one-to-one animal transmission studies of the influenza A viruses.

    Directory of Open Access Journals (Sweden)

    Hiroshi Nishiura

    Full Text Available BACKGROUND: Animal transmission studies can provide important insights into host, viral and environmental factors affecting transmission of viruses including influenza A. The basic unit of analysis in typical animal transmission experiments is the presence or absence of transmission from an infectious animal to a susceptible animal. In studies comparing two groups (e.g. two host genetic variants, two virus strains, or two arrangements of animal cages, differences between groups are evaluated by comparing the proportion of pairs with successful transmission in each group. The present study aimed to discuss the significance and power to estimate transmissibility and identify differences in the transmissibility based on one-to-one trials. The analyses are illustrated on transmission studies of influenza A viruses in the ferret model. METHODOLOGY/PRINCIPAL FINDINGS: Employing the stochastic general epidemic model, the basic reproduction number, R₀, is derived from the final state of an epidemic and is related to the probability of successful transmission during each one-to-one trial. In studies to estimate transmissibility, we show that 3 pairs of infectious/susceptible animals cannot demonstrate a significantly higher transmissibility than R₀= 1, even if infection occurs in all three pairs. In comparisons between two groups, at least 4 pairs of infectious/susceptible animals are required in each group to ensure high power to identify significant differences in transmissibility between the groups. CONCLUSIONS: These results inform the appropriate sample sizes for animal transmission experiments, while relating the observed proportion of infected pairs to R₀, an interpretable epidemiological measure of transmissibility. In addition to the hypothesis testing results, the wide confidence intervals of R₀ with small sample sizes also imply that the objective demonstration of difference or similarity should rest on firmly calculated sample size.
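
    The claim about 3 pairs can be checked with a short exact calculation. Under a stochastic model with an exponentially distributed infectious period, a one-to-one trial ends in transmission with probability roughly p = R0/(1 + R0); this particular relation is an assumption of the sketch (the paper derives the link from the final-size distribution of the stochastic general epidemic), and it puts the null R0 = 1 at p0 = 1/2. A minimal sketch of the resulting exact test:

      # Exact one-sided test of H0: R0 = 1 (assumed to correspond to p0 = 1/2 per trial)
      # against higher transmissibility, when k of n pairs show transmission.
      from scipy.stats import binom

      def p_value(k, n, p0=0.5):
          """P(X >= k) under Binomial(n, p0)."""
          return binom.sf(k - 1, n, p0)

      print(p_value(3, 3))  # 0.125   -> 3/3 transmissions cannot reject R0 = 1 at the 5% level
      print(p_value(5, 5))  # 0.03125 -> 5/5 transmissions can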

  5. Design and sample size considerations for simultaneous global drug development program.

    Science.gov (United States)

    Huang, Qin; Chen, Gang; Yuan, Zhilong; Lan, K K Gordon

    2012-09-01

    Due to the potential impact of ethnic factors on clinical outcomes, the global registration of a new treatment is challenging. China and Japan often require local trials in addition to a multiregional clinical trial (MRCT) to support the efficacy and safety claim of the treatment. The impact of ethnic factors on the treatment effect has been intensively investigated and discussed from different perspectives. However, most current methods are focusing on the assessment of the consistency or similarity of the treatment effect between different ethnic groups in exploratory nature. In this article, we propose a new method for the design and sample size consideration for a simultaneous global drug development program (SGDDP) using weighted z-tests. In the proposed method, to test the efficacy of a new treatment for the targeted ethnic (TE) group, a weighted test that combines the information collected from both the TE group and the nontargeted ethnic (NTE) group is used. The influence of ethnic factors and local medical practice on the treatment effect is accounted for by down-weighting the information collected from NTE group in the combined test statistic. This design controls rigorously the overall false positive rate for the program at a given level. The sample sizes needed for the TE group in an SGDDP for three most commonly used efficacy endpoints, continuous, binary, and time-to-event, are then calculated. PMID:22946950
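
    The combination step behind such a design is a weighted z-test. The sketch below shows only that step with illustrative weights; the choice of how far to down-weight the non-targeted ethnic group, and the paper's actual sample-size formulas for the three endpoint types, are not reproduced here.

      # Weighted z-test combining evidence from the targeted (TE) and
      # non-targeted (NTE) ethnic groups; illustrative weights only.
      import math
      from scipy.stats import norm

      def combined_z(z_te, z_nte, w_te, w_nte):
          """Weighted combination; distributed N(0,1) under the global null."""
          return (w_te * z_te + w_nte * z_nte) / math.sqrt(w_te**2 + w_nte**2)

      z = combined_z(z_te=1.50, z_nte=2.10, w_te=1.0, w_nte=0.5)  # NTE down-weighted
      print(round(z, 3), round(norm.sf(z), 4))  # combined statistic and one-sided p-value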

  6. Impact of Sample Size on the Performance of Multiple-Model Pharmacokinetic Simulations▿

    OpenAIRE

    Tam, Vincent H.; Kabbara, Samer; Yeh, Rosa F.; Leary, Robert H.

    2006-01-01

    Monte Carlo simulations are increasingly used to predict pharmacokinetic variability of antimicrobials in a population. We investigated the sample size necessary to provide robust pharmacokinetic predictions. To obtain reasonably robust predictions, a nonparametric model derived from a sample population size of ≥50 appears to be necessary as the input information.

  7. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    Science.gov (United States)

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  8. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    Science.gov (United States)

    Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…

  9. Using the Student's "t"-Test with Extremely Small Sample Sizes

    Science.gov (United States)

    de Winter, J. C. F.

    2013-01-01

    Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…

  10. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    Science.gov (United States)

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  11. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    Science.gov (United States)

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
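
    The two weighting schemes being compared can be written down in a few lines. Below is a minimal sketch with illustrative effect sizes and variances (not data from the article), using the DerSimonian-Laird estimate of the between-studies variance for the inverse-variance random-effects weights.

      # Random-effects meta-analysis: inverse-variance weights vs. sample-size weights.
      import numpy as np

      d = np.array([0.55, 0.05, 0.40, 0.10])      # effect sizes (illustrative)
      v = np.array([0.020, 0.050, 0.015, 0.030])  # sampling variances of d
      n = np.array([200, 80, 260, 130])           # total sample sizes

      # DerSimonian-Laird estimate of the between-studies variance tau^2
      w_fixed = 1.0 / v
      q = np.sum(w_fixed * (d - np.average(d, weights=w_fixed)) ** 2)
      c = w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()
      tau2 = max(0.0, (q - (len(d) - 1)) / c)

      w_iv = 1.0 / (v + tau2)                     # inverse-variance (random-effects) weights
      mean_iv = np.average(d, weights=w_iv)
      mean_n = np.average(d, weights=n)           # simple sample-size weighting
      print(round(tau2, 4), round(mean_iv, 3), round(mean_n, 3))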

  12. Crack lengths calculation by the unloading compliance technique for Charpy size specimens

    International Nuclear Information System (INIS)

    The problems with crack length determination by the unloading compliance method are well known for Charpy-size specimens: the final crack lengths calculated for bent specimens do not fulfil the ASTM 1820 accuracy requirements. Therefore some investigations have been performed to resolve this problem. In those studies it was assumed that the measured compliance should be corrected for various factors, but satisfactory results were not obtained. In the work presented here the problem was attacked from the other side: the measured specimen compliance was taken as the correct value, and it was the calculation procedure that had to be adjusted. The investigation was carried out on the basis of experimentally obtained compliances of bent specimens and optically measured crack lengths. Finally, a calculation procedure enabling accurate crack length calculation up to 5 mm of plastic deflection was developed. Applying the new procedure to the 238 measured crack lengths investigated, more than 80% of the values fulfilled the ASTM 1820 accuracy requirements, while the presently used procedure provided only about 30% valid results. The newly proposed procedure can also prospectively be used, in modified form, for specimens of sizes other than the Charpy size. (orig.)

  13. Thermomagnetic behavior of magnetic susceptibility – heating rate and sample size effects

    Directory of Open Access Journals (Sweden)

    Diana eJordanova

    2016-01-01

    Full Text Available Thermomagnetic analysis of magnetic susceptibility k(T) was carried out for a number of natural powder materials from soils, baked clay and anthropogenic dust samples using the fast (11 °C/min) and slow (6.5 °C/min) heating rates available in the furnace of the Kappabridge KLY2 (Agico). Based on additional data for the mineralogy, grain size and magnetic properties of the studied samples, the behaviour of the k(T) cycles and the observed differences between the curves for the fast and slow heating rates are interpreted in terms of mineralogical transformations and Curie temperatures (Tc). The effect of different sample size is also explored, using large and small volumes of powder material. It is found that soil samples show enhanced information on mineralogical transformations and the appearance of new strongly magnetic phases when using the fast heating rate and a large sample size. This approach moves the transformation to higher temperature, but enhances the amplitude of the signal of the newly created phase. A large sample size gives prevalence to the local micro-environment created by the gases evolved during the transformations. The example from an archaeological brick reveals the effect of different sample sizes on the Curie temperatures observed on heating and cooling curves when the magnetic carrier is substituted magnetite (Mn0.2Fe2.70O4). A large sample size leads to bigger differences in Tc on heating and cooling, while a small sample size results in similar Tcs for both heating rates.

  14. Size-resolved culturable airborne bacteria sampled in rice field, sanitary landfill, and waste incineration sites.

    Science.gov (United States)

    Heo, Yongju; Park, Jiyeon; Lim, Sung-Il; Hur, Hor-Gil; Kim, Daesung; Park, Kihong

    2010-08-01

    Size-resolved bacterial concentrations in atmospheric aerosols sampled with a six-stage viable impactor at rice field, sanitary landfill, and waste incinerator sites were determined. Culture-based and polymerase chain reaction (PCR) methods were used to identify the airborne bacteria. The culturable bacteria concentration in total suspended particles (TSP) was found to be highest (848 colony forming units (CFU)/m3) at the sanitary landfill sampling site, while the rice field sampling site had the lowest (125 CFU/m3). The closed landfill would be the main source of the observed bacteria concentration at the sanitary landfill. The rice field sampling site was fully covered by rice grain under wet conditions before harvest and made no significant contribution to the airborne bacteria concentration; this might occur because dry conditions favor suspension of soil particles and this area had limited personnel and vehicle flow. The respirable fraction, calculated from particles smaller than 3.3 μm, was highest (26%) at the sanitary landfill sampling site, followed by the waste incinerator (19%) and rice field (10%) sites, a lower level of respirable fraction compared with previous literature values. We identified 58 species in 23 genera of culturable bacteria, and Microbacterium, Staphylococcus, and Micrococcus were the most abundant genera at the sanitary landfill, waste incinerator, and rice field sites, respectively. An antibiotic resistance test for the above bacteria (Micrococcus sp., Microbacterium sp., and Staphylococcus sp.) showed that Staphylococcus sp. had the strongest resistance to both antibiotics (25.0% resistance to 32 μg/ml chloramphenicol and 62.5% resistance to 4 μg/ml gentamicin). PMID:20623053

  15. Sample size for the estimate of consumer price subindices with alternative statistical designs

    OpenAIRE

    Carlo De Gregorio

    2012-01-01

    This paper analyses the sample sizes needed to estimate Laspeyres consumer price subindices under a combination of alternative sample designs, aggregation methods and temporal targets. In a simplified consumer market, the definition of the statistical target has been founded on the methodological framework adopted for the Harmonized Index of Consumer Prices. For a given precision level, sample size needs have been simulated under simple and stratified random designs with three distinct approa...

  16. Novel sample introduction system to reduce ICP-OES sample size for plutonium metal trace impurity determination

    International Nuclear Information System (INIS)

    A new methodology for trace elemental analysis in plutonium metal samples was developed by interfacing the novel micro-FAST sample introduction system with an ICP-OES instrument. This integrated system, especially when coupled with a low flow rate nebulization technique, reduced the sample volume requirement significantly. Improvements to instrument sensitivity and measurement precision, as well as long term stability, were also achieved by this modified ICP-OES system. The sample size reduction, together with other instrument performance merits, is of great significance, especially to nuclear material analysis. (author)

  17. Full-scale calculation of the coupling losses in ITER size cable-in-conduit conductors

    Science.gov (United States)

    van Lanen, E. P. A.; van Nugteren, J.; Nijhuis, A.

    2012-02-01

    With the numerical cable model JackPot it is possible to calculate the interstrand coupling losses, generated by a time-changing background and self-field, between all strands in a cable-in-conduit conductor (CICC). For this, the model uses a system of equations in which the mutual inductances between all strand segments are calculated in advance. The model works well for analysing sub-size CICC sections. However, the exponential relationship between the model size and the computation time makes it impractical to simulate full-size ITER CICC sections. For this reason, the multi-level fast multipole method (MLFMM) is implemented to control the computation load. For additional efficiency, it is written in code that runs on graphics processing units, thereby utilizing an efficient low-cost parallel computation technique. Good accuracy is obtained together with a considerably faster computation of the mutually induced voltages between all strands. This allows parametric studies on the coupling loss of long lengths of ITER-size CICCs with the purpose of optimizing the cable design and accurately computing the coupling loss for any applied magnetic field scenario.

  18. Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach

    Science.gov (United States)

    de la Torre, Jimmy; Hong, Yuan

    2010-01-01

    Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…

  19. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

    Directory of Open Access Journals (Sweden)

    Wei Lin Teoh

    Full Text Available Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shift, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on average while maintaining the same detection speed as the two former charts. An example involving added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.

  20. On the importance of sampling variance to investigations of temporal variation in animal population size

    Science.gov (United States)

    Link, W.A.; Nichols, J.D.

    1994-01-01

    Our purpose here is to emphasize the need to properly deal with sampling variance when studying population variability and to present a means of doing so. We present an estimator for temporal variance of population size for the general case in which there are both sampling variances and covariances associated with estimates of population size. We illustrate the estimation approach with a series of population size estimates for black-capped chickadees (Parus atricapillus) wintering in a Connecticut study area and with a series of population size estimates for breeding populations of ducks in southwestern Manitoba.
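
    In the simplest case of independent surveys (so that sampling covariances are zero), the logic of such an estimator reduces to subtracting the average sampling variance from the raw variance of the estimates. The sketch below shows that simplified moment correction with made-up numbers; the full estimator in the paper also accounts for covariances among the estimates and is not reproduced here.

      # Simplified moment estimator: temporal (process) variance of population size
      # ~= variance of the estimates - mean sampling variance (independent surveys assumed).
      import numpy as np

      n_hat = np.array([152., 141., 167., 158., 149., 173.])  # annual estimates (illustrative)
      var_hat = np.array([85., 90., 110., 95., 88., 120.])    # their sampling variances

      raw_var = np.var(n_hat, ddof=1)                 # total variance of the estimates
      process_var = max(0.0, raw_var - var_hat.mean())
      print(round(raw_var, 1), round(process_var, 1))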

  1. XAFSmass: a program for calculating the optimal mass of XAFS samples

    Science.gov (United States)

    Klementiev, K.; Chernikov, R.

    2016-05-01

    We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.

  2. Aerosol composition at Chacaltaya, Bolivia, as determined by size-fractionated sampling

    Science.gov (United States)

    Adams, F.; van Espen, P.; Maenhaut, W.

    Thirty-four cascade-impactor samples were collected between September 1977 and November 1978 at Chacaltaya, Bolivia. The concentrations of 25 elements were measured for the six impaction stages of each sample by means of energy-dispersive X-ray fluorescence and proton-induced X-ray emission analysis. The results indicated that most elements are predominantly associated with a unimodal coarse-particle soil-dust dispersion component. Chlorine and the alkali and alkaline earth elements also belong to this group. The anomalously enriched elements (S, Br and the heavy metals Cu, Zn, Ga, As, Se, Pb and Bi) showed a bimodal size distribution. Correlation coefficient calculations and principal component analysis indicated the presence in the submicrometer aerosol mode of an important component, containing S, K, Zn, As and Br, which may originate from biomass burning. For certain enriched elements (i.e., Zn and perhaps Cu), the coarse-particle enrichments observed may be the result of true crust-air fractionation during soil-dust dispersion.

  3. Gamma self-shielding correction factors calculation for aqueous bulk sample analysis by PGNAA technique.

    Science.gov (United States)

    Nasrabadi, M N; Mohammadi, A; Jalali, M

    2009-01-01

    In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required. PMID:19328700

  4. Gamma self-shielding correction factors calculation for aqueous bulk sample analysis by PGNAA technique

    Energy Technology Data Exchange (ETDEWEB)

    Nasrabadi, M.N. [Department of Nuclear Engineering, Faculty of Modern Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of)], E-mail: mnnasrabadi@ast.ui.ac.ir; Mohammadi, A. [Department of Physics, Payame Noor University (PNU), Kohandej, Isfahan (Iran, Islamic Republic of); Jalali, M. [Isfahan Nuclear Science and Technology Research Institute (NSTRT), Reactor and Accelerators Research and Development School, Atomic Energy Organization of Iran (Iran, Islamic Republic of)

    2009-07-15

    In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required.

  5. Mineralogical, optical, geochemical, and particle size properties of four sediment samples for optical physics research

    Science.gov (United States)

    Bice, K.; Clement, S. C.

    1981-01-01

    X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.

  6. Analysis of AC loss in superconducting power devices calculated from short sample data

    OpenAIRE

    Rabbers, J.J.; Haken, ten, Bennie; Kate, ten, F.J.W.

    2003-01-01

    A method to calculate the AC loss of superconducting power devices from the measured AC loss of a short sample is developed. In coils and cables the magnetic field varies spatially. The position dependent field vector is calculated assuming a homogeneous current distribution. From this field profile and the transport current, the local AC loss is calculated. Integration over the conductor length yields the AC loss of the device. The total AC loss of the device is split up in different compone...

  7. Air and smear sample calculational tool for Fluor Hanford Radiological control

    International Nuclear Information System (INIS)

    A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and for smear counting, as outlined in HNF-13536, Section 5.2.7, "Analyzing Air and Smear Samples". This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwritten and calculation errors by using an electronic form for documenting and calculating workplace air samples. Current expectations are that RCTs will perform an air sample and collect the filter, or perform a smear for surface contamination. RCTs will then survey the filter for gross alpha and beta/gamma radioactivity and, using the gross counts, apply either a hand-calculation method or a calculator to determine the activity on the filter. The electronic form will allow the RCT, with a few keystrokes, to document the individual's name, payroll, gross counts, and instrument identifiers, and to produce an error-free record. This productivity gain is realized through the enhanced ability to perform mathematical calculations electronically (reducing errors) while at the same time documenting the air sample.

  8. Analysis of AC loss in superconducting power devices calculated from short sample data

    NARCIS (Netherlands)

    Rabbers, J.J.; Haken, ten B.; Kate, ten H.H.J.

    2003-01-01

    A method to calculate the AC loss of superconducting power devices from the measured AC loss of a short sample is developed. In coils and cables the magnetic field varies spatially. The position dependent field vector is calculated assuming a homogeneous current distribution. From this field profile

  9. The validity of the transport approximation in critical-size and reactivity calculations

    International Nuclear Information System (INIS)

The validity of the transport approximation in critical-size and reactivity calculations. Elastically scattered neutrons are, in general, not distributed isotropically in the laboratory system, and a convenient way of taking this into account in neutron-transport calculations is to use the transport approximation. In this, the elastic cross-section is replaced by an elastic transport cross-section with an isotropic angular distribution. This leads to a considerable simplification in the neutron-transport calculation. In the present paper, the theoretical bases of the transport approximation in both one-group and many-group formalisms are given. The accuracy of the approximation is then studied in the multi-group case for a number of typical systems by means of the Sn method using the isotropic and anisotropic versions of the method, which exist as alternative options of the machine code SAINT written at Aldermaston for use on IBM-709/7090 machines. The dependence of the results of the anisotropic calculations on the number of moments used to represent the angular distributions is also examined. The results of the various calculations are discussed, and an indication is given of the types of system for which the transport approximation is adequate and of those for which it is inadequate. (author)

  10. Size constrained unequal probability sampling with a non-integer sum of inclusion probabilities

    OpenAIRE

    Grafström, Anton; Qualité, Lionel; Tillé, Yves; Matei, Alina

    2012-01-01

    More than 50 methods have been developed to draw unequal probability samples with fixed sample size. All these methods require the sum of the inclusion probabilities to be an integer number. There are cases, however, where the sum of desired inclusion probabilities is not an integer. Then, classical algorithms for drawing samples cannot be directly applied. We present two methods to overcome the problem of sample selection with unequal inclusion probabilities when their sum is not an integer ...

  11. Sample size estimation for correlations with pre-specified confidence interval

    Directory of Open Access Journals (Sweden)

    Murray Moinester

    2014-09-01

Full Text Available A common measure of association between two variables x and y is the bivariate Pearson correlation coefficient rho(x,y) that characterizes the strength and direction of any linear relationship between x and y. This article describes how to determine the optimal sample size for bivariate correlations, reviews available methods, and discusses their different ranges of applicability. A convenient equation is derived to help plan sample size for correlations by confidence interval analysis. In addition, a useful table for planning correlation studies is provided that gives sample sizes needed to achieve 95% confidence intervals (CIs) for correlation values ranging from 0.05 to 0.95 and for CI widths ranging from 0.1 to 0.9. Sample size requirements are considered for planning correlation studies.
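    One common way to plan such a sample size is to work through the Fisher z transformation, whose standard error depends only on n. The sketch below is a minimal illustration of that approach (not the article's own code or table): it searches for the smallest n whose 95% confidence interval for an expected correlation is no wider than a target width.

```python
import math

def ci_width(r, n):
    """Width of the approximate 95% CI for r based on Fisher's z, with SE = 1/sqrt(n - 3)."""
    z, zcrit = math.atanh(r), 1.959964
    half = zcrit / math.sqrt(n - 3)
    return math.tanh(z + half) - math.tanh(z - half)

def n_for_width(r, target_width):
    """Smallest n whose 95% CI for an expected correlation r is no wider than target_width."""
    n = 4
    while ci_width(r, n) > target_width:
        n += 1
    return n

# e.g. estimating an expected r of 0.30 to within a CI width of 0.2 needs roughly 320 observations
print(n_for_width(r=0.30, target_width=0.20))
```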

  12. A reliability evaluation methodology for memory chips for space applications when sample size is small

    Science.gov (United States)

    Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

This paper presents a reliability evaluation methodology to obtain the statistical reliability information of memory chips for space applications when the test sample size needs to be kept small because of the high cost of radiation-hardened memories.

  13. Sample size estimation for correlations with pre-specified confidence interval

    OpenAIRE

    Murray Moinester; Ruth Gottfried

    2014-01-01

A common measure of association between two variables x and y is the bivariate Pearson correlation coefficient rho(x,y) that characterizes the strength and direction of any linear relationship between x and y. This article describes how to determine the optimal sample size for bivariate correlations, reviews available methods, and discusses their different ranges of applicability. A convenient equation is derived to help plan sample size for correlations by confidence interval analysis. In add...

  14. Operational risk models and maximum likelihood estimation error for small sample-sizes

    OpenAIRE

    Paul Larsen

    2015-01-01

    Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample-sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by as...
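    As a rough illustration of the small-sample issue (not the authors' study design), the sketch below re-estimates the shape parameter of a lognormal severity distribution from many small samples and compares the empirical mean and spread of the MLEs with the large-sample (asymptotic) standard error; the parameter values and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 10.0, 2.0                          # "true" lognormal severity parameters (arbitrary)

def mle_sigma(sample):
    """Maximum likelihood estimate of the lognormal shape parameter."""
    return np.log(sample).std(ddof=0)

for n in (20, 50, 500):
    estimates = np.array([mle_sigma(rng.lognormal(mu, sigma, n)) for _ in range(4000)])
    asymptotic_se = sigma / np.sqrt(2 * n)     # large-sample standard error of the sigma MLE
    print(f"n={n:4d}  mean_est={estimates.mean():.3f}  sd_est={estimates.std():.3f}  "
          f"asymptotic_se={asymptotic_se:.3f}")
```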

  15. Sample Size Planning for Longitudinal Models: Accuracy in Parameter Estimation for Polynomial Change Parameters

    Science.gov (United States)

    Kelley, Ken; Rausch, Joseph R.

    2011-01-01

    Longitudinal studies are necessary to examine individual change over time, with group status often being an important variable in explaining some individual differences in change. Although sample size planning for longitudinal studies has focused on statistical power, recent calls for effect sizes and their corresponding confidence intervals…

  16. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

    Science.gov (United States)

    Schoeneberger, Jason A.

    2016-01-01

    The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

  17. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    Science.gov (United States)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  18. Shrinkage anisotropy characteristics from soil structure and initial sample/layer size

    CERN Document Server

    Chertkov, V Y

    2014-01-01

    The objective of this work is a physical prediction of such soil shrinkage anisotropy characteristics as variation with drying of (i) different sample/layer sizes and (ii) the shrinkage geometry factor. With that, a new presentation of the shrinkage anisotropy concept is suggested through the sample/layer size ratios. The work objective is reached in two steps. First, the relations are derived between the indicated soil shrinkage anisotropy characteristics and three different shrinkage curves of a soil relating to: small samples (without cracking at shrinkage), sufficiently large samples (with internal cracking), and layers of similar thickness. Then, the results of a recent work with respect to the physical prediction of the three shrinkage curves are used. These results connect the shrinkage curves with the initial sample size/layer thickness as well as characteristics of soil texture and structure (both inter- and intra-aggregate) as physical parameters. The parameters determining the reference shrinkage c...

  19. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
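    For orientation, the calculation such calculators perform for one common case is sketched below: the probability of acceptance of a single-sided variables plan (n, k) under a normal model with unknown standard deviation, evaluated with the noncentral t distribution. The plan values are illustrative and are not those tested in the report.

```python
import numpy as np
from scipy.stats import norm, nct

def prob_accept(n, k, p_nonconforming):
    """P(accept) for a variables plan: accept if (USL - xbar)/s >= k, normal data, unknown sigma."""
    z_p = norm.ppf(1.0 - p_nonconforming)          # spec limit distance in SD units
    # sqrt(n)*(USL - xbar)/s follows a noncentral t with n-1 df and noncentrality sqrt(n)*z_p
    return float(nct.sf(k * np.sqrt(n), n - 1, np.sqrt(n) * z_p))

for p in (0.01, 0.05, 0.10):                       # true fraction nonconforming
    print(p, round(prob_accept(n=20, k=1.7, p_nonconforming=p), 3))
```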

  20. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments

    OpenAIRE

    Geweke, John F.

    1991-01-01

    Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither independent nor identically distributed complicates the assessment of convergence and numerical accuracy of the approximations to the expected value of functions of interest under the posterior. In this paper methods for spectral analysis are used to evaluate numerical accuracy formally and construc...
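    The practical problem is that autocorrelated draws inflate the numerical standard error of a posterior-mean estimate well beyond the naive s/sqrt(n). The sketch below illustrates this with batch means as a simple stand-in for the spectral estimator discussed in the paper, applied to a toy AR(1) chain rather than real sampler output.

```python
import numpy as np

rng = np.random.default_rng(1)

def batch_means_nse(draws, n_batches=50):
    """Numerical standard error of the mean from batch means of a correlated chain."""
    batch = len(draws) // n_batches
    means = draws[: batch * n_batches].reshape(n_batches, batch).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)

# Toy "sampler" output: an AR(1) chain with autocorrelation 0.9 and stationary variance 1.
n, rho = 20000, 0.9
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal(scale=np.sqrt(1 - rho**2))

print("naive SE :", x.std(ddof=1) / np.sqrt(n))
print("batch NSE:", batch_means_nse(x))   # noticeably larger for correlated draws
```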

  1. Use of pharmacogenetics in bioequivalence studies to reduce sample size: an example with mirtazapine and CYP2D6.

    Science.gov (United States)

    González-Vacarezza, N; Abad-Santos, F; Carcas-Sansuan, A; Dorado, P; Peñas-Lledó, E; Estévez-Carrizo, F; Llerena, A

    2013-10-01

In bioequivalence studies, intra-individual variability (CV(w)) is critical in determining sample size. In particular, highly variable drugs may require enrollment of a greater number of subjects. We hypothesize that a strategy to reduce pharmacokinetic CV(w), and hence sample size and costs, would be to include subjects with decreased metabolic enzyme capacity for the drug under study. Therefore, two mirtazapine studies with a two-way, two-period crossover design (n=68) were re-analysed to calculate the total CV(w) and the CV(w)s in three different CYP2D6 genotype groups (0, 1 and ≥ 2 active genes). The results showed that a 29.2 or 15.3% sample size reduction would have been possible if the recruitment had been of individuals carrying just 0 or 0 plus 1 CYP2D6 active genes, due to the lower CV(w). This suggests that there may be a role for pharmacogenetics in the design of bioequivalence studies to reduce sample size and costs, thus introducing a new paradigm for the biopharmaceutical evaluation of drug products. PMID:22733239
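    To see how CV(w) drives the required sample size, the sketch below applies a commonly used normal-approximation formula for a 2x2 crossover bioequivalence trial with limits 0.80-1.25 and an assumed true ratio of 1.0; it is a generic illustration, not a re-analysis of the mirtazapine data, and the CV values passed in are hypothetical.

```python
import math
from scipy.stats import norm

def be_total_n(cv_w, alpha=0.05, power=0.80):
    """Approximate total N for a 2x2 crossover TOST with true ratio 1 and limits 0.80-1.25."""
    s_w = math.sqrt(math.log(cv_w**2 + 1.0))           # within-subject SD on the log scale
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - (1 - power) / 2)
    n = 2.0 * (z_a + z_b) ** 2 * s_w**2 / math.log(1.25) ** 2
    return math.ceil(n / 2) * 2                        # round up to an even total

# Hypothetical intra-subject CVs of 30% and 24%: the lower CV roughly halves the required N.
print(be_total_n(0.30), be_total_n(0.24))
```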

  2. Bolton tooth size ratio among Sudanese Population sample: A preliminary study

    OpenAIRE

    Abdalla Hashim, Ala’a Hayder; Eldin, AL-Hadi Mohi; Hashim, Hayder Abdalla

    2015-01-01

    Background: The study of the mesiodistal size, the morphology of teeth and dental arch may play an important role in clinical dentistry, as well as other sciences such as Forensic Dentistry and Anthropology. Aims: The aims of the present study were to establish tooth-size ratio in Sudanese sample with Class I normal occlusion, to compare the tooth-size ratio between the present study and Bolton's study and between genders. Materials and Methods: The sample consisted of dental casts of 60 subj...

  3. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique from an H2/O2/Tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. The particle morphology and size comparisons are made between the particles sampled by the local thermophoretic method from the inside of the flame and by the electrostatic collector sampling method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements. TEM image analysis of the two sampling methods showed good agreement with the SMPS measurements. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, the effect of coalescence or sintering becomes an important particle growth mechanism, which reduces the coagulation process. However, if the flame temperature is not high enough to sinter the aggregated particles, then coagulation is the dominant particle growth mechanism. Under certain flame conditions a secondary particle formation is observed, which results in a bimodal particle size distribution

  4. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    Science.gov (United States)

    Finch, W. Holmes; Finch, Maria E. Hernandez

    2016-01-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…

  5. Computer program for sample sizes required to determine disease incidence in fish populations

    Science.gov (United States)

    Ossiander, Frank J.; Wedemeyer, Gary

    1973-01-01

    A computer program is described for generating the sample size tables required in fish hatchery disease inspection and certification. The program was designed to aid in detection of infectious pancreatic necrosis (IPN) in salmonids, but it is applicable to any fish disease inspection when the sampling plan follows the hypergeometric distribution.
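    Although the original program and its tables are not reproduced here, the underlying hypergeometric calculation can be sketched as follows: find the smallest sample such that at least one infected fish would be detected with the stated confidence. The lot size, prevalence, and confidence level in the example are illustrative.

```python
from scipy.stats import hypergeom

def detection_sample_size(population, prevalence, confidence=0.95):
    """Smallest n that detects at least one infected fish with the given confidence."""
    infected = max(1, round(population * prevalence))        # infected fish in the lot
    for n in range(1, population + 1):
        p_miss = hypergeom.pmf(0, population, infected, n)   # P(no infected fish in the sample)
        if 1.0 - p_miss >= confidence:
            return n
    return population

# A lot of 10,000 fish with 2% assumed prevalence needs roughly 150 fish for 95% confidence.
print(detection_sample_size(population=10000, prevalence=0.02))
```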

  6. Minimum Sample Size for Cronbach's Coefficient Alpha: A Monte-Carlo Study

    Science.gov (United States)

    Yurdugul, Halil

    2008-01-01

    The coefficient alpha is the most widely used measure of internal consistency for composite scores in the educational and psychological studies. However, due to the difficulties of data gathering in psychometric studies, the minimum sample size for the sample coefficient alpha has been frequently debated. There are various suggested minimum sample…

  7. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  8. Is there an alternative to increasing the sample size in microarray studies?

    OpenAIRE

    Klebanov, Lev; Yakovlev, Andrei

    2007-01-01

    Our answer to the question posed in the title is negative. This intentionally provocative note discusses the issue of sample size in microarray studies from several angles. We suggest that the current view of microarrays as no more than a screening tool be changed and small sample studies no longer be considered appropriate.

  9. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    Science.gov (United States)

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  10. Optimal Inspection Policy for Three-state Systems Monitored by Variable Sample Size Control Charts

    OpenAIRE

    Wu, Shaomin

    2011-01-01

    This paper presents the expected long-run cost per unit time for a system monitored by an adaptive control chart with variable sample sizes: if the control chart signals that the system is out of control, the sampling which follows will be conducted with a larger sample size. The system is supposed to have three states: in-control, out-of-control, and failed. Two levels of repair are applied to maintain the system. A minor repair will be conducted if an assignable cause is c...

  11. Sample size for collecting germplasms – a polyploid model with mixed mating system

    Indian Academy of Sciences (India)

    R L Sapra; Prem Narain; S V S Chauhan; S K Lal; B B Singh

    2003-03-01

The present paper discusses a general expression for determining the minimum sample size (plants) for a given number of seeds, or vice versa, for capturing multiple allelic diversity. The model considers sampling from a large 2k-ploid population under a broad range of mating systems. Numerous expressions/results developed for germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values to the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to aid collectors in selecting the appropriate combination of number of plants and seeds per plant. When genotypic multiplicity of seeds is taken into consideration, a sample size of even less than 172 plants can conserve diversity of 20 alleles from 50,000 polymorphic loci with a very large probability of conservation (0.9999) in most of the cases.
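    A deliberately simplified version of the capture-probability idea is sketched below for a single allele under random mating with independent gene copies; the paper's general expression additionally handles mixed mating and the genotypic multiplicity of seeds, which this sketch ignores.

```python
import math

def plants_needed(p, gene_copies_per_plant, target=0.9999):
    """Smallest number of plants so that P(capture an allele of frequency p) >= target,
    assuming independent gene copies: P = 1 - (1 - p)^(copies * n)."""
    return math.ceil(math.log(1.0 - target) / (gene_copies_per_plant * math.log(1.0 - p)))

# Diploid example (2 gene copies per plant) with an allele at frequency 0.05.
print(plants_needed(p=0.05, gene_copies_per_plant=2))
```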

  12. Optimisation of the T-square sampling method to estimate population sizes

    OpenAIRE

    Chalabi Zaid; Bostoen Kristof; Grais Rebecca F

    2007-01-01

    Abstract Population size and density estimates are needed to plan resource requirements and plan health related interventions. Sampling frames are not always available necessitating surveys using non-standard household sampling methods. These surveys are time-consuming, difficult to validate, and their implementation could be optimised. Here, we discuss an example of an optimisation procedure for rapid population estimation using T-Square sampling which has been used recently to estimate popu...

  13. Sampling benthic macroinvertebrates in a large flood-plain river: Considerations of study design, sample size, and cost

    Science.gov (United States)

    Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.

    1998-01-01

Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm²) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or Chironomidae and Musculium in both strata given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpson's) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites

  14. Analysis of Sample Size for Variables Related to Plant, Soil, and Soil Microbial Respiration in a Paddy Rice Field

    OpenAIRE

    Confalonieri, Roberto; Perego, Alessia; CHIODINI Marcello Ermido; SCAGLIA Barbara; ROSENMUND Alexandra; Acutis, Marco

    2009-01-01

Pre-sampling for sample size determination is strongly recommended to ensure the reliability of collected data. However, there is a dearth of references on sample size determination in field experiments, and differences in sample size under different management conditions, plant traits, varieties grown, and crop ages have seldom, if ever, been identified. In order to analyze differences in sample size for some of the variables measurable in rice field experiments, the visual jackknife me...

  15. Sample-size guidelines for recalibrating crash prediction models: Recommendations for the highway safety manual.

    Science.gov (United States)

    Shirazi, Mohammadali; Lord, Dominique; Geedipally, Srinivas Reddy

    2016-08-01

    The Highway Safety Manual (HSM) prediction models are fitted and validated based on crash data collected from a selected number of states in the United States. Therefore, for a jurisdiction to be able to fully benefit from applying these models, it is necessary to calibrate or recalibrate them to local conditions. The first edition of the HSM recommends calibrating the models using a one-size-fits-all sample-size of 30-50 locations with total of at least 100 crashes per year. However, the HSM recommendation is not fully supported by documented studies. The objectives of this paper are consequently: (1) to examine the required sample size based on the characteristics of the data that will be used for the calibration or recalibration process; and, (2) propose revised guidelines. The objectives were accomplished using simulation runs for different scenarios that characterized the sample mean and variance of the data. The simulation results indicate that as the ratio of the standard deviation to the mean (i.e., coefficient of variation) of the crash data increases, a larger sample-size is warranted to fulfill certain levels of accuracy. Taking this observation into account, sample-size guidelines were prepared based on the coefficient of variation of the crash data that are needed for the calibration process. The guidelines were then successfully applied to the two observed datasets. The proposed guidelines can be used for all facility types and both for segment and intersection prediction models. PMID:27183517
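    In the same spirit as those simulation runs (though not reproducing them), the sketch below asks how often a calibration factor estimated from n sites lands within ±10% of its true value when site-to-site variability has a given coefficient of variation; the gamma-Poisson data-generating model and all parameter values are assumptions of the sketch, not the HSM procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

def within_10pct(n_sites, mean, cv, runs=5000):
    """Fraction of simulated calibrations within +/-10% of the true factor (1.0)."""
    shape = 1.0 / cv**2                                  # gamma site means with the requested mean and CV
    hits = 0
    for _ in range(runs):
        site_means = rng.gamma(shape, mean / shape, n_sites)
        observed = rng.poisson(site_means)               # observed crashes per site
        cal = observed.sum() / site_means.sum()          # calibration factor; true value is 1.0
        hits += abs(cal - 1.0) <= 0.10
    return hits / runs

for n in (30, 50, 100, 200):                             # includes the HSM's 30-50 site range
    print(n, within_10pct(n, mean=2.0, cv=1.0))          # larger CV -> more sites needed
```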

  16. The calculation of a size correction factor for a whole-body counter

    Science.gov (United States)

    Carinou, E.; Koukouliou, V.; Budayova, M.; Potiriadis, C.; Kamenopoulou, V.

    2007-09-01

Whole-body counting techniques use radiation detectors to evaluate internal exposure from radionuclides. The Whole-Body Counter (WBC) of the Greek Atomic Energy Commission (GAEC) is used for in vivo measurements of workers for routine purposes as well as for the public in case of an emergency. The system has been calibrated using the phantom provided by CANBERRA (RMC phantom) in combination with solid and point sources. Furthermore, four bottle phantoms of different sizes have been used to calibrate the system to measure potassium, 40K, for workers of different sizes. However, the use of different phantoms in combination with different sources is time consuming and expensive. Moreover, the purchase and construction of the reference standards require specific expertise. An alternative option is the use of Monte Carlo simulation. In this study, the Monte Carlo technique was first validated using the 40K measurements of the four phantoms. After validation of the methodology, the Monte Carlo code MCNP was used with the same simulated phantom-detector geometries and different sources in order to calculate the efficiency of the system for different photon energies in the four phantoms. The simulation energies correspond to the following radionuclides: 131I, 137Cs, 60Co, and 88Y. A size correction calibration factor has been defined in order to correct the efficiency of the system for the different phantoms and energies for a uniform distribution. The factors vary from 0.64 to 1.51 depending on the phantom size and photon energy.

  17. Thoracic size-selective sampling of fibres: performance of four types of thoracic sampler in laboratory tests.

    Science.gov (United States)

    Jones, A D; Aitken, R J; Fabriès, J F; Kauffer, E; Liden, G; Maynard, A; Riediger, G; Sahle, W

    2005-08-01

The counting of fibres on membrane filters could be facilitated by using size-selective samplers to exclude coarse particulate and fibres that impede fibre counting. Furthermore, the use of thoracic size selection would also remove the present requirement to discriminate fibres by diameter during counting. However, before thoracic samplers become acceptable for sampling fibres, their performance with fibres needs to be determined. This study examines the performance of four thoracic samplers: the GK2.69 cyclone, a Modified SIMPEDS cyclone, the CATHIA sampler (inertial separation) and the IOM thoracic sampler (porous foam pre-selector). The uniformity of sample deposit on the filter samples, which is important when counts are taken on random fields, was examined with two sizes of spherical particles (1 and 10 μm) and a glass fibre aerosol with fibres spanning the aerodynamic size range of the thoracic convention. Counts by optical microscopy examined fields on a set scanning pattern. Hotspots of deposition were detected for one of the thoracic samplers (Modified SIMPEDS with the 10 μm particles and the fibres). These hotspots were attributed to the inertial flow pattern near the port from the cyclone pre-separator. For the other three thoracic samplers, the distribution was similar to that on a cowled sampler, the current standard sampler for fibres. Aerodynamic selection was examined by comparing fibre concentration on thoracic samples with those measured on semi-isokinetic samples, using fibre size (and hence calculated aerodynamic diameter) and number data obtained by scanning electron microscope evaluation in four laboratories. The size-selection characteristics of three thoracic samplers (GK2.69, Modified SIMPEDS and CATHIA) appeared very similar to the thoracic convention; there was a slight oversampling (relative to the convention) for d(ae) < 7 μm, but that would not be disadvantageous for comparability with the cowled sampler. Only the IOM

  18. The Interaction of Entropy-Based Discretization and Sample Size: An Empirical Study

    CERN Document Server

    Bennett, Casey

    2012-01-01

    An empirical investigation of the interaction of sample size and discretization - in this case the entropy-based method CAIM (Class-Attribute Interdependence Maximization) - was undertaken to evaluate the impact and potential bias introduced into data mining performance metrics due to variation in sample size as it impacts the discretization process. Of particular interest was the effect of discretizing within cross-validation folds averse to outside discretization folds. Previous publications have suggested that discretizing externally can bias performance results; however, a thorough review of the literature found no empirical evidence to support such an assertion. This investigation involved construction of over 117,000 models on seven distinct datasets from the UCI (University of California-Irvine) Machine Learning Library and multiple modeling methods across a variety of configurations of sample size and discretization, with each unique "setup" being independently replicated ten times. The analysis revea...

  19. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
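    A simplified version of such an experiment is sketched below. It uses a continuous rating and the empirical (Mann-Whitney) AUC rather than the 6-point scale and fitted binormal ROC of the study, so it illustrates the simulation design but will not reproduce the reported bias exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_auc(noise, signal):
    """Mann-Whitney estimate of P(signal rating > noise rating), with ties counted as 0.5."""
    greater = (signal[:, None] > noise[None, :]).mean()
    ties = (signal[:, None] == noise[None, :]).mean()
    return greater + 0.5 * ties

def mean_auc(sample_size, separation=1.0, runs=2000):
    aucs = []
    for _ in range(runs):
        noise = rng.normal(0.0, 1.0, sample_size)
        signal = rng.normal(separation, 1.0, sample_size)  # equal-variance binormal case
        aucs.append(empirical_auc(noise, signal))
    return float(np.mean(aucs))

for n in (15, 25, 50, 100):                 # the sample sizes examined in the abstract
    print(n, round(mean_auc(n), 4))         # compare with the true AUC of about 0.760
```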

  20. Mesh-size effects on drift sample composition as determined with a triple net sampler

    Science.gov (United States)

    Slack, K.V.; Tilley, L.J.; Kennelly, S.S.

    1991-01-01

Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 μm, 209 μm and 106 μm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 μm and 209 μm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 μm and 106 μm samples and midday 425 μm samples. Large drifters (Ephemerellidae) occurred only in 425 μm or 209 μm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 μm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.

  1. On the role of dimensionality and sample size for unstructured and structured covariance matrix estimation

    Science.gov (United States)

    Morgera, S. D.; Cooper, D. B.

    1976-01-01

The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.

  2. Minimum sample size for detection of Gutenberg-Richter's b-value

    CERN Document Server

    Kamer, Yavor

    2014-01-01

    In this study we address the question of the minimum sample size needed for distinguishing between Gutenberg-Richter distributions with varying b-values at different resolutions. In order to account for both the complete and incomplete parts of a catalog we use the recently introduced angular frequency magnitude distribution (FMD). Unlike the gradually curved FMD, the angular FMD is fully compatible with Aki's maximum likelihood method for b-value estimation. To obtain generic results we conduct our analysis on synthetic catalogs with Monte Carlo methods. Our results indicate that the minimum sample size used in many studies is strictly below the value required for detecting significant variations.

  3. Consideration of sample size for estimating contaminant load reductions using load duration curves

    Science.gov (United States)

    Babbar-Sebens, Meghna; Karthikeyan, R.

    2009-06-01

In Total Maximum Daily Load (TMDL) programs, load duration curves are often used to estimate reduction of contaminant loads in a watershed. A popular method for calculating these load reductions involves estimation of the 90th percentiles of monitored contaminant concentrations during different hydrologic conditions. However, water quality monitoring is expensive and can pose major limitations in collecting enough data. Availability of scarce water quality data can, therefore, deteriorate the precision in the estimates of the 90th percentiles, which, in turn, affects the accuracy of estimated load reductions. This paper proposes an adaptive sampling strategy that the data collection agencies can use for not only optimizing their collection of new samples across different hydrologic conditions, but also ensuring that newly collected samples provide opportunity for best possible improvements in the precision of the estimated 90th percentile with minimum sampling costs. The sampling strategy was used to propose sampling plans for Escherichia coli monitoring in an actual stream and different sampling procedures of the strategy were tested for hypothetical stream data. Results showed that improvement in precision using the proposed distributed sampling procedure is much better and faster than that attained via the lumped sampling procedure, for the same sampling cost. Hence, it is recommended that when agencies have a fixed sampling budget, they should collect samples in consecutive monitoring cycles as proposed by the distributed sampling procedure, rather than investing all their resources in only one monitoring cycle.
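    The precision question at the core of this strategy can be illustrated with a small bootstrap experiment (not the authors' procedure): for synthetic, lognormally distributed concentrations, the width of a bootstrap confidence interval for the 90th percentile narrows as the number of samples in a hydrologic condition grows. All distribution parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def p90_ci_width(n_samples, n_boot=2000):
    """Width of a 95% bootstrap CI for the 90th percentile of a synthetic concentration sample."""
    data = rng.lognormal(mean=5.0, sigma=1.2, size=n_samples)   # synthetic E. coli concentrations
    boots = [np.percentile(rng.choice(data, n_samples, replace=True), 90)
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return hi - lo

for n in (10, 20, 40, 80):
    print(n, round(p90_ci_width(n), 1))    # the interval narrows as sampling effort increases
```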

  4. Magnetic entropy change calculated from first principles based statistical sampling technique: Ni2 MnGa

    Science.gov (United States)

    Odbadrakh, Khorgolkhuu; Nicholson, Don; Eisenbach, Markus; Brown, Gregory; Rusanu, Aurelian; Materials Theory Group Team

    2014-03-01

Magnetic entropy change in Magneto-caloric Effect materials is one of the key parameters in choosing materials appropriate for magnetic cooling and offers insight into the coupling between the materials' thermodynamic and magnetic degrees of freedom. We present a computational workflow to calculate the change of magnetic entropy due to a magnetic field using DFT-based statistical sampling of the energy landscape of Ni2MnGa. The statistical density of magnetic states is calculated with Wang-Landau sampling, and energies are calculated with the Locally Self-consistent Multiple Scattering technique. The high computational cost of calculating the energy of each state from first principles is tempered by exploiting a model Hamiltonian fitted to the DFT-based sampling. The workflow is described and justified. The magnetic adiabatic temperature change calculated from the statistical density of states agrees with the experimentally obtained value in the absence of structural transformation. The study also reveals that the magnetic subsystem alone cannot explain the large MCE observed in Ni2MnGa alloys. This work was performed at the ORNL, which is managed by UT-Batelle for the U.S. DOE. It was sponsored by the Division of Material Sciences and Engineering, OBES. This research used resources of the OLCF at ORNL, which is supported by the Office of Science of the U.S. DOE under Contract DE-AC05-00OR22725.

  5. Simulation analyses of space use: Home range estimates, variability, and sample size

    Science.gov (United States)

    Bekoff, M.; Mech, L.D.

    1984-01-01

Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should ascertain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also have to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.

  6. 40 CFR 600.211-08 - Sample calculation of fuel economy values for labeling.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample calculation of fuel economy values for labeling. 600.211-08 Section 600.211-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES...

  7. 40 CFR Appendix III to Part 600 - Sample Fuel Economy Label Calculation

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Label Calculation III Appendix III to Part 600 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App....

  8. Minimum graft size calculated from preoperative recipient status in living donor liver transplantation.

    Science.gov (United States)

    Marubashi, Shigeru; Nagano, Hiroaki; Eguchi, Hidetoshi; Wada, Hiroshi; Asaoka, Tadafumi; Tomimaru, Yoshito; Tomokuni, Akira; Umeshita, Koji; Doki, Yuichiro; Mori, Masaki

    2016-05-01

Small-for-size graft syndrome is an inevitable complication in living donor liver transplantation (LDLT). We hypothesized that graft weight (GW) measured after graft procurement is one of the variables predicting postoperative graft function. A total of 138 consecutive recipients of adult-to-adult LDLT between March 1999 and October 2014 were included in this study. We investigated the factors associated with small-for-size-associated graft loss (SAGL) to determine the GW required for each patient. Both preoperatively assessed and postoperatively obtained risk factors for SAGL were analyzed in univariate and multivariate logistic regression analyses. Twelve (8.8%) of the transplant recipients had SAGL. In multivariate logistic regression analyses using preoperatively assessed variables, the preoperative Model for End-Stage Liver Disease (MELD) score and the graft volume to recipient standard liver volume (SLV) ratio (P = 0.008) were independent predictors of SAGL. The recommended graft volume by preoperative computed tomography volumetry was calculated as SLV × (1.616 × MELD + 0.344)/100/0.85 (mL) [MELD ≥ 18.2], or SLV × 0.35 (mL) [MELD < 18.2]. The minimum graft size thus differs for each recipient, and patients with higher MELD scores require larger grafts or deceased donor whole liver transplantation to avoid SAGL. Liver Transplantation 22 599-606 2016 AASLD. PMID:26684397
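    The quoted rule is easy to express in code. The sketch below is a direct transcription of the formula as stated in the abstract, with a hypothetical recipient as the example; it is an illustration, not clinical software.

```python
def recommended_graft_volume(slv_ml, meld):
    """Recommended graft volume (mL) from the abstract's rule; slv_ml is the recipient's SLV in mL."""
    if meld >= 18.2:
        return slv_ml * (1.616 * meld + 0.344) / 100.0 / 0.85
    return slv_ml * 0.35

# Hypothetical recipient with SLV = 1200 mL and MELD = 25: roughly 575 mL recommended.
print(round(recommended_graft_volume(slv_ml=1200.0, meld=25.0), 1))
```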

  9. Monte Carlo calculations for gamma-ray mass attenuation coefficients of some soil samples

    International Nuclear Information System (INIS)

    Highlights: • Gamma-ray mass attenuation coefficients of soils. • Radiation shielding properties of soil. • Comparison of calculated results with the theoretical and experimental ones. • The method can be applied to various media. - Abstract: We developed a simple Monte Carlo code to determine the mass attenuation coefficients of some soil samples at nine different gamma-ray energies (59.5, 80.9, 122.1, 159.0, 356.5, 511.0, 661.6, 1173.2 and 1332.5 keV). Results of the Monte Carlo calculations have been compared with tabulations based upon the results of photon cross section database (XCOM) and with experimental results by other researchers for the same samples. The calculated mass attenuation coefficients were found to be very close to the theoretical values and the experimental results
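    The narrow-beam idea behind such codes can be illustrated with a toy simulation (not the authors' code): photons are tracked through a slab with a known linear attenuation coefficient, and the simulated transmission is compared with exp(-μx), from which the coefficient could be recovered. The coefficient and thickness below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulated_transmission(mu_cm, thickness_cm, n_photons=200_000):
    """Fraction of photons crossing the slab without interacting (narrow-beam geometry)."""
    path = rng.exponential(1.0 / mu_cm, n_photons)   # free path to the first interaction
    return float((path > thickness_cm).mean())

mu, x = 0.2, 3.0
t_sim = simulated_transmission(mu, x)
print(t_sim, np.exp(-mu * x))                        # the two should agree closely
print(-np.log(t_sim) / x)                            # attenuation coefficient recovered from the simulation
```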

  10. Beta dose attenuation and calculations of effective grainsize in brick samples

    International Nuclear Information System (INIS)

An assumption commonly made in estimating the beta-ray contribution to individual quartz grains in bricks and tiles is that, prior to extraction, all grains used for thermoluminescence (TL) analysis existed as discrete grains embedded in a clay matrix. We have found in many Utah brick samples that this assumption does not hold, and that the grains used for analysis were largely derived from clumps or agglomerations of quartz grains of up to 3 mm in diameter. If we apply beta-ray attenuation factors appropriate for the grain sizes actually analyzed (150-250 μm) rather than those appropriate for the agglomerations from which the analyzed samples were derived, errors in measurement of the β-ray contribution in excess of 50% can result. We present details of a computer model for determining effective grain size in bricks and tiles based upon microscopic examination of sample sections. (author)

  11. The potential of using xylarium wood samples for wood density calculations: a comparison of approaches for volume measurement

    Directory of Open Access Journals (Sweden)

    Beeckman H

    2011-08-01

Full Text Available Wood specific gravity (WSG) is an important biometric variable for aboveground biomass calculations in tropical forests. Sampling a sufficient number of trees in remote tropical forests to represent the species and size distribution of a forest to generate information on WSG can be logistically challenging. Several thousand wood samples exist in xylaria around the world that are easily accessible to researchers. We propose the use of wood samples held in xylaria as a valid and overlooked option. Due to the nature of xylarium samples, determining wood volume to calculate WSG presents several challenges. A description and assessment are provided of five different methods to measure wood sample volume: two solid displacement methods and three liquid displacement methods (hydrostatic methods). Two methods were specifically developed for this paper: the use of laboratory parafilm to wrap the wood samples for the hydrostatic method and two glass microbead devices for the solid displacement method. We find that the hydrostatic method with samples not wrapped in laboratory parafilm is the most accurate and preferred method. The two methods developed for this study give close agreement with the preferred method (r² > 0.95). We show that volume can be estimated accurately for xylarium samples with the proposed methods. Additionally, the WSG for 53 species was measured using the preferred method. Significant differences exist between the WSG means of the measured species and the WSG means in an existing density database. Finally, for 4 genera in our dataset, the genus-level WSG average is representative of the species-level WSG average.

  12. ED-XRF set-up for size-segregated aerosol samples analysis

    OpenAIRE

    Bernardoni, V.; E. Cuccia; G. Calzolai; Chiari, M.; Lucarelli, F.; D. Massabo; Nava, S.; Prati, P.; Valli, G; Vecchi, R.

    2011-01-01

    The knowledge of size-segregated elemental concentrations in atmospheric particulate matter (PM) gives a useful contribution to the complete chemical characterisation; this information can be obtained by sampling with multi-stage cascade impactors. In this work, samples were collected using a low-pressure 12-stage Small Deposit Impactor and a 13-stage rotating Micro Orifice Uniform Deposit Impactor™. Both impactors collect the aerosol in an inhomogeneous geometry, which needs a special set-up...

  13. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)

  14. Sample Size and Repeated Measures Required in Studies of Foods in the Homes of African-American Families123

    OpenAIRE

    Stevens, June; Bryant, Maria; Wang, Chin-Hua; Cai, Jianwen; Bentley, Margaret E.

    2012-01-01

    Measurement of the home food environment is of interest to researchers because it affects food intake and is a feasible target for nutrition interventions. The objective of this study was to provide estimates to aid the calculation of sample size and number of repeated measures needed in studies of nutrients and foods in the home. We inventoried all foods in the homes of 80 African-American first-time mothers and determined 6 nutrient-related attributes. Sixty-three households were measured 3...

  15. Experimental and calculational analyses of actinide samples irradiated in EBR-II

    International Nuclear Information System (INIS)

    Higher actinides influence the characteristics of spent and recycled fuel and dominate the long-term hazards of the reactor waste. Reactor irradiation experiments provide useful benchmarks for testing the evaluated nuclear data for these actinides. During 1967 to 1970, several actinide samples were irradiated in the Idaho EBR-II fast reactor. These samples have now been analyzed, employing mass and alpha spectrometry, to determine the heavy element products. A simple spherical model for the EBR-II core and a recent version of the ORIGEN code with ENDF/B-V data were employed to calculate the exposure products. A detailed comparison between the experimental and calculated results has been made. For samples irradiated at locations near the core center, agreement within 10% was obtained for the major isotopes and their first daughters, and within 20% for the nuclides up the chain. A sensitivity analysis showed that the assumed flux should be increased by 10%

  16. Sample-Size Effects on the Compression Behavior of a Ni-BASED Amorphous Alloy

    Science.gov (United States)

    Liang, Weizhong; Zhao, Guogang; Wu, Linzhi; Yu, Hongjun; Li, Ming; Zhang, Lin

Ni42Cu5Ti20Zr21.5Al8Si3.5 bulk metallic glass rods with diameters of 1 mm and 3 mm were prepared by arc melting of the constituent elements in a Ti-gettered argon atmosphere. The compressive deformation and fracture behavior of amorphous alloy samples of different sizes were investigated by a testing machine and scanning electron microscopy. The compressive stress-strain curves of the 1 mm and 3 mm samples exhibited 4.5% and 0% plastic strain, while the compressive fracture strengths of the 1 mm and 3 mm rods were 4691 MPa and 2631 MPa, respectively. The compressive fracture surfaces of the different sized samples consisted of a shear zone and a non-shear zone. Typical vein patterns with some melting droplets can be seen in the shear region of the 1 mm rod, while fish-bone patterns can be observed on the 3 mm specimen surface. Periodic ripples with different spacings exist on the non-shear zones of the 1 mm and 3 mm rods. On the side surface of the 1 mm sample, a high density of shear bands was observed, and the skip of shear bands can be seen on the 1 mm sample surface. The mechanisms of the effect of sample size on the fracture strength and plasticity of the Ni-based amorphous alloy are discussed.

  17. Pre-drilling calculation of geomechanical parameters for safe geothermal wells based on outcrop analogue samples

    Science.gov (United States)

    Reyer, Dorothea; Philipp, Sonja

    2014-05-01

It is desirable to enlarge the profit margin of geothermal projects by reducing the total drilling costs considerably. Substantiated assumptions on uniaxial compressive strengths and failure criteria are important to avoid borehole instabilities and to adapt the drilling plan to rock mechanical conditions in order to minimise non-productive time. Because core material is rare, we aim to predict in situ rock properties from outcrop analogue samples, which are easy and cheap to obtain. The comparability of properties determined from analogue samples with those of samples from depth is analysed by performing physical characterisation (P-wave velocities, densities), conventional triaxial tests, and uniaxial compressive strength tests on both quarry and equivalent core samples. "Equivalent" means that the quarry sample is of the same stratigraphic age and of comparable sedimentary facies and composition as the corresponding core sample. We determined the parameters uniaxial compressive strength (UCS) and Young's modulus for 35 rock samples from quarries and 14 equivalent core samples from the North German Basin. A subgroup of these samples was used for triaxial tests. For UCS versus Young's modulus, density and P-wave velocity, linear and non-linear regression analyses were performed. We repeated the regression separately for clastic rock samples only, carbonate rock samples only, quarry samples only, and core samples only. Empirical relations were used to calculate UCS values from existing logs of the sampled wellbore. The calculated UCS values were then compared with the measured UCS of core samples from the same wellbore. With triaxial tests we determined linearized Mohr-Coulomb failure criteria, expressed in both principal stresses and shear and normal stresses, for quarry samples. Comparison with samples from larger depths shows that it is possible to apply the obtained principal stress failure criteria to clastic and volcanic rocks, but less so to carbonates. Carbonate core samples have higher

  18. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  19. Evaluation of different sized blood sampling tubes for thromboelastometry, platelet function, and platelet count

    DEFF Research Database (Denmark)

    Andreasen, Jo Bønding; Pistor-Riebold, Thea Unger; Knudsen, Ingrid Hell;

    2014-01-01

    Background: To minimise the volume of blood used for diagnostic procedures, especially in children, we investigated whether the size of sample tubes affected whole blood coagulation analyses. Methods: We included 20 healthy individuals for rotational thromboelastometry (RoTEM®) analyses and compa...

  20. Analysis of variograms with various sample sizes from a multispectral image

    Science.gov (United States)

    Variogram plays a crucial role in remote sensing application and geostatistics. It is very important to estimate variogram reliably from sufficient data. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100x100-pixel subset was chosen from ...

  1. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    Science.gov (United States)

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  2. Sample Size Requirements in Single- and Multiphase Growth Mixture Models: A Monte Carlo Simulation Study

    Science.gov (United States)

    Kim, Su-Young

    2012-01-01

    Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…

  3. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    Science.gov (United States)

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  4. The Influence of Virtual Sample Size on Confidence and Causal-Strength Judgments

    Science.gov (United States)

    Liljeholm, Mimi; Cheng, Patricia W.

    2009-01-01

    The authors investigated whether confidence in causal judgments varies with virtual sample size--the frequency of cases in which the outcome is (a) absent before the introduction of a generative cause or (b) present before the introduction of a preventive cause. Participants were asked to evaluate the influence of various candidate causes on an…

  5. Analysis of variograms with various sample sizes from a multispectral image

    Science.gov (United States)

    Variograms play a crucial role in remote sensing application and geostatistics. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100 X 100 pixel subset was chosen from an aerial multispectral image which contained three wavebands, green, ...

  6. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    Science.gov (United States)

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  7. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  8. Effect of sample size on intermetallic Al2Cu microstructure and orientation evolution during directional solidification

    Science.gov (United States)

    Gao, Ka; Li, Shuangming; Xu, Lei; Fu, Hengzhi

    2014-05-01

    Al-40% Cu hypereutectic alloy samples were successfully directionally solidified at a growth rate of 10 μm/s in different sizes (4 mm, 1.8 mm, and 0.45 mm thickness in transverse section). Using the serial sectioning technique, the three-dimensional (3D) microstructure of the primary intermetallic Al2Cu phase of the alloy was observed to exhibit various growth patterns (L-shaped, E-shaped, and regular rectangular) with respect to growth orientations on the (110) and (310) planes. The L-shaped and regular rectangular Al2Cu phases are bounded by {110} facets. When the sample size was reduced from 4 mm to 0.45 mm, the solidified microstructure changed from multi-layer dendrites to a single-layer dendrite along the growth direction, and the orientation texture was then on the (310) plane. The growth mechanism of the regular faceted intermetallic Al2Cu at different sample sizes was interpreted by the oriented attachment (OA) mechanism. The experimental results showed that a directionally solidified Al-40% Cu alloy sample of much smaller size can achieve a well-aligned morphology with a specific growth texture.

  9. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Vol. 49, No. 4 (2015), pp. 239-249. ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords: filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.413, year: 2014

  10. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    According to the decision of the 2001 Workshop on Utilization of Research Reactor (Neutron Activation Analysis (NAA) Section), size fractionated particulate matter collection for NAA was started in 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural", respectively. At each site, two size fractions, namely "PM2-10" and "PM2" particles (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (sum of PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m3 in Tokyo and 0.022 mg/m3 in Sakata. (author)

  11. Forestry inventory based on multistage sampling with probability proportional to size

    Science.gov (United States)

    Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.

    1983-01-01

    A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.

  12. Dose calculation for 40K ingestion in samples of beans using spectrometry and MCNP

    International Nuclear Information System (INIS)

    A method based on gamma spectroscopy and on the use of voxel phantoms to calculate the dose due to ingestion of 40K contained in bean samples is presented in this work. To quantify the activity of the radionuclide, an HPGe detector was used and the data were entered in the input file of the MCNP code. The highest value of equivalent dose was 7.83 μSv.y-1 in the stomach for white beans, whose activity of 452.4 Bq.kg-1 was the highest of the five samples analyzed. The tool proved to be appropriate for calculating organ doses due to the ingestion of food. (author)
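
    A minimal sketch of the dose-from-ingestion arithmetic behind such an assessment is given below. It replaces the MCNP voxel-phantom organ-dose step of the abstract with the simpler committed-effective-dose product H = C x I x e_ing; the annual bean intake is a hypothetical value, and only the measured activity concentration is taken from the abstract.

```python
# Minimal sketch: committed effective dose from ingestion of 40K in beans.
# This replaces the paper's MCNP voxel-phantom organ-dose step with the
# simpler committed-effective-dose formula H = C * I * e_ing.
# Assumptions: annual bean intake of 16 kg (hypothetical) and the ICRP-72
# adult ingestion dose coefficient for 40K (6.2e-9 Sv/Bq).

ACTIVITY_CONC_BQ_PER_KG = 452.4   # measured 40K activity in white beans (from the abstract)
ANNUAL_INTAKE_KG = 16.0           # assumed annual consumption (hypothetical)
E_ING_SV_PER_BQ = 6.2e-9          # ICRP-72 ingestion dose coefficient for 40K, adults

def committed_dose_uSv_per_year(conc_bq_kg, intake_kg, e_ing):
    """Committed effective dose (microsievert per year) from ingested activity."""
    return conc_bq_kg * intake_kg * e_ing * 1e6

if __name__ == "__main__":
    dose = committed_dose_uSv_per_year(ACTIVITY_CONC_BQ_PER_KG, ANNUAL_INTAKE_KG, E_ING_SV_PER_BQ)
    print(f"Committed effective dose: {dose:.1f} uSv/y")
```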

  13. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a programme of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF) plugs whose porosity provides both the sampling substrate and the size-selective separation of the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  14. Performance of a reciprocal shaker in mechanical dispersion of soil samples for particle-size analysis

    Directory of Open Access Journals (Sweden)

    Thayse Aparecida Dourado

    2012-08-01

    Full Text Available The dispersion of the samples in soil particle-size analysis is a fundamental step, which is commonly achieved with a combination of chemical agents and mechanical agitation. The purpose of this study was to evaluate the efficiency of a low-speed reciprocal shaker for the mechanical dispersion of soil samples of different textural classes. The particle size of 61 soil samples was analyzed in four replications, using the pipette method to determine the clay fraction and sieving to determine coarse, fine and total sand fractions. The silt content was obtained by difference. To evaluate the performance, the results of the reciprocal shaker (RSh) were compared with data of the same soil samples available in reports of the Proficiency testing for Soil Analysis Laboratories of the Agronomic Institute of Campinas (Prolab/IAC). The accuracy was analyzed based on the maximum and minimum values defining the confidence intervals for the particle-size fractions of each soil sample. Graphical indicators were also used for data comparison, based on dispersion and linear adjustment. The descriptive statistics indicated predominantly low variability in more than 90 % of the results for sand, medium-textured and clay samples, and for 68 % of the results for heavy clay samples, indicating satisfactory repeatability of measurements with the RSh. Medium variability was frequently associated with silt, followed by the fine sand fraction. The sensitivity analyses indicated an accuracy of 100 % for the three main separates (total sand, silt and clay) in all 52 samples of the textural classes heavy clay, clay and medium. For the nine sand soil samples, the average accuracy was 85.2 %; highest deviations were observed for the silt fraction. In relation to the linear adjustments, the correlation coefficients of 0.93 (silt) or > 0.93 (total sand and clay), as well as the differences between the angular coefficients and the unit < 0.16, indicated a high correlation between the

  15. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    International Nuclear Information System (INIS)

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced

  16. MUDMASTER: A Program for Calculating Crystalline Size Distributions and Strain from the Shapes of X-Ray Diffraction Peaks

    Science.gov (United States)

    Eberl, D.D.; Drits, V.A.; Srodon, Jan; Nuesch, R.

    1996-01-01

    Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size can not be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming. It is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) solves these shortcomings. An X-ray scan of a sample occurs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.

  17. The Procalcitonin And Survival Study (PASS) – a randomised multi-center investigator-initiated trial to investigate whether daily measurements of the biomarker procalcitonin and pro-active diagnostic and therapeutic responses to abnormal procalcitonin levels can improve survival in intensive care unit patients. Calculated sample size (target population): 1000 patients

    Directory of Open Access Journals (Sweden)

    Fjeldborg Paul

    2008-07-01

    Full Text Available Abstract Background Sepsis and complications of sepsis are major causes of mortality in critically ill patients. Rapid treatment of sepsis is of crucial importance for patient survival. The infectious status of the critically ill patient is often difficult to assess because symptoms cannot be expressed and signs may present atypically. The established biological markers of inflammation (leucocytes, C-reactive protein) may often be influenced by parameters other than infection, and may be released unacceptably slowly after progression of an infection. At the same time, lack of relevant antimicrobial therapy early in the course of an infection may be fatal for the patient. Specific and rapid markers of bacterial infection have therefore been sought for use in these patients. Methods Multi-centre randomized controlled interventional trial. Powered for superiority and non-inferiority on all measured end points. Complies with "Good Clinical Practice" (ICH-GCP) Guideline (CPMP/ICH/135/95) and Directive 2001/20/EC. Inclusion: 1) age ≥ 18 years, 2) admitted to the participating intensive care units, 3) signed written informed consent. Exclusion: 1) known hyperbilirubinaemia or hypertriglyceridaemia, 2) safety likely to be compromised by blood sampling, 3) pregnant or breast feeding. Computerized randomisation: two arms (1:1, n = 500 per arm). Arm 1: standard of care. Arm 2: standard of care and procalcitonin-guided diagnostics and treatment of infection. Primary trial objective: to address whether daily procalcitonin measurements and immediate diagnostic and therapeutic responses to day-to-day changes in procalcitonin can reduce the mortality of critically ill patients. Discussion For the first time, a mortality-endpoint, large-scale randomized controlled trial with a biomarker-guided strategy compared to the best standard of care is being conducted in an intensive care setting. Results will, with high statistical power, answer the question: Can the survival

  18. Calculation of the average radiological detriment of two samples from a breast screening programme

    International Nuclear Information System (INIS)

    The Breast Cancer Screening Programme of the Comunidad Valenciana started in 1992. The programme is oriented to asymptomatic women between 45 and 65 years old, with two mammograms of each breast at the first participation and a single one in later interventions. Between November 2000 and March 2001, a first sample of 100 women's records was extracted for all units of the programme. The data extracted in each sample were the kV voltage, the X-ray tube load, the breast thickness and the age of the woman exposed, used directly in the dose and detriment calculations. By means of the MCNP-4B code and according to the European Protocol for the quality control of the physical and technical aspects of mammography screening, the average total and glandular doses were calculated and later compared

  19. Effect of mesh grid size on the accuracy of deterministic VVER-1000 core calculations

    International Nuclear Information System (INIS)

    Research highlights: → Accuracy of changing mesh grid size in deterministic core calculations was investigated. → WIMS and CITATION codes were used in the investigation. → The best results belong to higher numbers of mesh points in radial and axial directions of the core. - Abstract: Numerical solutions based on finite-difference method require the domain in the problem to be divided into a number of nodes in the form of triangles, rectangular, and so on. To apply the finite-difference method in reactor physics for solving the diffusion equation with satisfactory accuracy, the distance between adjacent mesh-points should be small in comparison with a neutron mean free path. In this regard the effect of number of mesh points on the accuracy and computation time have been investigated using the VVER-1000 reactor of Bushehr NPP as an example, and utilizing WIMS and CITATION codes. The best results obtained in this study belong to meshing models with higher numbers of mesh-points in both radial and axial directions of the reactor core.

  20. Code BETAL for the calculation of alpha/beta activities in environmental samples

    International Nuclear Information System (INIS)

    A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of results of total alpha-beta activity measurements in environmental samples. This code performs the calculations necessary to transform the activities measured as total counts into pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. Furthermore, it estimates the standard deviation of the result and calculates the lower limit of detection for each measurement. The code works interactively through a screen-operator dialogue, requesting the data necessary to perform the activity calculation in each case via screen prompts. The code can be executed from any screen-and-keyboard terminal of a computer that accepts FORTRAN IV, with a printer connected to that computer. (Author) 5 refs
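
    The kind of bookkeeping the abstract describes (counts to pCi/l with detector efficiency, uncertainty, and lower limit of detection) can be sketched as follows; the function names, the Poisson error propagation and the Currie-style LLD formula are illustrative assumptions, not the BETAL source.

```python
import math

def activity_pci_per_litre(gross_counts, background_counts, count_time_s,
                           efficiency, volume_litre):
    """Convert total counts to activity concentration in pCi/l.

    A sketch of the kind of calculation the abstract describes; the exact
    BETAL formulas are not reproduced here. 1 pCi = 0.037 Bq.
    """
    net_rate = (gross_counts - background_counts) / count_time_s      # counts/s
    activity_bq = net_rate / efficiency                               # Bq in the counted aliquot
    return activity_bq / 0.037 / volume_litre                         # pCi/l

def standard_deviation_pci_per_litre(gross_counts, background_counts, count_time_s,
                                     efficiency, volume_litre):
    """Counting-statistics (Poisson) uncertainty propagated to pCi/l."""
    sigma_counts = math.sqrt(gross_counts + background_counts)
    sigma_rate = sigma_counts / count_time_s
    return sigma_rate / efficiency / 0.037 / volume_litre

def lower_limit_of_detection(background_counts, count_time_s, efficiency, volume_litre):
    """Currie-style LLD (an assumption, not necessarily the BETAL definition)."""
    ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return ld_counts / count_time_s / efficiency / 0.037 / volume_litre
```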

  1. Calculating henry adsorption constants of molecular hydrogen at 77 K on alumophosphate zeolites with different microchannel sizes

    Science.gov (United States)

    Grenev, I. V.; Gavrilov, V. Yu.

    2014-01-01

    Adsorption isotherms of molecular hydrogen are measured at 77 K in a series of AlPO alumophosphate zeolites with different microchannel sizes. The potential of the intermolecular interaction of H2 is calculated within the model of a cylindrical channel of variable size. Henry constants are calculated for this model for arbitrary orientations of the adsorbate molecules in microchannels. The experimental and calculated values of the Henry adsorption constant of H2 are compared at 77 K on AlPO zeolites. The constants of intermolecular interaction are determined for the H2-AlPO system.
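
    A minimal sketch of the kind of calculation described: a dimensionless Henry constant obtained as the Boltzmann-weighted average of an adsorbate-wall potential over the cross-section of a cylindrical channel. The Lennard-Jones-type wall potential and its parameters are illustrative assumptions, not the interaction model of the cited work, and the orientational averaging of H2 is omitted.

```python
import numpy as np

# Sketch: dimensionless Henry constant for a structureless cylindrical channel,
#     K_H ~ (1/A) * integral( exp(-U(r)/kT) * 2*pi*r dr )
# The LJ-like wall potential and its parameters below are illustrative assumptions.

K_B = 1.380649e-23  # J/K

def wall_potential(r, radius, epsilon, sigma):
    """Illustrative LJ-type interaction with the nearest channel wall."""
    d = np.clip(radius - r, 1e-12, None)      # distance to the wall
    return 4.0 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6)

def henry_constant(radius, epsilon, sigma, temperature=77.0, n=20000):
    """Dimensionless Henry constant by radial quadrature (trapezoid rule)."""
    r = np.linspace(0.0, radius * (1.0 - 1e-6), n)
    boltzmann = np.exp(-wall_potential(r, radius, epsilon, sigma) / (K_B * temperature))
    area = np.pi * radius ** 2
    return np.trapz(boltzmann * 2.0 * np.pi * r, r) / area

if __name__ == "__main__":
    # Compare channels of different size (radii in metres), all values hypothetical.
    for radius_nm in (0.35, 0.45, 0.60):
        kh = henry_constant(radius_nm * 1e-9, epsilon=60 * K_B, sigma=0.29e-9)
        print(f"R = {radius_nm:.2f} nm  ->  K_H (dimensionless) = {kh:.3g}")
```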

  2. The effect of sample size and disease prevalence on supervised machine learning of narrative data.

    OpenAIRE

    McKnight, Lawrence K.; Wilcox, Adam; Hripcsak, George

    2002-01-01

    This paper examines the independent effects of outcome prevalence and training sample sizes on inductive learning performance. We trained 3 inductive learning algorithms (MC4, IB, and Naïve-Bayes) on 60 simulated datasets of parsed radiology text reports labeled with 6 disease states. Data sets were constructed to define positive outcome states at prevalence rates of 1, 5, 10, 25, and 50% in training set sizes of 200 and 2,000 cases. We found that the effect of outcome prevalence is signific...

  3. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    different sampling units on species richness estimations. 2.  Estimated species richness scores depended both on the estimator considered and on the grain size used to aggregate data. However, several estimators (ACE, Chao1, Jackknife1 and 2 and Bootstrap) were precise in spite of grain variations. Weibull...... that species richness estimations coming from small grain sizes can be directly compared and other estimators could give more precise results in those cases. We propose a decision framework based on our results and on the literature to assess which estimator should be used to compare species richness...

  4. Effect of sample size on the fluid flow through a single fractured granitoid

    Institute of Scientific and Technical Information of China (English)

    Kunal Kumar Singh; Devendra Narain Singh; Ranjith Pathegama Gamage

    2016-01-01

    Most of deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures along with the behavior of the fluid flow through them, at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress–permeability relationship and it has been demonstrated that permeability of the fractured rock mass decreases with an increase in sample size. However, most of the researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameter ranging between 38 mm and 45 cm and the diameter-to-length ratio of 1:2 using different experimental methods. Also, the confining stress, σ3, has been considered to be less than 30 MPa and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on “laboratory simulation of flow through single fractured granite” was conducted, in which consistent fluid flow experiments were performed on cylindrical samples of granitoids of two different sizes (38 mm and 54 mm in diameters), containing a “rough walled single fracture”. These experiments were performed under varied confining pressure (σ3 = 5–40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff., and Q decreases with an increase in σeff.. Also, the effects of sample size and fracture roughness do not persist when σeff. ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory scale investigations to in-situ scale and

  5. Calculation of HPGe efficiency for environmental samples: comparison of EFFTRAN and GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Nikolic, Jelena, E-mail: jnikolic@vinca.rs [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia); Vidmar, Tim [SCK.CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400 Mol (Belgium); Jokovic, Dejan [University of Belgrade, Institute for Physics, Pregrevica 18, Belgrade (Serbia); Rajacic, Milica; Todorovic, Dragana [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia)

    2014-11-01

    Determination of full energy peak efficiency is one of the most important tasks that have to be performed before gamma spectrometry of environmental samples. Many methods, including measurement of specific reference materials, Monte Carlo simulations, efficiency transfer and semi empirical calculations, were developed in order to complete this task. Monte Carlo simulation, based on GEANT4 simulation package and EFFTRAN efficiency transfer software are applied for the efficiency calibration of three detectors, readily used in the Environment and Radiation Protection Laboratory of Institute for Nuclear Sciences Vinca, for measurement of environmental samples. Efficiencies were calculated for water, soil and aerosol samples. The aim of this paper is to perform efficiency calculations for HPGe detectors using both GEANT4 simulation and EFFTRAN efficiency transfer software and to compare obtained results with the experimental results. This comparison should show how the two methods agree with experimentally obtained efficiencies of our measurement system and in which part of the spectrum do the discrepancies appear. The detailed knowledge of accuracy and precision of both methods should enable us to choose an appropriate method for each situation that is presented in our and other laboratories on a daily basis.

  6. Applicability of the cross section adjustment method based on random sampling technique for burnup calculation

    International Nuclear Information System (INIS)

    Applicability of the cross section adjustment method based on random sampling (RS) technique to burnup calculations is investigated. The cross section adjustment method is a technique for reduction of prediction uncertainties in reactor core analysis and has been widely applied to fast reactors. As a practical method, the cross section adjustment method based on RS technique is newly developed for application to light water reactors (LWRs). In this method, covariance among cross sections and neutronics parameters are statistically estimated by the RS technique and cross sections are adjusted without calculation of sensitivity coefficients of neutronics parameters, which are necessary in the conventional cross section adjustment method. Since sensitivity coefficients are not used, the RS-based method is expected to be practically applied to LWR core analysis, in which considerable computational costs are required for estimation of sensitivity coefficients. Through a simple pin-cell burnup calculation, applicability of the present method to burnup calculations is investigated. The calculation results indicate that the present method can adequately adjust cross sections including burnup characteristics. (author)
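
    A toy sketch of the random-sampling idea described above: cross sections are perturbed according to their prior covariance, a (stubbed) core calculation is run for each sample, the cross-section/parameter covariances are estimated statistically, and a GLS-style update is applied without explicit sensitivity coefficients. The dimensions, the linear stub model and the measurement values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and prior (assumptions for illustration only).
n_xs, n_par, n_samples = 8, 2, 2000
prior_mean = np.ones(n_xs)
prior_cov = 0.02 ** 2 * np.eye(n_xs)          # 2 % standard deviation, uncorrelated

def core_model(xs):
    """Stub for the neutronics calculation (e.g., k-eff, burnup reactivity swing)."""
    w = np.array([[0.3, -0.1, 0.2, 0.05, 0.0, 0.15, -0.05, 0.1],
                  [0.1,  0.2, 0.0, 0.10, 0.3, 0.00,  0.05, 0.1]])
    return w @ xs

# 1) Random sampling of cross sections from the prior covariance.
xs_samples = rng.multivariate_normal(prior_mean, prior_cov, size=n_samples)
par_samples = np.array([core_model(x) for x in xs_samples])

# 2) Statistical estimation of covariances (no sensitivity coefficients needed).
cov_xp = np.cov(xs_samples.T, par_samples.T)[:n_xs, n_xs:]   # cross-covariance
cov_pp = np.cov(par_samples.T)                                # parameter covariance

# 3) GLS-style adjustment against (hypothetical) measured parameters.
measured = core_model(prior_mean) + np.array([0.004, -0.002])
meas_cov = 0.001 ** 2 * np.eye(n_par)
gain = cov_xp @ np.linalg.inv(cov_pp + meas_cov)
adjusted_xs = prior_mean + gain @ (measured - par_samples.mean(axis=0))
print(adjusted_xs)
```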

  7. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  8. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  9. Sample size for estimating the mean concentration of organisms in ballast water.

    Science.gov (United States)

    Costa, Eliardo G; Lopes, Rubens M; Singer, Julio M

    2016-09-15

    We consider the computation of sample sizes for estimating the mean concentration of organisms in ballast water. Given the possible heterogeneity of their distribution in the tank, we adopt a negative binomial model to obtain confidence intervals for the mean concentration. We show that the results obtained by Chen and Chen (2012) in a different set-up hold for the proposed model and use them to develop algorithms to compute sample sizes both in cases where the mean concentration is known to lie in some bounded interval or where there is no information about its range. We also construct simple diagrams that may be easily employed to decide for compliance with the D-2 regulation of the International Maritime Organization (IMO). PMID:27266648
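
    A minimal sketch of the sample-size arithmetic under the stated model, assuming a normal approximation to the negative binomial mean with Var(X) = mu + mu^2/k; this is not the algorithm of the cited paper, and the example values are hypothetical.

```python
from statistics import NormalDist
import math

def sample_size_negbin_mean(mu, k, half_width, conf=0.95):
    """Smallest n such that the normal-approximation CI for the mean of a
    negative binomial sample (Var = mu + mu**2 / k) has the requested
    half-width. A sketch under stated assumptions, not the algorithm of
    the cited paper."""
    z = NormalDist().inv_cdf(0.5 + conf / 2.0)
    var = mu + mu ** 2 / k
    return math.ceil((z / half_width) ** 2 * var)

# Example: mean of 10 organisms per m^3, dispersion k = 2, and a CI
# half-width of 3 organisms per m^3 (all values hypothetical).
print(sample_size_negbin_mean(mu=10.0, k=2.0, half_width=3.0))
```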

  10. Grain size analysis and high frequency electrical properties of Apollo 15 and 16 samples

    Science.gov (United States)

    Gold, T.; Bilson, E.; Yerbury, M.

    1973-01-01

    The particle size distribution of eleven surface fines samples collected by Apollo 15 and 16 was determined by the method of measuring the sedimentation rate in a column of water. The fact that the grain size distribution in the core samples shows significant differences within a few centimeters variation of depth is important for the understanding of the surface transportation processes which are responsible for the deposition of thin layers of different physical and/or chemical origin. The variation with density of the absorption length is plotted, and results would indicate that for the case of meter wavelength radar waves, reflections from depths of more than 100 meters generally contribute significantly to the radar echoes obtained.

  11. Calculation of reactor kinetics parameters βeff and Λ with Monte Carlo differential operator sampling

    International Nuclear Information System (INIS)

    The methods to calculate the kinetics parameters βeff (effective delayed neutron fraction) and Λ (neutron generation time) with the differential operator sampling have been reviewed. The comparison of the results obtained with the differential operator sampling and iterated fission probability approaches has been performed. It is shown that the differential operator sampling approach gives the same results as the iterated fission probability approach within the statistical uncertainty. In addition, the prediction accuracy of the evaluated nuclear data library JENDL-4.0 for the measured βeff/Λ and βeff values is also examined. It is shown that JENDL-4.0 gives a good prediction except for the uranium-233 systems. The present results imply the need for revisiting the uranium-233 nuclear data evaluation and performing the detailed sensitivity analysis. (author)

  12. An Analytical Calculation Of Gamma Ray Self Attenuation Correction In Bulk Samples

    International Nuclear Information System (INIS)

    In this study, a modified point-like detector model was assumed and a computer program was developed to support the computation of the self-absorption correction factor (CS) by Debertin's method for samples in cylindrical geometries. The input data were the sample dimensions, density and mass attenuation coefficient. Using this computer program, the self-absorption correction factor CS(E,ρ) was obtained for the applied geometries; an SiO2 matrix was used for routine measurements because the SiO2 matrix is widely encountered in environmental spectrometry. The self-absorption correction factors calculated with the suggested model were in fair agreement with experimental values for samples of other matrices.
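
    For orientation, a common slab-style approximation of the self-absorption correction is sketched below, CS = x/(1 - exp(-x)) with x = (mu/rho) * rho * t; Debertin's treatment of cylindrical geometries used in the cited work is more involved and is not reproduced here, and the example attenuation data are approximate.

```python
import math

def self_absorption_correction(mu_mass_cm2_per_g, density_g_per_cm3, thickness_cm):
    """Slab-approximation self-absorption correction factor
    CS = x / (1 - exp(-x)) with x = mu_m * rho * t.

    This is a common simplified form, not Debertin's cylindrical-geometry
    treatment used in the cited work."""
    x = mu_mass_cm2_per_g * density_g_per_cm3 * thickness_cm
    if x == 0.0:
        return 1.0
    return x / (1.0 - math.exp(-x))

# Example: SiO2 matrix, mu/rho of roughly 0.077 cm^2/g near 662 keV (approximate),
# packed-powder density of 1.6 g/cm^3 (assumed), 4 cm fill height.
print(self_absorption_correction(0.077, 1.6, 4.0))
```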

  13. Quantification of Errors in Ordinal Outcome Scales Using Shannon Entropy: Effect on Sample Size Calculations

    OpenAIRE

    Mandava, Pitchaiah; Krumpelman, Chase S.; Shah, Jharna N.; White, Donna L.; Kent, Thomas A.

    2013-01-01

    Objective Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, the most commonly used scale, the modified Rankin Score (mRS), a range of scores (“Shift”) is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by ...

  14. Empirical Power and Sample Size Calculations for Cluster-Randomized and Cluster-Randomized Crossover Studies

    OpenAIRE

    Reich, Nicholas G.; Myers, Jessica A.; Obeng, Daniel; Milstone, Aaron M.; Perl, Trish M.

    2012-01-01

    In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting and implementation have not been established for this emergent design. We address one particular aspect of c...

  15. Precision estimates and suggested sample sizes for length-frequency data

    OpenAIRE

    Gerritsen, Hans D.; McGrath, David

    2007-01-01

    For most fisheries applications, the shape of a length-frequency distribution is much more important than its mean length or variance. This makes it difficult to evaluate at which point a sample size is adequate. By estimating the coefficient of variation of the counts in each length class and taking a weighted mean of these, a measure of precision was obtained that takes the precision in all length classes into account. The precision estimates were closely associated with the ratio of the...
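
    A minimal sketch of the precision measure as described: a per-class coefficient of variation combined as a weighted mean. The Poisson approximation for the class CVs and the weighting by class counts are assumptions, not necessarily the estimator of the cited paper.

```python
import numpy as np

def weighted_mean_cv(length_class_counts):
    """Weighted mean coefficient of variation across length classes.

    The CV of each length-class count is approximated from Poisson counting
    variability (sqrt(n)/n = 1/sqrt(n)), and the class CVs are combined as a
    mean weighted by the counts. Both choices are assumptions of this sketch."""
    counts = np.asarray(length_class_counts, dtype=float)
    counts = counts[counts > 0]
    cv = np.sqrt(counts) / counts          # = 1/sqrt(n) per class
    return np.average(cv, weights=counts)

# Example: counts per 1-cm length class from a hypothetical port sample.
sample = [2, 5, 14, 40, 75, 60, 33, 12, 4, 1]
print(f"Weighted mean CV: {weighted_mean_cv(sample):.3f}")
```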

  16. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    OpenAIRE

    Bacchetti, Peter; Steven G Deeks; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows th...

  17. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    OpenAIRE

    Zhihua Wang; Yongbo Zhang; Huimin Fu

    2014-01-01

    Reasonable prediction makes significant practical sense to stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with rolling mechanism is proposed. In the modeling procedure, a new developed AR equation, which can be used to model nonstationary time series, is constructed in each predictio...
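
    A minimal sketch of 1-step-ahead forecasting with a rolling mechanism: an AR model is refitted on the most recent window before each prediction and rolled forward. Ordinary least squares on a plain AR(p) stands in for the paper's modified AR equation, which is an assumption of this sketch.

```python
import numpy as np

def rolling_ar_one_step(series, order=2, window=12):
    """One-step-ahead AR(order) predictions using a rolling window.

    The AR coefficients are re-estimated by ordinary least squares on the
    most recent `window` observations before each prediction. This is a
    generic sketch of the rolling idea, not the modified AR equation of the
    cited paper."""
    series = np.asarray(series, dtype=float)
    preds = []
    for t in range(window, len(series)):
        seg = series[t - window:t]
        # Design matrix of lagged values within the window (lag 1 .. lag order).
        X = np.column_stack([seg[order - j - 1:len(seg) - j - 1] for j in range(order)])
        X = np.column_stack([np.ones(len(X)), X])
        y = seg[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        last_lags = seg[-order:][::-1]             # most recent value first
        preds.append(coef[0] + coef[1:] @ last_lags)
    return np.array(preds)

# Example on a short, noisy series (small sample size).
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0.2, 1.0, size=30))
print(rolling_ar_one_step(x, order=2, window=12)[:5])
```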

  18. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
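
    The Monte Carlo strategy mentioned in the abstract can be sketched as follows for a one-sided, single-step Bonferroni procedure on equicorrelated z-statistics; the effect sizes, correlation structure and testing procedure are illustrative assumptions, and the exact formulas of the cited paper are implemented in the rPowerSampleSize package.

```python
import numpy as np
from statistics import NormalDist

def r_power_single_step(n, effects, rho, r, alpha=0.05, n_sim=20000, seed=0):
    """Monte Carlo estimate of the r-power: the probability of rejecting at
    least r of m (false) null hypotheses with a one-sided, single-step
    Bonferroni procedure on equicorrelated z-statistics.

    Illustrative sketch only; not the procedure of the cited paper."""
    rng = np.random.default_rng(seed)
    effects = np.asarray(effects, dtype=float)
    m = len(effects)
    crit = NormalDist().inv_cdf(1.0 - alpha / m)          # Bonferroni critical value
    cov = rho * np.ones((m, m)) + (1.0 - rho) * np.eye(m) # equicorrelation
    noncentral = np.sqrt(n / 2.0) * effects               # two-arm z-test shift
    z = rng.multivariate_normal(noncentral, cov, size=n_sim)
    rejections = (z > crit).sum(axis=1)
    return (rejections >= r).mean()

# Example: m = 3 endpoints, standardized effects 0.4/0.35/0.3, correlation 0.3,
# require r = 2 rejections, n = 150 per arm (all values hypothetical).
print(r_power_single_step(n=150, effects=[0.4, 0.35, 0.3], rho=0.3, r=2))
```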

  19. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  20. Sub-sampling genetic data to estimate black bear population size: A case study

    Science.gov (United States)

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  1. Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes

    Science.gov (United States)

    Crowell, J.; Gosnold, W. D.

    2012-12-01

    Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship Ksample=(Kstandard*(ΔT1+ΔT2)/2)/(ΔTsample). Sometimes samples are not large enough or of correct proportions to match the surface of the heat sink/source, however using the equations presented here the thermal conductivity of these samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratories stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of -20C to 150C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks and have a surface area of 1,772 mm2 (2.74 in2). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample Ksample=(Kstandard*Astandard*Thsample*(ΔT1+ΔT3))/(ΔTsample*Asample*2*Thstandard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. Measuring samples with surfaces smaller than 50% of the heat sink/source surface
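
    The adjusted relation quoted in the abstract can be applied directly; a sketch with illustrative numbers is given below, where the variable names are ours and ΔT1 and ΔT3 denote the temperature drops across the two standard layers.

```python
def thermal_conductivity_sample(k_standard,
                                area_standard, area_sample,
                                th_standard, th_sample,
                                dT1, dT3, dT_sample):
    """Thermal conductivity of an undersized sample on a divided bar, using
    the area- and thickness-corrected relation quoted in the abstract:

        K_sample = K_std * A_std * Th_sample * (dT1 + dT3)
                   / (dT_sample * A_sample * 2 * Th_std)

    dT1 and dT3 are the temperature drops across the two standard
    (polycarbonate) layers; units cancel as long as they are consistent."""
    return (k_standard * area_standard * th_sample * (dT1 + dT3)) / (
        dT_sample * area_sample * 2.0 * th_standard)

# Example with illustrative numbers: polycarbonate standards (K ~ 0.2 W/m/K,
# 6 mm thick, 1772 mm^2 surface) and a limestone disc covering half the bar's
# surface area; the temperature drops are hypothetical readings.
print(thermal_conductivity_sample(k_standard=0.2,
                                  area_standard=1772e-6, area_sample=886e-6,
                                  th_standard=6e-3, th_sample=20e-3,
                                  dT1=3.2, dT3=3.0, dT_sample=1.9))
```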

  2. Determination of appropriate grid dimension and sampling plot size for assessment of woody species diversity in Zagros Forest, Iran

    OpenAIRE

    ALI ASGHAR ZOHREVANDI; HASSAN POURBABAEI; REZA AKHAVAN; AMIR ESLAM BONYAD

    2016-01-01

    Abstract. Zohrevandi AA, Pourbabaei H, Akhavan R, Bonyad AE. 2015. Determination of appropriate grid dimension and sampling plot size for assessment of woody species diversity in Zagros Forest, Iran. Biodiversitas 17: 24-30. This research was conducted to determine the most suitable grid (dimensions for sampling) and sampling plot size for assessment of woody species diversity in protected Zagros forests, west of Iran. Sampling was carried out using circular sample plots with areas of 1000 m2...

  3. Paper coatings with multi-scale roughness evaluated at different sampling sizes

    International Nuclear Information System (INIS)

    Papers have a complex hierarchical structure and the end-user functionalities such as hydrophobicity are controlled by a finishing layer. The application of an organic nanoparticle coating and drying of the aqueous dispersion results in an unique surface morphology with microscale domains that are internally patterned with nanoparticles. Better understanding of the multi-scale surface roughness patterns is obtained by monitoring the topography with non-contact profilometry (NCP) and atomic force microscopy (AFM) at different sampling areas ranging from 2000 μm x 2000 μm to 0.5 μm x 0.5 μm. The statistical roughness parameters are uniquely related to each other over the different measuring techniques and sampling sizes, as they are purely statistically determined. However, they cannot be directly extrapolated over the different sampling areas as they represent transitions at the nano-, micro-to-nano and microscale level. Therefore, the spatial roughness parameters including the correlation length and the specific frequency bandwidth should be taken into account for each measurement, which both allow for direct correlation of roughness data at different sampling sizes.
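
    Two of the statistical and spatial roughness parameters mentioned (RMS roughness and the lateral correlation length) can be sketched for a 1-D height profile as follows; the 1/e definition of the correlation length and the synthetic profile are assumptions for illustration.

```python
import numpy as np

def rms_roughness(profile):
    """Root-mean-square roughness (Rq/Sq) of a height profile."""
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()
    return np.sqrt(np.mean(z ** 2))

def correlation_length(profile, dx):
    """Lateral correlation length: lag at which the normalized height
    autocorrelation first drops below 1/e (a common convention, assumed here)."""
    z = np.asarray(profile, dtype=float) - np.mean(profile)
    acf = np.correlate(z, z, mode="full")[len(z) - 1:]
    acf = acf / acf[0]
    below = np.where(acf < 1.0 / np.e)[0]
    return below[0] * dx if below.size else np.nan

# Example: synthetic profile combining micro- and nanoscale components,
# sampled every 10 nm (illustrative of the multi-scale surfaces discussed).
x = np.arange(0, 50_000, 10)                         # nm
rng = np.random.default_rng(2)
z = 80 * np.sin(2 * np.pi * x / 20_000) + 5 * rng.normal(size=x.size)
print(rms_roughness(z), correlation_length(z, dx=10))
```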

  4. Enzymatic Kinetic Isotope Effects from First-Principles Path Sampling Calculations.

    Science.gov (United States)

    Varga, Matthew J; Schwartz, Steven D

    2016-04-12

    In this study, we develop and test a method to determine the rate of particle transfer and kinetic isotope effects in enzymatic reactions, specifically yeast alcohol dehydrogenase (YADH), from first-principles. Transition path sampling (TPS) and normal mode centroid dynamics (CMD) are used to simulate these enzymatic reactions without knowledge of their reaction coordinates and with the inclusion of quantum effects, such as zero-point energy and tunneling, on the transferring particle. Though previous studies have used TPS to calculate reaction rate constants in various model and real systems, it has not been applied to a system as large as YADH. The calculated primary H/D kinetic isotope effect agrees with previously reported experimental results, within experimental error. The kinetic isotope effects calculated with this method correspond to the kinetic isotope effect of the transfer event itself. The results reported here show that the kinetic isotope effects calculated from first-principles, purely for barrier passage, can be used to predict experimental kinetic isotope effects in enzymatic systems. PMID:26949835

  5. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ~20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.
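
    A minimal sketch of the volume-averaging step is given below for the simplest case of a two-gamma cascade, using the textbook first-order summing-loss correction C = 1/(1 - εt); the grid efficiencies are placeholders, and the full treatment in the report accounts for the complete decay schemes.

```python
import numpy as np

def cascade_summing_correction(eps_total_coincident):
    """First-order summing-loss correction for a gamma ray whose cascade
    partner has total efficiency eps_total_coincident:
        C = 1 / (1 - eps_total)."""
    return 1.0 / (1.0 - np.asarray(eps_total_coincident, dtype=float))

def volume_averaged_correction(eps_total_grid, weights=None):
    """Average the position-dependent corrections over the sample volume.

    eps_total_grid: total efficiencies of the coincident gamma at the grid
    points (e.g., the 30 positions mentioned in the abstract); weights can
    carry the volume element or emission weight of each point. The values
    used below are placeholders, not results from the report."""
    corrections = cascade_summing_correction(eps_total_grid)
    return np.average(corrections, weights=weights)

# Placeholder total efficiencies at a few positions in the sample volume.
eps_total = [0.082, 0.075, 0.068, 0.060, 0.055]
print(volume_averaged_correction(eps_total))
```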

  6. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    Science.gov (United States)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in Southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.

  7. Oscillatory reaction cross sections caused by normal mode sampling in quasiclassical trajectory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, Tibor; Vikár, Anna; Lendvay, György, E-mail: lendvay.gyorgy@ttk.mta.hu [Institute of Materials and Environmental Chemistry, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok körútja 2., H-1117 Budapest (Hungary)

    2016-01-07

    The quasiclassical trajectory (QCT) method is an efficient and important tool for studying the dynamics of bimolecular reactions. In this method, the motion of the atoms is simulated classically, and the only quantum effect considered is that the initial vibrational states of reactant molecules are semiclassically quantized. A sensible expectation is that the initial ensemble of classical molecular states generated this way should be stationary, similarly to the quantum state it is supposed to represent. The most widely used method for sampling the vibrational phase space of polyatomic molecules is based on the normal mode approximation. In the present work, it is demonstrated that normal mode sampling provides a nonstationary ensemble even for a simple molecule like methane, because real potential energy surfaces are anharmonic in the reactant domain. The consequences were investigated for reaction CH4 + H → CH3 + H2 and its various isotopologs and were found to be dramatic. Reaction probabilities and cross sections obtained from QCT calculations oscillate periodically as a function of the initial distance of the colliding partners and the excitation functions are erratic. The reason is that in the nonstationary ensemble of initial states, the mean bond length of the breaking C–H bond oscillates in time with the frequency of the symmetric stretch mode. We propose a simple method, one-period averaging, in which reactivity parameters are calculated by averaging over an entire period of the mean C–H bond length oscillation, which removes the observed artifacts and provides the physically most reasonable reaction probabilities and cross sections when the initial conditions for QCT calculations are generated by normal mode sampling.

  8. Oscillatory reaction cross sections caused by normal mode sampling in quasiclassical trajectory calculations.

    Science.gov (United States)

    Nagy, Tibor; Vikár, Anna; Lendvay, György

    2016-01-01

    The quasiclassical trajectory (QCT) method is an efficient and important tool for studying the dynamics of bimolecular reactions. In this method, the motion of the atoms is simulated classically, and the only quantum effect considered is that the initial vibrational states of reactant molecules are semiclassically quantized. A sensible expectation is that the initial ensemble of classical molecular states generated this way should be stationary, similarly to the quantum state it is supposed to represent. The most widely used method for sampling the vibrational phase space of polyatomic molecules is based on the normal mode approximation. In the present work, it is demonstrated that normal mode sampling provides a nonstationary ensemble even for a simple molecule like methane, because real potential energy surfaces are anharmonic in the reactant domain. The consequences were investigated for reaction CH4 + H → CH3 + H2 and its various isotopologs and were found to be dramatic. Reaction probabilities and cross sections obtained from QCT calculations oscillate periodically as a function of the initial distance of the colliding partners and the excitation functions are erratic. The reason is that in the nonstationary ensemble of initial states, the mean bond length of the breaking C-H bond oscillates in time with the frequency of the symmetric stretch mode. We propose a simple method, one-period averaging, in which reactivity parameters are calculated by averaging over an entire period of the mean C-H bond length oscillation, which removes the observed artifacts and provides the physically most reasonable reaction probabilities and cross sections when the initial conditions for QCT calculations are generated by normal mode sampling. PMID:26747798

  9. Oscillatory reaction cross sections caused by normal mode sampling in quasiclassical trajectory calculations

    International Nuclear Information System (INIS)

    The quasiclassical trajectory (QCT) method is an efficient and important tool for studying the dynamics of bimolecular reactions. In this method, the motion of the atoms is simulated classically, and the only quantum effect considered is that the initial vibrational states of reactant molecules are semiclassically quantized. A sensible expectation is that the initial ensemble of classical molecular states generated this way should be stationary, similarly to the quantum state it is supposed to represent. The most widely used method for sampling the vibrational phase space of polyatomic molecules is based on the normal mode approximation. In the present work, it is demonstrated that normal mode sampling provides a nonstationary ensemble even for a simple molecule like methane, because real potential energy surfaces are anharmonic in the reactant domain. The consequences were investigated for reaction CH4 + H → CH3 + H2 and its various isotopologs and were found to be dramatic. Reaction probabilities and cross sections obtained from QCT calculations oscillate periodically as a function of the initial distance of the colliding partners and the excitation functions are erratic. The reason is that in the nonstationary ensemble of initial states, the mean bond length of the breaking C–H bond oscillates in time with the frequency of the symmetric stretch mode. We propose a simple method, one-period averaging, in which reactivity parameters are calculated by averaging over an entire period of the mean C–H bond length oscillation, which removes the observed artifacts and provides the physically most reasonable reaction probabilities and cross sections when the initial conditions for QCT calculations are generated by normal mode sampling

  10. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F. T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.

  11. Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations

    OpenAIRE

    Guillermo Macbeth; Eugenia Razumiejczyk; Rubén Daniel Ledesma

    2011-01-01

    The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-values interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calcul...

  12. Effect of sample size on the fluid flow through a single fractured granitoid

    Directory of Open Access Journals (Sweden)

    Kunal Kumar Singh

    2016-06-01

    Full Text Available Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, and multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures, along with the behavior of the fluid flow through them at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress–permeability relationship, and it has been demonstrated that the permeability of the fractured rock mass decreases with an increase in sample size. However, most researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameters ranging between 38 mm and 45 cm and a diameter-to-length ratio of 1:2, using different experimental methods. Also, the confining stress, σ3, has been considered to be less than 30 MPa and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on “laboratory simulation of flow through single fractured granite” was conducted, in which consistent fluid flow experiments were performed on cylindrical samples of granitoids of two different sizes (38 mm and 54 mm in diameter), each containing a “rough walled single fracture”. These experiments were performed under varied confining pressure (σ3 = 5–40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff., and that Q decreases with increasing σeff. Also, the effects of sample size and fracture roughness do not persist when σeff. ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory

  13. GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    OpenAIRE

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.

    2011-01-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework. Dosimetric evaluations against Monte Carlo dose calculations are conducted on 10 IMRT treatment plans (5 head-and-neck cases and 5 lung cases). For all cases, there i...

  14. Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force

    Energy Technology Data Exchange (ETDEWEB)

    Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R

    2008-05-22

    We present a rapid and robust size-based separation method for high throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies including variations of the PZT driving frequencies and voltages as well as various particle sizes and compressibilities and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2 with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.

  15. Separability tests for high-dimensional, low sample size multivariate repeated measures data.

    Science.gov (United States)

    Simpson, Sean L; Edwards, Lloyd J; Styner, Martin A; Muller, Keith E

    2014-01-01

    Longitudinal imaging studies have moved to the forefront of medical research due to their ability to characterize spatio-temporal features of biological structures across the lifespan. Valid inference in longitudinal imaging requires enough flexibility of the covariance model to allow reasonable fidelity to the true pattern. On the other hand, the existence of computable estimates demands a parsimonious parameterization of the covariance structure. Separable (Kronecker product) covariance models provide one such parameterization in which the spatial and temporal covariances are modeled separately. However, evaluating the validity of this parameterization in high dimensions remains a challenge. Here we provide a scientifically informed approach to assessing the adequacy of separable (Kronecker product) covariance models when the number of observations is large relative to the number of independent sampling units (sample size). We address both the general case, in which unstructured matrices are considered for each covariance model, and the structured case, which assumes a particular structure for each model. For the structured case, we focus on the situation where the within subject correlation is believed to decrease exponentially in time and space as is common in longitudinal imaging studies. However, the provided framework equally applies to all covariance patterns used within the more general multivariate repeated measures context. Our approach provides useful guidance for high dimension, low sample size data that preclude using standard likelihood based tests. Longitudinal medical imaging data of caudate morphology in schizophrenia illustrate the approach's appeal. PMID:25342869

  16. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recently derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique because of its improved forecasting accuracy, its applicability in the case of limited and unstable data situations, and its small computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
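
    A minimal NumPy sketch of the rolling mechanism (not the authors' exact AR formulation): an AR(p) model is refitted on the current data window, a 1-step-ahead forecast is made, the forecast is appended and the oldest value dropped, and the procedure repeats. The example series and the order p = 2 are hypothetical.

        import numpy as np

        def fit_ar(series, p):
            """Least-squares fit of an AR(p) model; returns [intercept, a_1, ..., a_p]."""
            y = series[p:]
            lags = np.column_stack([series[p - k:len(series) - k] for k in range(1, p + 1)])
            X = np.column_stack([np.ones(len(y)), lags])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta

        def rolling_forecast(series, p, horizon):
            """1-step-ahead forecasts with a rolling window of fixed length."""
            window = list(series)
            predictions = []
            for _ in range(horizon):
                beta = fit_ar(np.asarray(window), p)
                recent = window[-p:][::-1]                 # y_{t-1}, ..., y_{t-p}
                predictions.append(beta[0] + float(np.dot(beta[1:], recent)))
                window.append(predictions[-1])             # add the newest prediction ...
                window.pop(0)                              # ... and delete the oldest value
            return predictions

        # Hypothetical short, unstable series (e.g., building settlement readings).
        data = [2.1, 2.4, 2.9, 3.1, 3.6, 3.8, 4.1, 4.5, 4.6, 5.0]
        print(rolling_forecast(data, p=2, horizon=3))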

  17. Calculation of depleted uranium concentration in dental fillings samples using the nuclear track detector CR-39

    International Nuclear Information System (INIS)

    The purpose of this study is to determine the concentration of depleted uranium in dental filling samples, which were obtained from hospitals and dental material suppliers in Iraq. Eight samples of two different filling types were examined: lead-based (amalgam) fillings and composite (plastic) fillings. The concentrations of depleted uranium were determined in these samples using the nuclear track detector CR-39, by recording the tracks left by the fission fragments produced in the reaction 238U(n,f). The samples were bombarded by neutrons emitted from an (241Am-Be) neutron source with a flux of (105 n cm-2 s-1). The etching time needed to reveal the fission-fragment tracks was 5 hours, using a NaOH solution of 6.25 N normality at a temperature of 60 oC. The concentrations of depleted uranium were calculated by comparison with standard samples. The results obtained showed that the weighted average uranium concentration in the filling samples was (5.54 ± 1.05) ppm for the lead-based (amalgam) fillings and (5.33 ± 0.6) ppm for the composite (plastic) fillings. The hazard index, the absorbed dose and the effective dose for these concentrations were determined. The effective doses to the bone surface and skin (the areas most affected by these dental restorations) were (0.56 mSv/y) for the lead-based (amalgam) filling and (0.54 mSv/y) for the composite (plastic) filling. From the results of the study, the highest value, the effective dose of (0.68 mSv/y) for an amalgam filling specimen, is less than the allowable limit for exposure of the general public set by the World Health Organization (WHO), namely (1 mSv/y). (Author)
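
    The comparison-with-standards step reduces to a proportionality between induced fission-track density and uranium concentration. A minimal sketch is given below; the track densities and the standard concentration are hypothetical values, not the study's data.

        # Concentration by comparison with a standard irradiated under the same
        # conditions: C_sample = C_standard * (rho_sample / rho_standard), where
        # rho is the induced fission-track density read from the CR-39 detector.
        rho_standard = 1.8e4    # tracks per cm^2 on the standard (hypothetical)
        c_standard = 10.0       # known uranium concentration of the standard, ppm
        rho_sample = 1.0e4      # tracks per cm^2 on a dental filling sample

        c_sample = c_standard * rho_sample / rho_standard
        print(f"uranium concentration in the sample: {c_sample:.2f} ppm")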

  18. Burnup calculations and chemical analysis of irradiated fuel samples studied in LWR-PROTEUS phase II

    International Nuclear Information System (INIS)

    The isotopic compositions of 5 UO2 samples irradiated in a Swiss PWR power plant, which were investigated in the LWR-PROTEUS Phase II programme, were calculated using the CASMO-4 and BOXER assembly codes. The burnups of the samples range from 50 to 90 MWd/kg. The results for a large number of actinide and fission product nuclides were compared to those of chemical analyses performed using a combination of chromatographic separation and mass spectrometry. Good agreement between calculated and measured concentrations is found for many of the nuclides investigated with both codes. The concentrations of the Pu isotopes are mostly predicted within ±10%, the two codes giving quite different results, except for 242Pu. Relatively significant deviations are found for some isotopes of Cs and Sm, and large discrepancies are observed for Eu and Gd. The overall quality of the predictions by the two codes is comparable, and the deviations from the experimental data do not generally increase with burnup. (authors)
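
    Per nuclide, the code-to-experiment comparison described above amounts to a calculated-to-experimental (C/E) ratio or a percentage deviation; a minimal sketch with hypothetical concentrations follows.

        # Percent deviation of calculated from measured nuclide concentrations (C/E).
        measured   = {"Pu-239": 5.6e-3, "Pu-242": 9.1e-4, "Cs-137": 1.9e-3}  # hypothetical
        calculated = {"Pu-239": 5.8e-3, "Pu-242": 8.7e-4, "Cs-137": 2.0e-3}  # hypothetical

        for nuclide, m in measured.items():
            c = calculated[nuclide]
            print(f"{nuclide}: C/E = {c / m:.3f}  deviation = {100.0 * (c - m) / m:+.1f}%")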

  19. Dealing with varying detection probability, unequal sample sizes and clumped distributions in count data.

    Directory of Open Access Journals (Sweden)

    D Johan Kotze

    Full Text Available Temporal variation in the detectability of a species can bias estimates of relative abundance if not handled correctly. For example, when effort varies in space and/or time it becomes necessary to take variation in detectability into account when data are analyzed. We demonstrate the importance of incorporating seasonality into the analysis of data with unequal sample sizes due to lost traps at a particular density of a species. A case study of count data was simulated using a spring-active carabid beetle. Traps were 'lost' randomly during high beetle activity in high-abundance sites and during low beetle activity in low-abundance sites. Five different models were fitted to datasets with different levels of loss. If sample sizes were unequal and a seasonality variable was not included in models that assumed the number of individuals was log-normally distributed, the models severely under- or overestimated the true effect size. Results did not improve when seasonality and number of trapping days were included in these models as offset terms, but only performed well when the response variable was specified as following a negative binomial distribution. Finally, if seasonal variation of a species is unknown, which is often the case, seasonality can be added as a free factor, resulting in well-performing negative binomial models. Based on these results we recommend the following: (a) add sampling effort (number of trapping days in our example) to the models as an offset term; (b) if precise information is available on seasonal variation in detectability of a study object, add seasonality to the models as an offset term; (c) if information on seasonal variation in detectability is inadequate, add seasonality as a free factor; and (d) specify the response variable of count data as following a negative binomial or over-dispersed Poisson distribution.
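
    Recommendations (a)-(d) can be sketched with the statsmodels GLM interface; the data frame below is simulated and the model is only an illustration of an offset-plus-seasonality negative binomial fit, not the authors' simulation study.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Simulated trap counts with unequal effort (lost traps) and a seasonality covariate.
        rng = np.random.default_rng(1)
        n = 120
        df = pd.DataFrame({
            "count": rng.poisson(3, size=n),
            "abundance": rng.choice(["low", "high"], size=n),
            "season": rng.uniform(0, 1, size=n),          # e.g., scaled day of year
            "trap_days": rng.integers(5, 15, size=n),     # unequal sampling effort
        })

        # Negative binomial GLM: effort enters as an offset, seasonality as a covariate.
        fit = smf.glm(
            "count ~ abundance + season",
            data=df,
            family=sm.families.NegativeBinomial(alpha=1.0),
            offset=np.log(df["trap_days"]),
        ).fit()
        print(fit.summary())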

  20. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

    Energy Technology Data Exchange (ETDEWEB)

    Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)

    2010-01-15

    In this contribution superconducting specimens of YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.

  1. Dependence of diffusion theory results on the mesh size for fast reactor calculations

    International Nuclear Information System (INIS)

    In his investigations of sodium void reactivities for SNEAK-9C2 assemblies, Ganesan detected inconsistencies between the Δk values obtained from successive diffusion calculations and from exact perturbation calculations. Therefore, in this study discretization and rounding errors in neutronic reactor calculations and their effects on numerical results are considered for a well known SNR-300 type benchmark problem as well as for the slightly simplified original problem. (orig./RW)

  2. Trend tests for case-control studies of genetic markers: power, sample size and robustness.

    Science.gov (United States)

    Freidlin, B; Zheng, G; Li, Z; Gastwirth, J L

    2002-01-01

    The Cochran-Armitage trend test is commonly used as a genotype-based test for candidate gene association. Corresponding to each underlying genetic model there is a particular set of scores assigned to the genotypes that maximizes its power. When the variance of the test statistic is known, the formulas for approximate power and associated sample size are readily obtained. In practice, however, the variance of the test statistic needs to be estimated. We present formulas for the required sample size to achieve a prespecified power that account for the need to estimate the variance of the test statistic. When the underlying genetic model is unknown one can incur a substantial loss of power when a test suitable for one mode of inheritance is used where another mode is the true one. Thus, tests having good power properties relative to the optimal tests for each model are useful. These tests are called efficiency robust and we study two of them: the maximin efficiency robust test, which is a linear combination of the standardized optimal tests and has high efficiency, and the MAX test, the maximum of the standardized optimal tests. Simulation results on the robustness of these two tests indicate that the more computationally involved MAX test is preferable. PMID:12145550
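
    A minimal sketch of the genotype trend statistic and the MAX statistic under one common parameterization (scores 0/x/1 with x = 0, 0.5, 1 for the recessive, additive and dominant models); the counts are hypothetical, and the null distribution of MAX, which requires the correlations between the three statistics, is not computed here.

        import numpy as np
        from scipy.stats import norm

        def trend_z(cases, controls, scores):
            """Cochran-Armitage trend statistic for a 2 x 3 genotype table."""
            cases, controls, scores = map(np.asarray, (cases, controls, scores))
            n = cases + controls                          # genotype totals
            N, R = n.sum(), cases.sum()
            xbar = np.dot(scores, n) / N
            u = np.dot(scores, cases) - R * xbar          # observed minus expected score sum
            sigma2 = np.dot(n, (scores - xbar) ** 2) / N  # finite-population score variance
            return u / np.sqrt(R * (N - R) / (N - 1) * sigma2)

        # Hypothetical genotype counts (aa, aA, AA) for cases and controls.
        cases, controls = [30, 55, 15], [45, 45, 10]
        models = {"recessive": [0, 0, 1], "additive": [0, 0.5, 1], "dominant": [0, 1, 1]}
        z = {name: trend_z(cases, controls, s) for name, s in models.items()}
        print({k: round(v, 3) for k, v in z.items()})
        print("MAX statistic:", round(max(abs(v) for v in z.values()), 3))
        print("additive-model two-sided p:", 2 * norm.sf(abs(z["additive"])))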

  3. Particle size distribution of workplace aerosols in manganese alloy smelters applying a personal sampling strategy.

    Science.gov (United States)

    Berlinger, B; Bugge, M D; Ulvestad, B; Kjuus, H; Kandler, K; Ellingsen, D G

    2015-12-01

    Air samples were collected by personal sampling with five-stage Sioutas cascade impactors and respirable cyclones in parallel among tappers and crane operators in two manganese (Mn) alloy smelters in Norway to investigate PM fractions. The mass concentrations of PM collected by using the impactors and the respirable cyclones were critically evaluated by comparing the results of the parallel measurements. The geometric mean (GM) mass concentrations of the respirable fraction and the <10 μm PM fraction were 0.18 and 0.39 mg m(-3), respectively. Particle size distributions were determined using the impactor data in the range from 0 to 10 μm and by stationary measurements using a scanning mobility particle sizer in the range from 10 to 487 nm. On average 50% of the particulate mass in the Mn alloy smelters was in the range from 2.5 to 10 μm, while the rest was distributed between the lower stages of the impactors. On average 15% of the particulate mass was found in the <0.25 μm PM fraction. Comparisons of the mass concentrations of the different PM fractions related to different work tasks or workplaces showed statistically significant differences in many cases; however, the particle size distribution of PM in the fraction <10 μm d(ae) was independent of the plant, furnace or work task. PMID:26498986

  4. What can we learn from studies based on small sample sizes? Comment on Regan, Lakhanpal, and Anguiano (2012).

    Science.gov (United States)

    Johnson, David R; Bachan, Lauren K

    2013-08-01

    In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research. PMID:24340813

  5. [Application of PASS in sample size estimation of non-inferiority, equivalence and superiority design in clinical trials].

    Science.gov (United States)

    Wang, Y Y; Sun, R H

    2016-05-10

    The sample sizes for non-inferiority, equivalence and superiority designs in clinical trials were estimated by using PASS 11 software. The results were compared with those obtained by using SAS to evaluate the practicability and accuracy of PASS 11 software, for the purpose of providing a reference for sample size estimation in clinical trial design. PMID:27188375
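
    For context, a textbook normal-approximation formula for a non-inferiority comparison of two means (1:1 allocation, one-sided alpha) can be written in a few lines; this is a generic illustration of the kind of calculation PASS and SAS perform, not the paper's own computation, and the numbers are hypothetical.

        from math import ceil
        from scipy.stats import norm

        def n_noninferiority(sigma, margin, true_diff=0.0, alpha=0.025, power=0.80):
            """Per-group n for non-inferiority of two means (normal approximation):
            n = 2 * sigma^2 * (z_{1-alpha} + z_{1-beta})^2 / (true_diff + margin)^2."""
            z = norm.ppf(1 - alpha) + norm.ppf(power)
            return ceil(2 * (sigma * z / (true_diff + margin)) ** 2)

        # Hypothetical: SD = 10, non-inferiority margin = 5, treatments truly equal.
        print(n_noninferiority(sigma=10, margin=5))    # about 63 per group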

  6. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However,…

  7. Review of Sample Size for Structural Equation Models in Second Language Testing and Learning Research: A Monte Carlo Approach

    Science.gov (United States)

    In'nami, Yo; Koizumi, Rie

    2013-01-01

    The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…

  8. Improving IRT Parameter Estimates with Small Sample Sizes: Evaluating the Efficacy of a New Data Augmentation Technique

    Science.gov (United States)

    Foley, Brett Patrick

    2010-01-01

    The 3PL model is a flexible and widely used tool in assessment. However, it suffers from limitations due to its need for large sample sizes. This study introduces and evaluates the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation through a simulation study. Data are augmented using…

  9. Approximate Sample Size Formulas for Testing Group Mean Differences when Variances Are Unequal in One-Way ANOVA

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2008-01-01

    This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…

  10. A practical guide to calculating Cohen’s f2, a measure of local effect size, from PROC MIXED

    Directory of Open Access Journals (Sweden)

    Arielle S. Selya

    2012-04-01

    Full Text Available Reporting effect sizes in scientific articles is increasingly widespread and encouraged by journals; however, choosing an effect size for analyses such as mixed-effects regression modeling and hierarchical linear modeling can be difficult. One relatively uncommon, but very informative, standardized measure of effect size is Cohen’s f2, which allows an evaluation of local effect size, i.e. one variable’s effect size within the context of a multivariate regression model. Unfortunately, this measure is often not readily accessible from commonly used software for repeated-measures or hierarchical data analysis. In this guide, we illustrate how to extract Cohen’s f2 for two variables within a mixed-effects regression model using PROC MIXED in SAS ® software. Two examples of calculating Cohen’s f2 for different research questions are shown, using data from a longitudinal cohort study of smoking development in adolescents. This tutorial is designed to facilitate the calculation and reporting of effect sizes for single variables within mixed-effects multiple regression models, and is relevant for analysis of repeated-measures or hierarchical/multilevel data that are common in experimental psychology, observational research, and clinical or intervention studies.
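
    The local effect size discussed above follows the usual definition f2 = (R2_full - R2_reduced) / (1 - R2_full), where the reduced model omits the variable of interest; in the mixed-model tutorial the R2 values are derived from the covariance parameter estimates of full and reduced PROC MIXED fits. A minimal sketch with hypothetical R2 values follows.

        def cohens_f2_local(r2_full, r2_reduced):
            """Local Cohen's f2 for the variable(s) omitted from the reduced model."""
            return (r2_full - r2_reduced) / (1.0 - r2_full)

        # Hypothetical R-squared values from a full model (with the predictor of
        # interest) and a reduced model (without it).
        print(round(cohens_f2_local(r2_full=0.36, r2_reduced=0.30), 3))   # 0.094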

  11. Sample sizes for the SF-6D preference based measure of health from the SF-36: a practical guide

    OpenAIRE

    Walters, SJ; Brazier, JE

    2002-01-01

    Background Health Related Quality of Life (HRQoL) measures are becoming more frequently used in clinical trials and health services research, both as primary and secondary endpoints. Investigators are now asking statisticians for advice on how to plan and analyse studies using HRQoL measures, which includes questions on sample size. Sample size requirements are critically dependent on the aims of the study, the outcome measure and its summary measure, the effect size and the method of cal...

  12. A Refined MCMC Sampling from RKHS for PAC-Bayes Bound Calculation

    Directory of Open Access Journals (Sweden)

    Li Tang

    2014-04-01

    Full Text Available PAC-Bayes risk bound integrating theories of Bayesian paradigm and structure risk minimization for stochastic classifiers has been considered as a framework for deriving some of the tightest generalization bounds. A major issue in practical use of this bound is estimations of unknown prior and posterior distributions of the concept space. In this paper, by formulating the concept space as Reproducing Kernel Hilbert Space (RKHS) using the kernel method, we proposed a refined Markov Chain Monte Carlo (MCMC) sampling algorithm by incorporating feedback information of the simulated model over training examples for simulating posterior distributions of the concept space. Furthermore, we used a kernel density method to estimate their probability distributions in calculating the Kullback-Leibler divergence of the posterior and prior distributions. The experimental results on two artificial data sets show that the simulation is reasonable and effective in practice.

  13. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) for pain, fatigue, disease activity, self-care ability, and emotional wellbeing, and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, from 96 to 71 (73) for fatigue, from 57 to 51 (48) for disease activity, from 59 to 44 (45) for self-care, and from 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
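
    One standard way to see why repeated measurements reduce the required sample size is to treat each subject's outcome as the mean of m equally correlated measurements, which deflates its variance by (1 + (m - 1)ρ)/m. The sketch below uses this normal-approximation shortcut with hypothetical values; the paper's own figures come from its trial data, not from this formula.

        from math import ceil
        from scipy.stats import norm

        def n_per_group(effect_size, m=1, rho=0.5, alpha=0.05, power=0.80):
            """Two-sample per-group n when each subject contributes the mean of m
            equally correlated measurements (normal approximation)."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            deflation = (1 + (m - 1) * rho) / m        # variance factor from averaging
            return ceil(2 * (z / effect_size) ** 2 * deflation)

        for m in (1, 3, 5):
            print(m, n_per_group(effect_size=0.5, m=m, rho=0.6))   # 63, 47, 43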

  14. Sample size requirements and analysis of tag recoveries for paired releases of lake trout

    Science.gov (United States)

    Elrod, Joseph H.; Frank, Anthony

    1990-01-01

    A simple chi-square test can be used to analyze recoveries from a paired-release experiment to determine whether differential survival occurs between two groups of fish. The sample size required for analysis is a function of (1) the proportion of fish stocked, (2) the expected proportion at recovery, (3) the level of significance (α) at which the null hypothesis is tested, and (4) the power (1-β) of the statistical test. Detection of a 20% change from a stocking ratio of 50:50 requires a sample of 172 (α=0.10; 1-β=0.80) to 459 (α=0.01; 1-β=0.95) fish. Pooling samples from replicate pairs is sometimes an appropriate way to increase statistical precision without increasing numbers stocked or sampling intensity. Summing over time is appropriate if catchability or survival of the two groups of fish does not change relative to each other through time. Twelve pairs of identical groups of yearling lake trout Salvelinus namaycush were marked with coded wire tags and stocked into Lake Ontario. Recoveries of fish at ages 2-8 showed differences of 1-14% from the initial stocking ratios. Mean tag recovery rates were 0.217%, 0.156%, 0.128%, 0.121%, 0.093%, 0.042%, and 0.016% for ages 2-8, respectively. At these rates, stocking 12,100-29,700 fish per group would yield samples of 172-459 fish at ages 2-8 combined.
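
    The quoted range of 172-459 fish is consistent with the continuity-corrected one-sample binomial sample size for testing a recovery proportion of 0.50 against 0.60 (a 20% shift); the formula choice below is an assumption on our part, not stated in the abstract.

        from math import ceil, sqrt
        from scipy.stats import norm

        def n_binomial(p0, p1, alpha, power):
            """One-sample binomial sample size, two-sided, with continuity correction."""
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            n = ((za * sqrt(p0 * (1 - p0)) + zb * sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
            return ceil(n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p0)))) ** 2)

        # Detecting a 20% change from a 50:50 recovery ratio (0.50 -> 0.60):
        print(n_binomial(0.5, 0.6, alpha=0.10, power=0.80))   # 172
        print(n_binomial(0.5, 0.6, alpha=0.01, power=0.95))   # 459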

  15. ITER poloidal field-full size joint sample: DC test results

    International Nuclear Information System (INIS)

    The ITER Poloidal Field-Full Size Joint Sample is a prototype sample dedicated to the tests of both conductor and joint relevant to the ITER Poloidal Field Coils. This sample, made of two straight conductor legs connected at one end to each other, has been manufactured in industry and extensively tested in the SULTAN facility at Villigen, Switzerland. This paper is dedicated to a detailed analysis of the DC performances of the two legs, based on the same conductor layout, but with two different NbTi strands: a Nickel coated pure copper matrix strand from Europa Metalli (EM), and a bare strand from ALSTOM (AL), with internal cupronickel barrier. Critical currents, and/or quench currents, and current sharing temperatures have been measured in the range of interest for the ITER magnets, namely with a background field from 4 to 7 T, temperatures in the 4.5-7.5 K range, and sample currents up to 80 kA. An 'instability' of these conductors, more pronounced in the EM leg despite the higher Cu:non-Cu ratio of the strand, has been evidenced, in agreement with the behavior of other large NbTi conductors. The effect of cycling on both critical and quench currents showed no significant evolution, as the tests had been performed on virgin samples and repeated after more than 500 electromagnetic cycles. The results made it possible to identify the features common to the two legs, peculiar to large NbTi CIC conductors, and the differences in the performances of the two legs, which could be attributed to a different current distribution among the main sub-cables (petals). (authors)

  16. Uranium dust concentration measured in a conversion plant by aerosol sampling and application for dose calculation

    International Nuclear Information System (INIS)

    COMURHEX is a plant for converting mining concentrates into UF4. The atmosphere in different facilities is monitored daily using aerosol sampling devices (APA) placed in selected locations depending upon the workstations used by the operators. The results, entered every day into a computer program, can be displayed on individual diagrams for each shop. This program allows urinary uranium analyses over a given threshold to be targeted in addition to the systematic analysis performed periodically. In 1996, 23 urinary analyses corresponding to six events exceeding APA guide values were investigated. A direct approximation of systematic contamination from measurement data has recently been described using a deconvolution of individual monitoring results. Uptakes calculated from urine analysis using this method are correlated with the increase of the APA values. This method implies that a specific monitoring protocol be developed by setting a minimum number of urinary analyses per year and a maximum interval between two examinations, while considering the chemical composition of the components and the urinary level measurements. Internal dosimetry based only on APA values is not sufficient for operational medical monitoring. To reduce the uncertainties in dose calculation, a special program based on bioassay analysis initiated by the APA guide values is better adapted to estimating the internal dose to each worker in the different facilities of the plant. (author)

  17. Fundamental mode perturbation theory applied to reactor experiment calculations using small samples of fast multiplying media

    International Nuclear Information System (INIS)

    A new method for the interpretation of reactivity experiments is presented. The method employed is first order perturbation theory in the fundamental mode formalism. It can be used in the interpretation of absolute experiments (measurements of buckling, critical mass, etc.) as well as differential experiments. Diffusion and transport theory are employed in parallel. The method is developed for different approximations of the scattering and for geometries in one and two dimensions. The problem of estimating the first order approximation in perturbation theory has been analysed. This analysis enables the definition of a new criterion which depends upon quantities that one can calculate by means of fundamental mode codes. Numerous problems of practical application of the method have been analysed: the elimination of parasitic effects (interface effects, streaming effects), the application of the multigroup formalism instead of the exact formalism, etc. The method is applied to the interpretation of experiments of different types: sodium voiding in uranium and plutonium lattices, substitution experiments with iron oxide (Fe2O3), and oscillation experiments performed with plutonium samples rich in the higher plutonium isotopes. The use of the first order approximation makes it possible to exploit the superposition principle; the differential effects (measured and/or calculated) may be added. This is a particularly interesting feature for the interpretation as well as for the practical realisation of experiments. It is shown that the class of experiments which can be interpreted by this new method is very large

  18. Wave Optical Calculation of Probe Size in Low Energy Scanning Electron Microscope

    Czech Academy of Sciences Publication Activity Database

    Radlička, Tomáš

    2015-01-01

    Vol. 21, S4 (2015), pp. 212-217. ISSN 1431-9276. R&D Projects: GA MŠk(CZ) LO1212. Institutional support: RVO:68081731. Keywords: scanning electron microscope * optical calculation. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering. Impact factor: 1.877, year: 2014

  19. What about N? A methodological study of sample-size reporting in focus group studies

    Directory of Open Access Journals (Sweden)

    Glenton Claire

    2011-03-01

    Full Text Available Abstract Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals and, secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method

  20. Calculation of Droplet Size and Formation Time in Electrohydrodynamic Based Pulsatile Drug Delivery System

    CERN Document Server

    Zheng, Yi; Hu, Junqiang; Lin, Qiao

    2012-01-01

    Electrohydrodynamic (EHD) generation, a commonly used method in BioMEMS, has played a significant role in pulsed-release drug delivery systems for a decade. In this paper, an EHD based drug delivery system is designed which can generate a single drug droplet as small as 2.83 nL in 8.5 ms with a total device size of 2x2x3 mm^3 and an externally supplied voltage of 1500 V. Theoretically, we derive the expressions for the size and the formation time of a droplet generated by the EHD method, taking into account the drug supply rate, properties of the liquid, gap between electrodes, nozzle size, and charged droplet neutralization. This work demonstrates a repeatable, stable and controllable droplet generation and delivery system based on the EHD method.

  1. Deformation mechanisms in micron-sized PST TiAl compression samples: Experiment and model

    International Nuclear Information System (INIS)

    Highlights: → In situ micro-compression testing of lamellar TiAl crystals in a SEM. → Mechanical twinning and dislocation glide are analyzed by TEM. → A model is developed to describe the twin induced deformation. → The size of the mechanical twins is correlated with the width of the TiAl lamellae. - Abstract: Titanium aluminides are the most promising intermetallics for use in aerospace and automotive applications. Consequently, it is of fundamental interest to explore the deformation mechanisms occurring in this class of materials. One model material which is extensively used for such studies is polysynthetically twinned (PST) TiAl crystals, which consist predominantly of parallel γ-TiAl and, to a lesser extent, α2-Ti3Al lamellae. In the present study, PST TiAl crystals with a nominal composition of Ti-50 at.% Al were machined by means of the focused ion beam (FIB) technique into miniaturized compression samples with a square cross-section of approximately 9 μm x 9 μm. Compression tests on the miniaturized samples were performed in situ inside a scanning electron microscope using a microindenter equipped with a diamond flat punch. After deformation, thin foils were cut from the micro-compression samples and thinned to electron transparency using a FIB machine in order to study the deformation structure by transmission electron microscopy (TEM). The TEM studies reveal mechanical twinning as the main deformation mechanism at strains of 5.4%, while at strains of 8.3% dislocation glide becomes increasingly important. The experimentally observed twins scale in size with the width of the γ-TiAl lamella. A kinematic and thermodynamic model is developed to describe the twin-related length change of the micro-compression sample at small strains as well as the increase of twin width with increasing γ-TiAl lamella thickness. The developed twin model predicts a width of the twins in the range of a few nanometers, which is in agreement with experimental

  2. Sample size for logistic regression with small response probability. Technical report No. 33

    Energy Technology Data Exchange (ETDEWEB)

    Whittemore, A S

    1980-03-01

    The Fisher information matrix for the estimated parameters in a multiple logistic regression can be approximated by the augmented Hessian matrix of the moment generating function for the covariates. The approximation is valid when the probability of response is small. With its use one can obtain a simple closed-form estimate of the asymptotic covariance matrix of the maximum-likelihood parameter estimates, and thus approximate sample sizes needed to test hypotheses about the parameters. The method is developed for selected distributions of a single covariate, and for a class of exponential-type distributions of several covariates. It is illustrated with an example concerning risk factors for coronary heart disease. 2 figures, 2 tables.

  3. Sample Size Dependence of Second Magnetization Peak in Type-II Superconductors

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    We show that the second magnetization peak (SMP), i.e., an increase in the magnetization hysteresis loop width in type-II superconductors, vanishes for samples smaller than a critical size. We argue that the SMP is not related to critical current enhancement but can be well explained within the framework of the thermomagnetic flux-jump instability theory, where flux jumps reduce the absolute irreversible magnetization relative to the isothermal critical state value at low enough magnetic fields. The recovery of the isothermal critical state with increasing field leads to the SMP. The low-field SMP takes place in both low-Tc conventional and high-Tc unconventional superconductors. Our results show that the restoration of the isothermal critical state is responsible for the SMP occurrence in both cases.

  4. Estimating survival rates in ecological studies with small unbalanced sample sizes: an alternative Bayesian point estimator

    Directory of Open Access Journals (Sweden)

    Christian Damgaard

    2011-12-01

    Full Text Available Increasingly, the survival rates in experimental ecology are presented using odds ratios or log response ratios, but the use of ratio metrics has a problem when all the individuals have either died or survived in only one replicate. In the empirical ecological literature, the problem often has been ignored or circumvented by different, more or less ad hoc approaches. Here, it is argued that the best summary statistic for communicating ecological results of frequency data in studies with small unbalanced samples may be the mean of the posterior distribution of the survival rate. The developed approach may be particularly useful when effect size indexes, such as odds ratios, are needed to compare frequency data between treatments, sites or studies.
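
    With a conjugate Beta prior, the proposed point estimator is just the posterior mean of a binomial survival rate, which stays well defined when every individual in a replicate died or survived; a minimal sketch follows (the uniform Beta(1, 1) prior is an illustrative choice, not necessarily the one used in the paper).

        def posterior_mean_survival(survivors, n, a=1.0, b=1.0):
            """Posterior mean of a binomial survival rate under a Beta(a, b) prior."""
            return (survivors + a) / (n + a + b)

        # Defined even in the extreme replicates where odds ratios and log response
        # ratios break down (0 or 100% survival).
        print(posterior_mean_survival(0, 5))   # ~0.14 with a uniform prior
        print(posterior_mean_survival(5, 5))   # ~0.86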

  5. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    OpenAIRE

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.

    2011-01-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009]. Dosimetric evaluations against Monte Carlo dose calculations are conducted on 10 IMRT treatment plans (5 head-and-neck c...

  6. Analysis of Pipe Size Influence on Pipeline Displacement with Plain Dent Based on FE Calculation

    Directory of Open Access Journals (Sweden)

    Ying Wu

    2013-01-01

    Full Text Available According to a report of the United States Department of Transportation, mechanical damage is one of the most important causes of pipeline accidents. The most typical form of mechanical damage is indentation. A dent defect is one of the important factors affecting pipeline fatigue life, and it will greatly reduce the fatigue life of a pipeline in service. Meanwhile, the dent displacement will change with the operating pressure fluctuations of the in-service pipeline, resulting in a circular bending stress which directly affects the pipeline fatigue life. For a typical plain dent on a pipeline, finite element models were established under different circumstances. A large number of calculation results were collected and sorted. On this basis, the results were analyzed by univariate analysis. Non-linear regression analysis was utilized to fit the results, and specific expressions for the relationship between the dented pipeline displacement and the diameter and wall thickness of the pipeline were obtained after extensive calculation and analysis.

  7. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.; Leontides, L.

    2013-01-01

    SUMMARY Disease cases are often clustered within herds or generally groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average measure of the data heterogeneity, has been used to modify formulae for individual sample size estimation. However, subgroups of animals sharing common characteristics may exhibit excessively less or more heterogeneity. Hence, sample size estimates based on the ICC may not achieve the desired precision and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different...
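
    For contrast with the proposed VPC approach, the traditional ICC-based adjustment inflates an individual-based sample size by the design effect 1 + (m - 1)*ICC for clusters (herds) of size m; the sketch below uses hypothetical numbers and represents the baseline method the authors argue can miss subgroups with markedly different heterogeneity.

        from math import ceil

        def adjust_for_clustering(n_individual, cluster_size, icc):
            """Inflate an individual-based sample size by the design effect."""
            return ceil(n_individual * (1 + (cluster_size - 1) * icc))

        # Hypothetical: 300 animals needed under independence, herds of 30 sampled.
        print(adjust_for_clustering(300, 30, icc=0.05))   # 735
        print(adjust_for_clustering(300, 30, icc=0.20))   # 2040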

  8. Separation and enrichment of trace ractopamine in biological samples by uniformly-sized molecularly imprinted polymers

    Institute of Scientific and Technical Information of China (English)

    Ya Li; Qiang Fua; Meng Liu; Yuan-Yuan Jiao; Wei Du; Chong Yu; Jing Liu; Chun Chang; Jian Lu

    2012-01-01

    In order to prepare a high capacity packing material for solid-phase extraction with specific recognition ability for trace ractopamine in biological samples, uniformly-sized molecularly imprinted polymers (MIPs) were prepared by a multi-step swelling and polymerization method using methacrylic acid as a functional monomer, ethylene glycol dimethacrylate as a cross-linker, and toluene as a porogen. Scanning electron microscopy and specific surface area measurements were employed to identify the characteristics of the MIPs. Ultraviolet spectroscopy, Fourier transform infrared spectroscopy, Scatchard analysis and a kinetic study were performed to interpret the specific recognition ability and the binding process of the MIPs. The results showed that, compared with other reports, the MIPs synthesized in this study had a high adsorption capacity in addition to specific recognition ability. The adsorption capacity of the MIPs was 0.063 mmol/g at 1 mmol/L ractopamine concentration, with a distribution coefficient of 1.70. The resulting MIPs could be used as solid-phase extraction materials for separation and enrichment of trace ractopamine in biological samples.

  9. Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples

    Directory of Open Access Journals (Sweden)

    Inés Lozano-Ramos

    2015-05-01

    Full Text Available Renal biopsy is the gold-standard procedure to diagnose most renal pathologies. However, this invasive method is of limited repeatability and often describes irreversible renal damage. Urine is an easily accessible fluid and urinary extracellular vesicles (EVs) may be ideal to describe new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them contain a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC) as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed by protein content and flow cytometry to determine the presence of tetraspanin markers (CD63 and CD9). The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM) and nanoparticle tracking analysis, revealing the presence of EVs. When analysed by sodium dodecyl sulphate–polyacrylamide gel electrophoresis, tetraspanin-peak fractions from urine concentrated samples contained multiple bands but the main urine proteins (such as Tamm–Horsfall protein) were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species. To summarize, our results demonstrated that concentrated urine followed by SEC is a suitable option to isolate EVs with a low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches.

  10. Sample size matters in dietary gene expression studies—A case study in the gilthead sea bream (Sparus aurata L.

    Directory of Open Access Journals (Sweden)

    Fotini Kokou

    2016-05-01

    Full Text Available One of the main concerns in gene expression studies is the calculation of statistical significance, which in most cases remains low due to limited sample size. Increasing the number of biological replicates translates into more effective gains in power, which, especially in nutritional experiments, is of great importance, as individual variation of growth performance parameters and feed conversion is high. The present study investigates this issue in the gilthead sea bream Sparus aurata, one of the most important Mediterranean aquaculture species. For 24 gilthead sea bream individuals (biological replicates), the effects of gradual substitution of fish meal by plant ingredients in the diets (0% (control), 25%, 50% and 75%) were studied by looking at the expression levels of four immune- and stress-related genes in intestine, head kidney and liver. The present results showed that only the lowest substitution percentage is tolerated and that liver is the most sensitive tissue for detecting gene expression variations in relation to fish meal substituted diets. Additionally, the use of three independent biological replicates was evaluated by calculating the averages of all possible triplets, in order to assess the suitability of the selected genes as stress indicators as well as the impact of the experimental set-up, in the present work the fish meal substitution. The apparent gene expression results differed depending on the selected biological triplet. Only for two genes in liver (hsp70 and tgf) was significant differential expression assured independently of the triplet used. These results underline the importance of choosing an adequate sample number, especially when significant but minor differences in gene expression levels are observed.

  11. Module development for steam explosion load estimation-I. methodology and sample calculation

    International Nuclear Information System (INIS)

    A methodology has been suggested to develop a module which is able to estimate the steam explosion load within the integral code structure. As the first step of module development, TEXAS-V, a one-dimensional mechanistic code for steam explosion analysis, was selected and sample calculations were performed. At this stage, the characteristics of the TEXAS-V code were identified and the analysis capability was set up. A sensitivity study was also performed on uncertain code parameters such as mesh number, mesh cross-sectional area, mixing completion condition, and triggering magnitude. A melt jet with a diameter of 0.15 m and a velocity of 9 m/s was poured into water at 1 atm, 363 K, and 1.1 m depth during 0.74 s. Of the total of 947 kg of melt, 197 kg was mixed with the water. The explosion peak pressure, propagation speed, and conversion ratio based on the mixed melt were evaluated as 40 MPa, 1500 m/s, and 2%, respectively. The triggering magnitude did not show any effect on the explosion strength once the explosion was started. The explosion violence was sensitive to the mesh number, mesh area, and mixing completion condition, mainly because the mixture condition is dependent upon these parameters. Additional study of these parameters needs to be done

  12. Calculation of the effective diffusion coefficient during the drying of clay samples

    OpenAIRE

    Vasić Miloš; Radojević Zagorka; Grbavčić Željko

    2012-01-01

    The aim of this study was to calculate the effective diffusion coefficient based on experimentally recorded drying curves for two masonry clays obtained from different localities. The calculation method and two computer programs based on the mathematical treatment of Fick's second law and the Crank diffusion equation were developed. Masonry product shrinkage during drying was taken into consideration for the first time and the appropriate correction was entered into the calculation. ...
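
    A common slope-method sketch for extracting an effective diffusion coefficient from a drying curve uses the first term of the Crank solution for a plane sheet; the geometry, half-thickness and moisture-ratio data below are hypothetical, and the shrinkage correction developed in the paper is not included.

        import numpy as np

        # First-term Crank solution for a sheet of half-thickness L dried from both
        # faces: MR(t) ~ (8 / pi^2) * exp(-pi^2 * D_eff * t / (4 * L^2)), so
        # D_eff = -slope * 4 * L^2 / pi^2 with the slope taken from ln(MR) versus t.
        L = 0.005                                                  # half-thickness, m (assumed)
        t = np.arange(7) * 3600.0                                  # drying time, s
        MR = np.array([1.00, 0.74, 0.55, 0.41, 0.30, 0.22, 0.16])  # hypothetical moisture ratio

        slope, _ = np.polyfit(t[1:], np.log(MR[1:]), 1)            # skip t = 0
        D_eff = -slope * 4.0 * L**2 / np.pi**2
        print(f"effective diffusion coefficient: {D_eff:.2e} m^2/s")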

  13. Size of samples for nutritional status assessment of mango trees

    Directory of Open Access Journals (Sweden)

    Danilo Eduardo Rozane

    2007-08-01

    Full Text Available Of all the steps involved in foliar diagnosis, sampling is still the one most subject to error. The present study aimed to determine the size of foliar samples and the variation of the sampling error for leaf collection in mango orchards. The experiment used a completely randomized design with six replicates and four treatments, which consisted of collecting one leaf at each of the four cardinal points of 5, 10, 20 and 40 plants. Based on the nutrient concentration results, the means, variances, standard errors of the means, the confidence interval for the mean and the percentage error in relation to the mean were calculated, the latter through the semi-amplitude of the confidence interval expressed as a percentage of the mean. It was concluded that, for the chemical determination of macronutrients, 10 mango plants would be sufficient, collecting one leaf at each of the four cardinal points of the plant. For the micronutrients, at least 20 plants would be necessary and, if Fe is considered, at least 30 plants should be sampled.

  14. Bayesian Sample Size Determination of Vibration Signals in Machine Learning Approach to Fault Diagnosis of Roller Bearings

    OpenAIRE

    Sahu, Siddhant; V. SUGUMARAN

    2014-01-01

    Sample size determination for a data set is an important statistical process for analyzing the data to an optimum level of accuracy and using minimum computational work. The applications of this process are credible in every domain which deals with large data sets and high computational work. This study uses Bayesian analysis for determination of minimum sample size of vibration signals to be considered for fault diagnosis of a bearing using pre-defined parameters such as the inverse standard...

  15. Calculations on the size of a reniform satellite screen protecting the kidneys during standing field irradiation of the whole abdomen

    International Nuclear Information System (INIS)

    A method is presented by means of which the necessary sizes of satellite screens can be estimated from the i.v. urogram image sizes of the kidneys. Using 37 i.v. urograms, mean values were determined for the imaged length, breadth and area of the kidneys. CT-assisted measurement of the position of the renal plane in the cross-section of the body during irradiation planning demonstrates a direct relationship between organ position and body diameter. It is therefore possible to calculate the necessary sizes of the satellite screens from the kidney image sizes in the i.v. urogram and the body diameter. Using a standard screen, fixed according to the mean values and standard deviations, the kidneys can be shielded sufficiently from the effective beam. Positioning of the screens was done with a therapy simulator, with an additional image taken in the abdominal position. Using the angle between the axes of kidney and spine, the distance of the centre of gravity of the renal plane from the spinal axis, and the height of this point with respect to the longitudinal axis of the body, all measured in the i.v. urogram, the necessary screen position can be localized under fluoroscopic screening and marked on the patient in the usual way. This method represents an essential simplification of the previously existing procedure and improves the reliability of patient set-up at the irradiation device. (author)

  16. Effect of Reiki therapy on pain and anxiety in adults: an in-depth literature review of randomized trials with effect size calculations.

    Science.gov (United States)

    Thrane, Susan; Cohen, Susan M

    2014-12-01

    The objective of this study was to calculate the effect of Reiki therapy for pain and anxiety in randomized clinical trials. A systematic search of PubMed, ProQuest, Cochrane, PsychInfo, CINAHL, Web of Science, Global Health, and Medline databases was conducted using the search terms pain, anxiety, and Reiki. The Center for Reiki Research was also examined for articles. Studies that used randomization and a control or usual care group, used Reiki therapy in one arm of the study, were published in 2000 or later in peer-reviewed journals in English, and measured pain or anxiety were included. After removing duplicates, 49 articles were examined and 12 articles received full review. Seven studies met the inclusion criteria: four articles studied cancer patients, one examined post-surgical patients, and two analyzed community dwelling older adults. Effect sizes were calculated for all studies using Cohen's d statistic. Effect sizes for within group differences ranged from d = 0.24 for decrease in anxiety in women undergoing breast biopsy to d = 2.08 for decreased pain in community dwelling adults. The between group differences ranged from d = 0.32 for decrease of pain in a Reiki versus rest intervention for cancer patients to d = 4.5 for decrease in pain in community dwelling adults. Although the number of studies is limited, based on the size of the Cohen's d statistics calculated in this review, there is evidence to suggest that Reiki therapy may be effective for pain and anxiety. Continued research using Reiki therapy with larger sample sizes, consistently randomized groups, and standardized treatment protocols is recommended. PMID:24582620
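
    For reference, a minimal sketch of the effect-size statistic used in the review, Cohen's d with a pooled standard deviation; the pain-score arrays below are hypothetical.

```python
# Minimal sketch of Cohen's d for two independent groups (pooled SD).
# The post-treatment pain scores are hypothetical.
import numpy as np

def cohens_d(x, y):
    """Cohen's d = (mean(x) - mean(y)) / pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

control = np.array([4.2, 4.8, 3.9, 5.1, 4.4, 4.6])   # hypothetical control scores
reiki   = np.array([3.1, 2.8, 4.0, 3.5, 2.9, 3.3])   # hypothetical treatment scores
print(f"Cohen's d = {cohens_d(control, reiki):.2f}")
```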

  17. Effect of size and heterogeneity of samples on biomarker discovery: synthetic and real data assessment.

    Directory of Open Access Journals (Sweden)

    Barbara Di Camillo

    Full Text Available MOTIVATION: The identification of robust lists of molecular biomarkers related to a disease is a fundamental step for early diagnosis and treatment. However, methodologies for the discovery of biomarkers using microarray data often provide results with limited overlap. These differences are imputable to (1) dataset size (few subjects with respect to the number of features); (2) heterogeneity of the disease; (3) heterogeneity of experimental protocols and computational pipelines employed in the analysis. In this paper, we focus on the first two issues and assess, both on simulated (through an in silico regulation network model) and real clinical datasets, the consistency of candidate biomarkers provided by a number of different methods. METHODS: We extensively simulated the effect of heterogeneity characteristic of complex diseases on different sets of microarray data. Heterogeneity was reproduced by simulating both intrinsic variability of the population and the alteration of regulatory mechanisms. Population variability was simulated by modeling evolution of a pool of subjects; then, a subset of them underwent alterations in regulatory mechanisms so as to mimic the disease state. RESULTS: The simulated data allowed us to outline advantages and drawbacks of different methods across multiple studies and varying number of samples and to evaluate precision of feature selection on a benchmark with known biomarkers. Although comparable classification accuracy was reached by different methods, the use of external cross-validation loops is helpful in finding features with a higher degree of precision and stability. Application to real data confirmed these results.

  18. Sample size and sampling errors as the source of dispersion in chemical analyses. [for high-Ti lunar basalt

    Science.gov (United States)

    Clanton, U. S.; Fletcher, C. R.

    1976-01-01

    The paper describes a Monte Carlo model for simulation of two-dimensional representations of thin sections of some of the more common igneous rock textures. These representations are extrapolated to three dimensions to develop a volume of 'rock'. The model (here applied to a medium-grained high-Ti basalt) can be used to determine a statistically significant sample for a lunar rock or to predict the probable errors in the oxide contents that can occur during the analysis of a sample that is not representative of the parent rock.

  19. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
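
    As a hedged illustration of this kind of conversion (for the case where a trial reports only the minimum, median, maximum and sample size), the sketch below uses estimators commonly associated with this literature; the paper itself proposes refinements, so treat these formulas as illustrative rather than as its exact method.

```python
# Hedged sketch: estimate a sample mean and SD when only the median, minimum,
# maximum and sample size are reported. Formulas are the ones commonly cited
# in this literature; verify against the paper before using in a meta-analysis.
from scipy.stats import norm

def estimate_mean_sd(a, m, b, n):
    """a = minimum, m = median, b = maximum, n = sample size."""
    mean = (a + 2 * m + b) / 4
    # Range divided by the expected standardized range of a normal sample of size n.
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

mean_hat, sd_hat = estimate_mean_sd(a=10.0, m=14.0, b=21.0, n=36)  # hypothetical trial summary
print(f"estimated mean = {mean_hat:.2f}, estimated SD = {sd_hat:.2f}")
```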

  20. How to Calculate Range and Population Size for the Otter? The Irish Approach as a Case Study

    Directory of Open Access Journals (Sweden)

    Dierdre Lynn

    2011-01-01

    Full Text Available All EU Member States are obliged to submit reports to the EU Commission every 6 years, detailing the conservation status of species and habitats listed on the Habitats Directive. The otter (Lutra lutra) is one such species. Despite a number of national surveys that showed that the otter was widespread across the country, in Ireland’s 2007 conservation status assessment the otter was considered to be in unfavourable condition. While the Range, Habitat and Future Prospects categories were all considered favourable, Population was deemed to be unfavourable. This paper examines the data behind the 2007 assessment by Ireland, which included three national otter surveys and a series of radio-tracking studies. Range was mapped and calculated based on the results of national distribution surveys together with records submitted from the public. Population size was estimated by calculating the extent of available habitats (rivers, lakes and coasts), dividing that by the typical home range size and then multiplying the result by the proportion of positive sites in the most recent national survey. While the Range of the otter in Ireland did not decrease between the 1980/81 and the 2004/05 surveys, Population trend was calculated as -23.7%. As a consequence, the most recent national Red Data List for Ireland lists the species as Near Threatened (Marnell et al., 2009).
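
    A toy version of the population calculation described above, with hypothetical placeholder numbers rather than the figures used in the Irish assessment:

```python
# Toy illustration: habitat extent divided by typical home-range size, scaled
# by the proportion of positive survey sites. All values are hypothetical.
habitat_extent_km = 70000.0   # total length of rivers/lakes/coasts (hypothetical)
home_range_km = 10.0          # typical otter home range along a watercourse (hypothetical)
proportion_positive = 0.7     # fraction of survey sites with otter signs (hypothetical)

estimated_population = (habitat_extent_km / home_range_km) * proportion_positive
print(f"estimated population ~ {estimated_population:.0f} otters")
```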

  1. Relation between γ-ray detection efficiency and the source size of prepared aerosol filter sample

    International Nuclear Information System (INIS)

    By adding a solution of known activity, six disk sources with different diameters were prepared from a filter of 22.5 cm x 45 cm. Several HPGe detectors with different crystal dimensions were employed to detect the nuclides in the disk sources, and the relative detection efficiencies were calculated from the experiments. The results indicate that the source with the highest efficiency has a diameter between 50 mm and 60 mm within the energy range from 60 keV to 1116 keV. It is also found that the diameter of the optimum source increases slightly with the crystal diameter of the detector. Besides, a relatively thicker source disk with a smaller diameter is preferred for lower-energy rays compared with higher-energy rays, although this tendency is less obvious for the planar HPGe detector than for the coaxial HPGe detector. For the above samples, Monte Carlo simulations of the measurement with one detector were carried out, and the results accord with the experiments. Monte Carlo simulations for filters of other areas also show that the diameter of the source with optimum efficiency increases with the area of the filter.

  2. Family Configuration and Achievement: Effects of Birth Order and Family Size in a Sample of Brothers.

    Science.gov (United States)

    Olneck, Michael R.; Bills, David B.

    1979-01-01

    Birth order effects in brothers were found to derive from differences in family size. Effects for family size were found even with socioeconomic background controlled. Nor were family size effects explained by parental ability. The importance of unmeasured preferences or economic resources that vary across families was suggested. (Author/RD)

  3. Sampling hazelnuts for aflatoxin: Effects of sample size and accept/reject limit on reducing risk of misclassifying lots

    Science.gov (United States)

    About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants (CCFAC) began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in alm...

  4. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    Science.gov (United States)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-09-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.

  5. Sample Size Planning for the Squared Multiple Correlation Coefficient: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    Science.gov (United States)

    Kelley, Ken

    2008-01-01

    Methods of sample size planning are developed from the accuracy in parameter estimation approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…

  6. An improved model for the calculation of pore size distribution formation resistivity factor and permeability of reservoir rocks using cutting analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, A.; Chennakesavan, S. [Oklahoma Univ., Norman, OK (United States)

    2000-06-01

    Petrophysical properties measured on cutting samples form the basis of an improved model to determine the pore size distribution, formation resistivity factor, and permeability of reservoir rocks. The reservoir rock is represented by a network of pore segments on a lattice. The segments are modeled as triangular tubes of varying cross section, bounded by three or more ellipsoidal grains. Continuity of the network is ensured by connecting the segments at nodes, which are analogous to pore bodies. The authors present a method for determining the pore size distribution based on mercury injection capillary pressure curves measured in the laboratory, combined with a linear optimization technique and model predictions. The mean electrical and hydraulic conductances of the network are calculated using effective medium theory in order to simulate the formation resistivity factor and permeability of the rock sample. Experimentally measured permeabilities were compared with the model predictions and found to agree. Sphericity of the grains and mean grain size have an influence on the model predictions. The method was automated.

  7. Investigation of hydrophobic contaminants in an urban slough system using passive sampling - Insights from sampling rate calculations

    Science.gov (United States)

    McCarthy, K.

    2008-01-01

    Semipermeable membrane devices (SPMDs) were deployed in the Columbia Slough, near Portland, Oregon, on three separate occasions to measure the spatial and seasonal distribution of dissolved polycyclic aromatic hydrocarbons (PAHs) and organochlorine compounds (OCs) in the slough. Concentrations of PAHs and OCs in SPMDs showed spatial and seasonal differences among sites and indicated that unusually high flows in the spring of 2006 diluted the concentrations of many of the target contaminants. However, the same PAHs - pyrene, fluoranthene, and the alkylated homologues of phenanthrene, anthracene, and fluorene - and OCs - polychlorinated biphenyls, pentachloroanisole, chlorpyrifos, dieldrin, and the metabolites of dichlorodiphenyltrichloroethane (DDT) - predominated throughout the system during all three deployment periods. The data suggest that storm washoff may be a predominant source of PAHs in the slough but that OCs are ubiquitous, entering the slough by a variety of pathways. Comparison of SPMDs deployed on the stream bed with SPMDs deployed in the overlying water column suggests that even for the very hydrophobic compounds investigated, bed sediments may not be a predominant source in this system. Perdeuterated phenanthrene (phenanthrene-d10), spiked at a rate of 2 µg per SPMD, was shown to be a reliable performance reference compound (PRC) under the conditions of these deployments. Post-deployment concentrations of the PRC revealed differences in sampling conditions among sites and between seasons, but indicate that for SPMDs deployed throughout the main slough channel, differences in sampling rates were small enough to make site-to-site comparisons of SPMD concentrations straightforward. © Springer Science+Business Media B.V. 2007.

  8. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    Science.gov (United States)

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.

    2011-06-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (~5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  9. Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part I. Pulsation Measurements

    OpenAIRE

    Lee, Eun Gyung; Lee, Larry; Möhlmann, Carsten; Flemmer, Michael M.; Kashon, Michael; Harper, Martin

    2013-01-01

    Pulsations generated by personal sampling pumps modulate the airflow through the sampling trains, thereby varying sampling efficiencies, and possibly invalidating collection or monitoring. The purpose of this study was to characterize pulsations generated by personal sampling pumps relative to a nominal flow rate at the inlet of different respirable cyclones. Experiments were conducted using a factorial combination of 13 widely used sampling pumps (11 medium and 2 high volumetric flow rate pu...

  10. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray, E-mail: liucr@ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32610-0385 (United States)

    2015-04-15

    Purpose: The use of sophisticated dose calculation procedure in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. Methods: The computational time of finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, dose distribution from each beamlet is mathematically modeled such that the sizes of beamlets to represent an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of different sized and minimal number of beamlets. In addition, the authors included the model parameters to consider MLC for its rounded edge and transmission. Results: Root mean square error (RMSE) between treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (

  11. Optimizing Stream Water Mercury Sampling for Calculation of Fish Bioaccumulation Factors

    Science.gov (United States)

    Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive ...
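
    A minimal sketch of the ratio being described, with hypothetical concentrations and a unit conversion before dividing:

```python
# Minimal sketch: BAF = fish Hg concentration / water Hg concentration.
# Values and units are hypothetical; BAFs are often reported on a log10 scale.
import math

hg_fish_ug_per_kg = 250.0      # hypothetical fish-tissue Hg, ug/kg wet weight
hg_water_ng_per_L = 1.2        # hypothetical dissolved Hg in water, ng/L

# Convert fish tissue to ng/kg so both quantities share a common basis.
baf = (hg_fish_ug_per_kg * 1000.0) / hg_water_ng_per_L
print(f"BAF = {baf:.3g}  (log10 BAF = {math.log10(baf):.2f})")
```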

  12. Accurate measurement of sample conductivity in a diamond anvil cell with axis symmetrical electrodes and finite difference calculation

    Directory of Open Access Journals (Sweden)

    Jie Yang

    2011-09-01

    Full Text Available We report a relatively precise method for conductivity measurement in a diamond anvil cell with axis symmetrical electrodes and finite difference calculation. The axis symmetrical electrodes are composed of two parts: one is a round thin-film electrode deposited on the diamond facet and the other is the inside wall of the metal gasket. Owing to the axis symmetrical configuration of the two electrodes, the finite difference method can be applied to calculate the conductivity of the sample, which reduces the measurement error.

  13. Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations

    Directory of Open Access Journals (Sweden)

    Guillermo Macbeth

    2011-05-01

    Full Text Available The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-values interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software that performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed and a basic linear algebra proof of its equivalence is formally presented. Two worked examples in cognitive psychology are commented. A visual interpretation of Cliff's Delta is suggested. Availability, installation and applications of the program are presented and discussed.
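
    For orientation, a minimal sketch of the statistic such a calculator computes, assuming the usual dominance-count definition of Cliff's delta; the two samples below are hypothetical.

```python
# Minimal sketch of Cliff's delta: count how often values in one group exceed
# values in the other, scaled to [-1, 1]. The two samples are hypothetical.
import numpy as np

def cliffs_delta(x, y):
    """delta = (#(x_i > y_j) - #(x_i < y_j)) / (len(x) * len(y))"""
    x = np.asarray(x)[:, None]
    y = np.asarray(y)[None, :]
    greater = np.sum(x > y)
    less = np.sum(x < y)
    return (greater - less) / (x.size * y.size)

group_a = [12, 15, 14, 18, 20, 13]
group_b = [11, 9, 14, 10, 12, 8]
print(f"Cliff's delta = {cliffs_delta(group_a, group_b):.2f}")
```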

  14. Evaluation of 1H NMR relaxometry for the assessment of pore size distribution in soil samples

    NARCIS (Netherlands)

    Jaeger, F.; Bowe, S.; As, van H.; Schaumann, G.E.

    2009-01-01

    1H NMR relaxometry is used in earth science as a non-destructive and time-saving method to determine pore size distributions (PSD) in porous media with pore sizes ranging from nm to mm. This is a broader range than generally reported for results from X-ray computed tomography (X-ray CT) scanning, wh

  15. Does size matter? Separations on guard columns for fast sample analysis applied to bioenergy research

    OpenAIRE

    Bauer, Stefan; Ibanez, Ana B

    2015-01-01

    Background Increasing sample throughput is needed when large numbers of samples have to be processed. In chromatography, one strategy is to reduce column length for decreased analysis time. Therefore, the feasibility of analyzing samples simply on a guard column was explored using refractive index and ultraviolet detection. Results from the guard columns were compared to the analyses using the standard 300 mm Aminex HPX-87H column which is widely applied to the analysis of samples from many b...

  16. Multiscale sampling of plant diversity: Effects of minimum mapping unit size

    Science.gov (United States)

    Stohlgren, T.J.; Chong, G.W.; Kalkhan, M.A.; Schell, L.D.

    1997-01-01

    Only a small portion of any landscape can be sampled for vascular plant diversity because of constraints of cost (salaries, travel time between sites, etc.). Often, the investigator decides to reduce the cost of creating a vegetation map by increasing the minimum mapping unit (MMU), and/or by reducing the number of vegetation classes to be considered. Questions arise about what information is sacrificed when map resolution is decreased. We compared plant diversity patterns from vegetation maps made with 100-ha, 50-ha, 2-ha, and 0.02-ha MMUs in a 754-ha study area in Rocky Mountain National Park, Colorado, United States, using four 0.025-ha and 21 0.1-ha multiscale vegetation plots. We developed and tested species-log(area) curves, correcting the curves for within-vegetation type heterogeneity with Jaccard's coefficients. Total species richness in the study area was estimated from vegetation maps at each resolution (MMU), based on the corrected species-area curves, total area of the vegetation type, and species overlap among vegetation types. With the 0.02-ha MMU, six vegetation types were recovered, resulting in an estimated 552 species (95% CI = 520-583 species) in the 754-ha study area (330 plant species were observed in the 25 plots). With the 2-ha MMU, five vegetation types were recognized, resulting in an estimated 473 species for the study area. With the 50-ha MMU, 439 plant species were estimated for the four vegetation types recognized in the study area. With the 100-ha MMU, only three vegetation types were recognized, resulting in an estimated 341 plant species for the study area. Locally rare species and keystone ecosystems (areas of high or unique plant diversity) were missed at the 2-ha, 50-ha, and 100-ha scales. To evaluate the effects of minimum mapping unit size requires: (1) an initial stratification of homogeneous, heterogeneous, and rare habitat types; and (2) an evaluation of within-type and between-type heterogeneity generated by environmental
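
    As a hedged sketch of the species-log(area) extrapolation idea mentioned above (not the authors' exact procedure, which also corrects for within-type heterogeneity), one can fit S = c + z·log10(A) to nested plot data and extrapolate to the mapped area of a vegetation type; all numbers below are hypothetical.

```python
# Hedged sketch: fit a species-log(area) curve to nested plot data and
# extrapolate richness to the mapped area of one vegetation type.
# Plot areas and species counts are hypothetical.
import numpy as np

plot_area_ha = np.array([0.025, 0.1, 0.25, 1.0])     # nested plot areas (hypothetical)
species_count = np.array([18, 27, 33, 42])           # species observed at each area (hypothetical)

z, c = np.polyfit(np.log10(plot_area_ha), species_count, 1)   # S = c + z*log10(A)
mapped_area_ha = 120.0                                        # mapped extent of this type (hypothetical)
estimated_richness = c + z * np.log10(mapped_area_ha)
print(f"estimated richness for the mapped type ~ {estimated_richness:.0f} species")
```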

  17. Evaluation of collapsed cone convolution superposition (CCCS) algorithms in Prowess treatment planning system for calculating symmetric and asymmetric field sizes

    Directory of Open Access Journals (Sweden)

    Tamer Dawod

    2015-01-01

    Full Text Available Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. A full set of measurements for symmetric and asymmetric fields, including in-plane and cross-plane profiles at various depths and percentage depth doses (PDDs), was obtained on the linear accelerator. Results: The results showed that asymmetric collimation can lead to significant errors (up to approximately 7%) in dose calculations if the changes in primary beam intensity and beam quality are not accounted for. The largest differences in the isodose curves were found in the build-up and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.

  18. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    CERN Document Server

    Gu, Xuejun; Li, Jinsheng; Jia, Xun; Jiang, Steve B

    2011-01-01

    Targeting at developing an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009]. Dosimetric evaluations against MCSIM Monte Carlo dose calculations are conducted on 10 IMRT treatment plans with heterogeneous treatment regions (5 head-and-neck cases and 5 lung cases). For head and neck cases, when cavities exist near the target, the improvement with the 3D-density correction over the conventional FSPB algorithm is significant. However, when there are high-density dental filling materials in beam paths, the improvement is small and the accuracy of the new algorithm is still unsatisfactory. On the other hand, significant improvement of dose calculation accuracy is observed in all lung cases. Especially when the target is in the m...

  19. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

    Science.gov (United States)

    Mielke, Steven L; Truhlar, Donald G

    2016-01-21

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function. PMID:26801023

  20. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies

    Science.gov (United States)

    Mielke, Steven L.; Truhlar, Donald G.

    2016-01-01

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
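
    A generic illustration of the rejection-sampling step that both records describe, reduced to one dimension with Gaussians standing in for the path distributions; this is not the authors' WPIS code and all parameters are placeholders.

```python
# Generic rejection sampling: draw candidates from an easy proposal and accept
# each with probability w(x)/M, where w is the importance weight and M bounds w.
# Here a narrow Gaussian stands in for the "harmonically guided" target and a
# broad Gaussian for the free-particle-like proposal.
import numpy as np

rng = np.random.default_rng(0)

def importance_weight(x, sigma_target=0.5, sigma_prop=1.0):
    """Ratio of the narrower target density to the broad proposal density."""
    return (sigma_prop / sigma_target) * np.exp(-0.5 * x**2 * (1 / sigma_target**2 - 1 / sigma_prop**2))

M = 2.0                                                    # upper bound on the weight (sigma_prop/sigma_target)
candidates = rng.normal(0.0, 1.0, size=100_000)            # proposal draws
accept = rng.random(candidates.size) < importance_weight(candidates) / M
samples = candidates[accept]                               # distributed according to the target
print(f"acceptance rate = {accept.mean():.1%}, kept {samples.size} samples")
```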

  1. Using Symbolic TI Calculators in Engineering Mathematics: Sample Tasks and Reflections from a Decade of Practice

    Science.gov (United States)

    Beaudin, Michel; Picard, Gilles

    2010-01-01

    Starting in September 1999, new students at ETS were required to own the TI-92 Plus or TI-89 symbolic calculator and since September 2002, the Voyage 200. Looking back at these ten years of working with a computer algebra system on every student's desk, one could ask whether the introduction of this hand-held technology has really forced teachers…

  2. Homeopathy: statistical significance versus the sample size in experiments with Toxoplasma gondii

    Directory of Open Access Journals (Sweden)

    Ana Lúcia Falavigna Guilherme

    2011-09-01

    , examined in its full length. This study was approved by the Ethics Committee for animal experimentation of the UEM - Protocol 036/2009. The data were compared using the Mann-Whitney and bootstrap tests [7] with the statistical software BioStat 5.0. Results and discussion: There was no significant difference when the data were analyzed with the Mann-Whitney test, even when multiplying the "n" ten times (p=0.0618). The number of cysts observed in the BIOT 200DH group was 4.5 ± 3.3 and 12.8 ± 9.7 in the CONTROL group. Table 1 shows the results obtained using the bootstrap analysis for each data set enlarged from 2n up to 2n+5, and their respective p-values. By including more elements in the different groups, tested one by one, at random, gradually increasing the samples, we determined the sample size needed to statistically confirm the results seen experimentally. Using 17 mice in the BIOT 200DH group and 19 in the CONTROL group, statistical significance was already observed. This result suggests that experiments involving highly diluted substances and infection of mice with T. gondii should work with experimental groups of at least 17 animals. Despite the current and relevant ethical discussions about the number of animals used in experimental procedures, the number of animals involved in each experiment must meet the characteristics of the question being studied. In the case of experiments involving highly diluted substances, experimental animal models are still rudimentary and the biological effects observed appear to be individualized, as described in the homeopathy literature [8]. The fact that statistical significance was achieved by increasing the sample in this trial points to a rare event, with strongly individual behavior, which is difficult to demonstrate in a result set treated simply with a comparison of means or medians. Conclusion: Bootstrap seems to be an interesting methodology for the analysis of data obtained from experiments with highly diluted

  3. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.

  4. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    Science.gov (United States)

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group monitored eight urban source areas representing six types of source areas in or near Madison, Wis. in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging from 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.

  5. Efficient ab initio free energy calculations by classically assisted trajectory sampling

    Science.gov (United States)

    Wilson, Hugh F.

    2015-12-01

    A method for efficiently performing ab initio free energy calculations based on coupling constant thermodynamic integration is demonstrated. By the use of Boltzmann-weighted sums over states generated from a classical ensemble, the free energy difference between the classical and ab initio ensembles is readily available without the need for time-consuming integration over molecular dynamics trajectories. Convergence and errors in this scheme are discussed and characterised in terms of a quantity representing the degree of misfit between the classical and ab initio systems. Smaller but still substantial efficiency gains over molecular dynamics are also demonstrated for the calculation of average properties such as pressure and total energy for systems in equilibrium.
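
    One standard way to realize such a Boltzmann-weighted sum over classically generated states is the free-energy-perturbation identity ΔF = −kT ln⟨exp(−(E_ai − E_cl)/kT)⟩_cl; the sketch below is a hedged, generic illustration with hypothetical energy arrays, not the paper's implementation.

```python
# Hedged sketch: free energy difference between a classical and an "ab initio"
# ensemble via an exponential (Boltzmann-weighted) average over classical
# configurations. The energy arrays are hypothetical stand-ins.
import numpy as np

k_B = 8.617333262e-5          # eV/K
T = 1000.0                    # K
beta = 1.0 / (k_B * T)

rng = np.random.default_rng(1)
E_cl = rng.normal(0.0, 0.05, size=5000)            # classical energies, eV (hypothetical)
E_ai = E_cl + rng.normal(0.02, 0.03, size=5000)    # "ab initio" energies of the same configurations

dE = E_ai - E_cl
# Log-sum-exp form for numerical stability of the exponential average.
log_avg = np.logaddexp.reduce(-beta * dE) - np.log(dE.size)
delta_F = -log_avg / beta
print(f"F_ai - F_cl ~ {delta_F * 1000:.1f} meV per configuration sample")
```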

  6. Review of Methods for Calculating Pressure Profiles of Explosive Air Blast and its Sample Application

    OpenAIRE

    Chock, Jeffrey Mun Kong

    1999-01-01

    Blast profiles and two primary methods of determining them were reviewed for use in the creation of a computer program for calculating blast pressures which serves as a design tool to aid engineers or analysts in the study of structures subjected to explosive air blast. These methods were integrated into a computer program, BLAST.F, to generate air blast pressure profiles by one of these two differing methods. These two methods were compared after the creation of the program and can conserv...

  7. Calculation of gamma-ray mass attenuation coefficients of some Egyptian soil samples using Monte Carlo methods

    Science.gov (United States)

    Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan

    2014-08-01

    Monte Carlo simulations, FLUKA and Geant4, were performed to study mass attenuation for various types of soil at 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV photon energies. Appreciable variations are noted for all parameters by changing the photon energy and the chemical composition of the sample. The simulated parameters were compared with experimental data and with the XCOM program. The simulations show that the calculated mass attenuation coefficient values were closer to the experimental values than those obtained theoretically using the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may be employed to make additional calculations on the photon attenuation characteristics of different soil samples collected from other places.
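
    For context, a minimal sketch of the quantity being simulated, recovered from a narrow-beam transmission measurement via the Beer–Lambert relation μ/ρ = ln(I0/I)/(ρt); the counts, density and thickness below are hypothetical.

```python
# Minimal sketch: mass attenuation coefficient from a narrow-beam transmission
# measurement through a soil slab. All values are hypothetical.
import numpy as np

I0 = 12000.0          # counts without the sample (hypothetical)
I = 8500.0            # counts with the sample in the beam (hypothetical)
rho = 1.6             # soil bulk density, g/cm^3 (hypothetical)
t = 2.0               # sample thickness, cm (hypothetical)

mu_over_rho = np.log(I0 / I) / (rho * t)   # Beer-Lambert: I = I0 * exp(-(mu/rho) * rho * t)
print(f"mass attenuation coefficient ~ {mu_over_rho:.4f} cm^2/g")
```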

  8. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    Science.gov (United States)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-07-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.

  9. Diffuse myocardial fibrosis evaluation using cardiac magnetic resonance T1 mapping: sample size considerations for clinical trials

    Directory of Open Access Journals (Sweden)

    Liu Songtao

    2012-12-01

    Full Text Available Abstract Background Cardiac magnetic resonance (CMR) T1 mapping has been used to characterize myocardial diffuse fibrosis. The aim of this study is to determine the reproducibility and sample size of CMR fibrosis measurements that would be applicable in clinical trials. Methods A modified Look-Locker with inversion recovery (MOLLI) sequence was used to determine myocardial T1 values pre-, and 12 and 25 min post-administration of a gadolinium-based contrast agent at 3 Tesla. For 24 healthy subjects (8 men; 29 ± 6 years), two separate scans were obtained, (a) with a bolus of 0.15 mmol/kg of gadopentate dimeglumine and (b) with 0.1 mmol/kg of gadobenate dimeglumine, respectively, with an average of 51 ± 34 days between the two scans. Separately, 25 heart failure subjects (12 men; 63 ± 14 years) were evaluated after a bolus of 0.15 mmol/kg of gadopentate dimeglumine. The myocardial partition coefficient (λ) was calculated as ΔR1myocardium/ΔR1blood, and ECV was derived from λ by adjusting for (1 − hematocrit). Results Mean ECV and λ were both significantly higher in HF subjects than in healthy subjects (ECV: 0.287 ± 0.034 vs. 0.267 ± 0.028, p=0.002; λ: 0.481 ± 0.052 vs. 0.442 ± 0.037, p Conclusion ECV and λ quantification have a low variability across scans, and could be a viable tool for evaluating clinical trial outcomes.
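
    A minimal sketch of the two quantities defined in the abstract, λ = ΔR1_myocardium/ΔR1_blood (with R1 = 1/T1) and ECV = λ·(1 − hematocrit); the T1 values and hematocrit below are hypothetical.

```python
# Minimal sketch of the partition coefficient and ECV calculation.
# T1 values (ms) and hematocrit are hypothetical.
def delta_R1(t1_pre_ms, t1_post_ms):
    """Change in relaxation rate R1 = 1/T1 from pre- to post-contrast."""
    return 1.0 / t1_post_ms - 1.0 / t1_pre_ms

t1_myo_pre, t1_myo_post = 1200.0, 550.0      # hypothetical myocardial T1 pre/post contrast
t1_blood_pre, t1_blood_post = 1700.0, 400.0  # hypothetical blood-pool T1 pre/post contrast
hematocrit = 0.42                            # hypothetical

lam = delta_R1(t1_myo_pre, t1_myo_post) / delta_R1(t1_blood_pre, t1_blood_post)
ecv = lam * (1.0 - hematocrit)
print(f"lambda = {lam:.3f}, ECV = {ecv:.3f}")
```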

  10. Dose calculation of Acuros XB and Anisotropic Analytical Algorithm in lung stereotactic body radiotherapy treatment with flattening filter free beams and the potential role of calculation grid size

    International Nuclear Information System (INIS)

    The study aimed to appraise the dose differences between Acuros XB (AXB) and Anisotropic Analytical Algorithm (AAA) in stereotactic body radiotherapy (SBRT) treatment for lung cancer with flattening filter free (FFF) beams. Additionally, the potential role of the calculation grid size (CGS) on the dose differences between the two algorithms was also investigated. SBRT plans with 6X and 10X FFF beams produced from the CT scan data of 10 patients suffering from stage I lung cancer were enrolled in this study. Clinically acceptable treatment plans with AAA were recalculated using AXB with the same monitor units (MU) and identical multileaf collimator (MLC) settings. Furthermore, different CGS (2.5 mm and 1 mm) in the two algorithms was also employed to investigate their dosimetric impact. Dose to planning target volumes (PTV) and organs at risk (OARs) between the two algorithms were compared. PTV was separated into PTV-soft (density in soft-tissue range) and PTV-lung (density in lung range) for comparison. The dose to PTV-lung predicted by AXB was found to be 1.33 ± 1.12% (6XFFF beam with 2.5 mm CGS), 2.33 ± 1.37% (6XFFF beam with 1 mm CGS), 2.81 ± 2.33% (10XFFF beam with 2.5 mm CGS) and 3.34 ± 1.76% (10XFFF beam with 1 mm CGS) lower compared with that by AAA, respectively. However, the dose directed to PTV-soft was comparable. For OARs, AXB predicted a slightly lower dose to the aorta, chest wall, spinal cord and esophagus, regardless of whether the 6XFFF or 10XFFF beam was utilized. Exceptionally, dose to the ipsilateral lung was significantly higher with AXB. AXB principally predicts lower dose to PTV-lung compared to AAA and the CGS contributes to the relative dose difference between the two algorithms

  11. Inert gases in a terra sample - Measurements in six grain-size fractions and two single particles from Lunar 20.

    Science.gov (United States)

    Heymann, D.; Lakatos, S.; Walton, J. R.

    1973-01-01

    Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.

  12. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    Science.gov (United States)

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  13. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    Science.gov (United States)

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
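
    As a hedged, generic illustration of the Monte Carlo idea (not the authors' confirmatory-factor-analysis set-up), the sketch below estimates power for a simple two-group t-test by simulating many data sets at a given sample size and effect size.

```python
# Generic Monte Carlo power estimation: simulate many data sets under an
# assumed effect, analyze each, and report the proportion of significant
# results. Effect size and sample sizes are hypothetical.
import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect_size, n_sims=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (20, 50, 100):
    print(f"n per group = {n}: estimated power = {simulated_power(n, effect_size=0.5):.3f}")
```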

  14. A Comparison of the Exact Kruskal-Wallis Distribution to Asymptotic Approximations for All Sample Sizes up to 105

    Science.gov (United States)

    Meyer, J. Patrick; Seaman, Michael A.

    2013-01-01

    The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…

  15. [The preparation of blood samples for the automatic measurement of erythrocyte size].

    Science.gov (United States)

    Kishchenko, G P; Gorbatov, A F; Kostyrev, O A

    1990-01-01

    The following procedure is proposed: fixation for 12 h in low ionic strength solution (4 percent formaldehyde in 50 mM sodium phosphate buffer), drying of the suspension drop on the slide, gallocyanin staining. All red cells were contrast-stained without light central spot. The sizes of different red cell groups on the slide differed by less than 5 percent. PMID:1704944

  16. Aerosol sampling: Comparison of two rotating impactors for field droplet sizing and volumetric measurements

    Science.gov (United States)

    This paper compares the collection characteristics of a new rotating impactor for ultra fine aerosols (FLB) with the industry standard (Hock). The volume and droplet size distribution collected by the rotating impactors were measured via spectroscopy and microscopy. The rotary impactors were co-lo...

  17. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter resulting two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large size protein data. The suggested methodology is fairly general and...

  18. Diel differences in 0+ fish samples: effect of river size and habitat

    Czech Academy of Sciences Publication Activity Database

    Janáč, Michal; Jurajda, Pavel

    2013-01-01

    Roč. 29, č. 1 (2013), s. 90-98. ISSN 1535-1459 R&D Projects: GA MŠk LC522 Institutional research plan: CEZ:AV0Z60930519 Keywords : young-of-the-year fish * diurnal * nocturnal * habitat complexity * stream size Subject RIV: EG - Zoology Impact factor: 1.971, year: 2013

  19. A method for calculating solvation structure on a sample surface from a force curve between a probe and the sample: One-dimensional version

    CERN Document Server

    Amano, Ken-ichi

    2012-01-01

    Recent surface force apparatus (SFA) and atomic force microscopy (AFM) can measure force curves between a probe and a sample surface in solvent. The force curve is thought as the solvation structure in some articles, because its shape is generally oscilltive and pitch of the oscillation is about the same as diameter of the solvent. However, it is not the solvation structure. It is only the force between the probe and the sample surface. Therefore, this brief paper presents a method for calculating the solvation structure from the force curve. The method is constructed by using integral equation theory, a statistical mechanics of liquid (Ornstein-Zernike equation coupled by hypernetted-chain closure). This method is considered to be important for elucidation of the solvation structure on a sample surface.

  20. Simple and efficient way of speeding up transmission calculations with k-point sampling

    Directory of Open Access Journals (Sweden)

    Jesper Toft Falkenberg

    2015-07-01

    Full Text Available The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally “cheap” post-processing scheme to interpolate transmission functions over k-points to get smooth well-converged average transmission functions. This is relevant for data obtained using typical “expensive” first principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
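
    A generic sketch of the post-processing idea (not the authors' exact scheme): for each energy, interpolate the k-resolved transmission onto a denser k-grid before averaging; the coarse data below are synthetic placeholders.

```python
# Generic sketch: smooth a k-averaged transmission by interpolating T(E, k)
# over k onto a denser grid before averaging. Synthetic placeholder data.
import numpy as np

energies = np.linspace(-1.0, 1.0, 201)                 # energy grid, eV
k_coarse = np.linspace(0.0, 0.5, 6)                    # coarse transverse k-points
# Synthetic T(E, k): a step-like transmission whose onset disperses with k.
T = np.array([[1.0 / (1.0 + np.exp(-(E - 0.4 * k) / 0.03)) for k in k_coarse] for E in energies])

k_dense = np.linspace(0.0, 0.5, 201)                   # dense k-grid for interpolation
T_avg = np.array([np.interp(k_dense, k_coarse, T_E).mean() for T_E in T])
print(f"smoothed k-averaged transmission at E=0: {np.interp(0.0, energies, T_avg):.3f}")
```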

  1. Respiratory tract dose calculation considering physiological parameters from samples of Brazilian population

    International Nuclear Information System (INIS)

    The Human Respiratory Tract Model proposed in ICRP Publication 66 accounts for the morphology and physiology of the respiratory tract. ICRP 66 presents deposition fractions in the respiratory tract regions based on reference values for Caucasian man. However, in order to obtain a more accurate assessment of intake and dose, the ICRP recommends the use of population-specific information when it is available. The application of parameters from the Brazilian population in the deposition and clearance models shows significant variations in the deposition fractions and in the fraction of inhaled activity transferred to blood. The main objective of this study is to evaluate the influence on the calculated dose to each region of the respiratory tract when physiological parameters from the Brazilian population are applied in the model. The purpose of the dosimetric model is to evaluate the dose to each tissue of the respiratory tract that is potentially at risk from inhaled radioactive materials. The committed equivalent dose, HT, is calculated as the product of the total number of transformations of the radionuclide in source tissue S over a period of fifty years after incorporation and the energy absorbed per unit mass in target tissue T, for each radiation emitted per transformation in source tissue S. The dosimetric model of the Human Respiratory Tract was implemented in the software Excel for Windows (version 2000) and HT was determined in two stages. First, the total number of transformations, US, was calculated considering the fractional deposition of activity in each source tissue, and then the total energy absorbed per unit mass, SEE, in the target tissue was calculated. It was assumed that the radionuclide emits an alpha particle with an average energy of 5.15 MeV. The variation in the fractional deposition in the compartments of the respiratory tract when changing the physiological parameters from the Caucasian to the Brazilian adult man causes variation in the number of
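
    A hedged toy version of the product described above, HT = US × SEE(T←S); the number of transformations, the absorbed fraction and the target mass are hypothetical placeholders, and only the 5.15 MeV alpha energy is taken from the abstract.

```python
# Hedged toy calculation: committed equivalent dose as (number of nuclear
# transformations in the source region) x (energy absorbed per unit target
# mass per transformation), weighted for radiation type. All values except
# the 5.15 MeV alpha energy are hypothetical placeholders.
MEV_TO_J = 1.602176634e-13

U_S = 1.0e9                  # total transformations in the source tissue over 50 y (hypothetical)
E_alpha_MeV = 5.15           # alpha energy per transformation, as assumed in the study
absorbed_fraction = 1.0e-3   # fraction of emitted energy absorbed in the target (hypothetical)
target_mass_kg = 0.011       # target tissue mass (hypothetical)
w_R = 20.0                   # radiation weighting factor for alpha particles

SEE_J_per_kg = E_alpha_MeV * MEV_TO_J * absorbed_fraction / target_mass_kg  # per transformation
H_T_Sv = U_S * SEE_J_per_kg * w_R
print(f"committed equivalent dose ~ {H_T_Sv * 1000:.3f} mSv")
```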

  2. Simple and efficient way of speeding up transmission calculations with $k$-point sampling

    CERN Document Server

    Falkenberg, Jesper Toft

    2015-01-01

    The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally "cheap" post-processing scheme to interpolate transmission functions over $k$-points to get smooth well-converged average transmission functions. This is relevant for data obtained using typical "expensive" first principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.

  3. A technique for calculating the amplitude distribution of propagated fields by Gaussian sampling.

    Science.gov (United States)

    Cywiak, Moisés; Morales, Arquímedes; Servín, Manuel; Gómez-Medina, Rafael

    2010-08-30

    We present a technique to solve the Fresnel diffraction integral numerically by representing a given complex function as a finite superposition of complex Gaussians. Once an accurate representation of these functions is attained, it is possible to find their diffraction pattern analytically. There are two useful consequences of this representation: first, the analytical results may be used for further theoretical studies, and second, it may be used as a versatile and accurate numerical diffraction technique. The use of the technique is illustrated by calculating the intensity distribution in the vicinity of the focal region of an aberrated converging spherical wave emerging from a circular aperture. PMID:20940809

  4. Basic distribution free identification tests for small size samples of environmental data

    International Nuclear Information System (INIS)

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and the assumption of normal distributions is often unrealistic. On the other hand, the wide availability of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem and introduces two feasible non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full re-sampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study based on the Chernobyl children contamination data is given
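
    A minimal sketch of the full re-sampling (permutation) idea for two small samples is shown below; the test statistic, permutation count, and data values are illustrative choices, not those of the paper.

      import numpy as np

      def permutation_test(x, y, n_perm=10000, rng=None):
          """Two-sample permutation test on the difference of means.

          Returns a two-sided p-value obtained by re-sampling the pooled data,
          which avoids any normality assumption -- useful for small samples.
          """
          rng = np.random.default_rng(rng)
          pooled = np.concatenate([x, y])
          observed = abs(np.mean(x) - np.mean(y))
          count = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              diff = abs(pooled[: len(x)].mean() - pooled[len(x):].mean())
              count += diff >= observed
          return (count + 1) / (n_perm + 1)

      # Hypothetical small environmental data sets (illustrative values only)
      a = np.array([1.2, 0.8, 1.5, 2.1, 0.9])
      b = np.array([2.0, 2.4, 1.9, 2.8])
      print(permutation_test(a, b, rng=0))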

  5. Sample Sizes for Two-Group Second-Order Latent Growth Curve Models

    Science.gov (United States)

    Wanstrom, Linda

    2009-01-01

    Second-order latent growth curve models (S. C. Duncan & Duncan, 1996; McArdle, 1988) can be used to study group differences in change in latent constructs. We give exact formulas for the covariance matrix of the parameter estimates and an algebraic expression for the estimation of slope differences. Formulas for calculations of the required sample…

  6. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution before such a mark-recapture effort is initiated, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  7. A novel in situ method for sampling urban soil dust: Particle size distribution, trace metal concentrations, and stable lead isotopes

    International Nuclear Information System (INIS)

    In this study, a novel in situ sampling method was utilized to investigate the concentrations of trace metals and Pb isotope compositions among different particle size fractions in soil dust, bulk surface soil, and corresponding road dust samples collected within an urban environment. The aim of the current study was to evaluate the feasibility of using soil dust samples to determine trace metal contamination and potential risks in urban areas in comparison with related bulk surface soil and road dust. The results of total metal loadings and Pb isotope ratios revealed that soil dust is more sensitive than bulk surface soil to anthropogenic contamination in urban areas. The new in situ method is effective at collecting different particle size fractions of soil dust from the surface of urban soils, and soil dust is a critical indicator of anthropogenic contamination and potential human exposure in urban settings. Highlights: a novel in situ sampling method for soil dust was proposed; different particle size fractions of soil dust and bulk soil were studied; soil dust is critical for understanding anthropogenic pollution in urban areas; soil dust can be useful to estimate potential human exposure. Soil dust collected by this novel in situ sampling method can provide critical information about contamination and potential human exposure in urban environments

  8. Optimal sample size for predicting viability of cabbage and radish seeds based on near infrared spectra of single seeds

    DEFF Research Database (Denmark)

    Shetty, Nisha; Min, Tai-Gi; Gislum, René; Olesen, Merete Halkjær; Boelt, Birte

    2011-01-01

    The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage and radish data. The misclassification rates at the optimal sample size were 8%, 6% and 7% for cabbage and 3%, 3% and 2% for radish, respectively, for the random method (averaged for 10 iterations), DUPLEX and CADEX algorithms. This was similar to the misclassification rate of 6% and 2% for cabbage and

  9. A numerical simulation method for calculation of linear attenuation coefficients of unidentified sample materials in routine gamma ray spectrometry

    Directory of Open Access Journals (Sweden)

    Badawi Mohamed S.

    2015-01-01

    Full Text Available When using gamma ray spectrometry for radioactivity analysis of environmental samples (such as soil, sediment or ash of a living organism), relevant linear attenuation coefficients should be known - in order to calculate self-absorption in the sample bulk. This parameter is additionally important since the unidentified samples are normally different in composition and density from the reference ones (the latter being e. g. liquid sources, commonly used for detection efficiency calibration in radioactivity monitoring). This work aims at introducing a numerical simulation method for the calculation of linear attenuation coefficients without the use of a collimator. The method is primarily based on calculations of the effective solid angles - compound parameters accounting for the emission and detection probabilities, as well as for the source-to-detector geometrical configuration. The efficiency transfer principle and average path lengths through the samples themselves are employed, too. The results obtained are compared with those from the NIST-XCOM database; close agreement confirms the validity of the numerical simulation approach.
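
    For context, a commonly used slab-geometry self-absorption correction that consumes such a linear attenuation coefficient is sketched below; this is a textbook approximation, not the effective-solid-angle method of the paper, and the numerical values are placeholders.

      import numpy as np

      def self_absorption_factor(mu_linear, thickness_cm):
          """Average attenuation factor for a uniform slab source viewed end-on.

          mu_linear    : linear attenuation coefficient of the sample (1/cm)
          thickness_cm : sample thickness along the detector axis (cm)
          """
          x = mu_linear * thickness_cm
          return (1.0 - np.exp(-x)) / x

      # Placeholder numbers: mu = 0.2 cm^-1 at some gamma energy, 2 cm thick sample
      f = self_absorption_factor(0.2, 2.0)
      print(f"multiply the reference efficiency by {f:.3f} "
            f"(or scale the computed activity by {1.0 / f:.3f})")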

  10. Effect of size and heterogeneity of samples on biomarker discovery: synthetic and real data assessment.

    OpenAIRE

    Barbara Di Camillo; Tiziana Sanavia; Matteo Martini; Giuseppe Jurman; Francesco Sambo; Annalisa Barla; Margherita Squillario; Cesare Furlanello; Gianna Toffolo; Claudio Cobelli

    2012-01-01

    MOTIVATION: The identification of robust lists of molecular biomarkers related to a disease is a fundamental step for early diagnosis and treatment. However, methodologies for the discovery of biomarkers using microarray data often provide results with limited overlap. These differences are imputable to 1) dataset size (few subjects with respect to the number of features); 2) heterogeneity of the disease; 3) heterogeneity of experimental protocols and computational pipelines employed in the a...

  11. RISK-ASSESSMENT PROCEDURES AND ESTABLISHING THE SIZE OF SAMPLES FOR AUDITING FINANCIAL STATEMENTS

    Directory of Open Access Journals (Sweden)

    Daniel Botez

    2014-12-01

    Full Text Available In auditing financial statements, the procedures for assessing risks and calculating materiality differ from one auditor to another, depending on audit firm policy or the guidance of professional bodies. All, however, refer to the International Standards on Auditing, ISA 315, “Identifying and assessing the risks of material misstatement through understanding the entity and its environment”, and ISA 320, “Materiality in planning and performing an audit”. Drawing on the specific practices of auditors in Romania, the article presents worked examples of these aspects, covering the evaluation of general inherent risk, specific inherent risk and control risk, and the calculation of materiality.

  12. RISK-ASSESSMENT PROCEDURES AND ESTABLISHING THE SIZE OF SAMPLES FOR AUDITING FINANCIAL STATEMENTS

    OpenAIRE

    Daniel Botez

    2014-01-01

    In auditing financial statements, the procedures for assessing risks and calculating materiality differ from one auditor to another, depending on audit firm policy or the guidance of professional bodies. All, however, refer to the International Standards on Auditing, ISA 315, “Identifying and assessing the risks of material misstatement through understanding the entity and its environment”, and ISA 320, “Materiality in planning and performing an audit”. On the basis of specific practices au...

  13. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    International Nuclear Information System (INIS)

    Recent research has shown that the yield strength of metals increases steeply with decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation sources and the dislocation pile-up length in the single-crystal micro-pillars. A Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the numbers of dislocation sources and pile-ups are significant factors in the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and it can give a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy
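
    For reference, a "Hall–Petch-type relation" has the classical form

      \sigma_y = \sigma_0 + k \, d^{-1/2}

    where sigma_0 and k are material constants and d is the characteristic length scale; in the setting of the abstract this scale is set by the dislocation source length rather than the grain size (our reading of the abstract, not a quoted equation from the paper).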

  14. Active measurement of radon and thoron exhalation from soil and building materials samples and effect of grain size

    International Nuclear Information System (INIS)

    Radon (222Rn) and thoron (220Rn) emitted from soil and building materials are considered primary sources of inhalation dose to a person in the indoor environment. In view of this, experiments have been carried out to determine radon and thoron emission from samples of soil and building materials using the BARC-developed smart radon and thoron monitor. The samples subjected to analysis included soil, sand, cement, flyash, POP, snewcem, lime powder, chalk putty and wallputty. Each sample was kept in a leak-tight metal chamber connected to a radon/thoron monitor to measure radon and thoron concentrations at different time intervals. Thoron being short-lived (half-life of 55.6 s), a minimum thickness of building material was maintained such that the thoron surface exhalation rate would be independent of sample size. In the case of radon, the air volume of the set-up was kept sufficient to make the possible back-diffusion effect negligible. The influence of grain size on radon and thoron exhalation rates in soil and fly ash samples has been investigated by measuring the packing density and percentage porosity of the samples. (author)

  15. Calculating the free energy of transfer of small solutes into a model lipid membrane: Comparison between metadynamics and umbrella sampling

    Science.gov (United States)

    Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo; Monticelli, Luca; Rossi, Giulia

    2015-10-01

    We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.
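
    A minimal sketch of one standard way to estimate statistical uncertainty from correlated simulation data, block averaging, is shown below; it is a generic illustration, not the specific error-estimation procedure proposed in the paper, and the synthetic series is a stand-in for a real trajectory observable.

      import numpy as np

      def block_average_error(samples, n_blocks=10):
          """Mean and standard error estimated from block averages.

          Dividing a correlated time series (e.g. an instantaneous free-energy
          estimate along a trajectory) into blocks longer than the correlation
          time gives approximately independent block means.
          """
          samples = np.asarray(samples)
          usable = len(samples) - len(samples) % n_blocks
          blocks = samples[:usable].reshape(n_blocks, -1).mean(axis=1)
          return blocks.mean(), blocks.std(ddof=1) / np.sqrt(n_blocks)

      # Synthetic correlated series standing in for a simulation observable
      rng = np.random.default_rng(1)
      noise = np.convolve(rng.normal(size=20000), np.ones(50) / 50, mode="same")
      mean, err = block_average_error(noise + 3.0)
      print(f"estimate = {mean:.3f} +/- {err:.3f}")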

  16. Calculating the free energy of transfer of small solutes into a model lipid membrane: Comparison between metadynamics and umbrella sampling

    International Nuclear Information System (INIS)

    We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time

  17. Calculating the free energy of transfer of small solutes into a model lipid membrane: Comparison between metadynamics and umbrella sampling

    Energy Technology Data Exchange (ETDEWEB)

    Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo; Rossi, Giulia, E-mail: giulia.rossi@gmail.com [Physics Department, University of Genoa and CNR-IMEM, Via Dodecaneso 33, 16146 Genoa (Italy); Monticelli, Luca [Bases Moléculaires et Structurales des Systèmes Infectieux (BMSSI), CNRS UMR 5086, 7 Passage du Vercors, 69007 Lyon (France)

    2015-10-14

    We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.

  18. Item Characteristic Curve Parameters: Effects of Sample Size on Linear Equating.

    Science.gov (United States)

    Ree, Malcom James; Jensen, Harald E.

    By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
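
    For reference, Birnbaum's three-parameter logistic model gives the probability of a correct response to item i as a function of ability theta,

      P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}

    where a_i, b_i and c_i are the discrimination, difficulty and guessing parameters (the exponent is sometimes written with an additional scaling constant D of about 1.7); this is the standard form of the model, quoted here as background rather than from the study itself.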

  19. The effect of sample size on fresh plasma thromboplastin ISI determination

    DEFF Research Database (Denmark)

    Poller, L; Van Den Besselaar, A M; Jespersen, J; Tripodi, A; Houghton, D

    1999-01-01

    reduced progressively by a computer program which generated random numbers to provide 1000 different selections for each reduced sample at each participant laboratory. Results were compared with those of the full set of 20 normal and 60 coumarin plasma calibrations. With the human reagent, 20 coumarins...

  20. Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests

    Science.gov (United States)

    Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine

    2012-01-01

    Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…

  1. SU-E-T-586: Field Size Dependence of Output Factor for Uniform Scanning Proton Beams: A Comparison of TPS Calculation, Measurement and Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and to compare it among TPS calculation, measurements and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes between 2.5 cm and 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of the spread-out Bragg peak, normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the FLUKA code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to a 3 cm diameter field for a large-range proton beam. The XiO TPS predicted the field size factor relatively well at large field sizes, but could differ from measurements by 5% or more for small-field, large-range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: Output factor can vary largely with field size and needs to be accounted for to ensure accurate proton beam delivery. This is especially important for small-field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases

  2. Chemical Size Distribution of Suburban Aerosol Sampled in Prague 2008 Using Humidity Controlled Inlets

    Czech Academy of Sciences Publication Activity Database

    Štefancová, Lucia; Schwarz, Jaroslav; Maenhaut, W.; Chi, X.; Smolík, Jiří

    Prague : Orgit, 2009 - (Smolík, J.; O'Dowd, C.), s. 155-158 ISBN 978-80-02-12161-2. [International Conference Nucleation and Atmospheric Aerosols /18./. Prague (CZ), 10.08.2009-14.08.2009] R&D Projects: GA MŠk OC 106; GA MŠk ME 941 Institutional research plan: CEZ:AV0Z40720504 Keywords : mass-size distribution * chemical composition * atmospheric aerosols Subject RIV: CF - Physical ; Theoretical Chemistry http://www.icnaa.cz/

  3. Size selectivity of standardized multimesh gillnets in sampling coarse European species

    Czech Academy of Sciences Publication Activity Database

    Prchalová, Marie; Kubečka, Jan; Říha, Milan; Mrkvička, Tomáš; Vašek, Mojmír; Jůza, Tomáš; Kratochvíl, Michal; Peterka, Jiří; Draštík, Vladislav; Křížek, J.

    2009-01-01

    Roč. 96, č. 1 (2009), s. 51-57. ISSN 0165-7836. [Fish Stock Assessment Methods for Lakes and Reservoirs: Towards the true picture of fish stock. České Budějovice, 11.09.2007-15.09.2007] R&D Projects: GA AV ČR(CZ) 1QS600170504; GA ČR(CZ) GA206/07/1392 Institutional research plan: CEZ:AV0Z60170517 Keywords : gillnet * seine * size selectivity * roach * perch * rudd Subject RIV: EH - Ecology, Behaviour Impact factor: 1.531, year: 2009

  4. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    International Nuclear Information System (INIS)

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine correlation between organ dose and SSDE. Organ dose correlation factors (CFSSDEorgan) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CFSSDEorgan were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg), and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CFSSDEorgan values were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organ/tissue that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1–0.4) for both the chest and abdominopelvic regions. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CFSSDEorgan, was compared to previously published pediatric patient doses that
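
    A minimal sketch of the dose-estimation step described above is given below; the CTDIvol value, size conversion factor, and correlation factor are placeholder numbers, not values from the study.

      def estimate_organ_dose(ctdi_vol_mgy, size_conversion_factor, cf_ssde_organ):
          """Estimate patient organ dose from SSDE.

          SSDE = CTDIvol * size-dependent conversion factor (looked up from the
          patient's effective diameter in AAPM-style tables), and the organ dose
          is SSDE scaled by the organ-specific correlation factor CF_SSDE_organ.
          """
          ssde = ctdi_vol_mgy * size_conversion_factor
          return ssde * cf_ssde_organ

      # Placeholder example: CTDIvol 5 mGy, conversion factor 2.0 for a small child,
      # and a correlation factor near unity for an organ fully covered by the scan
      print(estimate_organ_dose(5.0, 2.0, 1.1), "mGy")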

  5. Calculating structural and geometrical parameters by laboratory experiments and X-Ray microtomography: a comparative study applied to a limestone sample

    Directory of Open Access Journals (Sweden)

    L. Luquot

    2015-11-01

    Full Text Available The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, surface-to-volume ratio, pore size distribution, permeability, tortuosity and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. The strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory one. We observed that pore size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical and petrophysical parameters provides results similar to those measured in the laboratory but with much shorter durations.

  6. Calculating structural and geometrical parameters by laboratory experiments and X-Ray microtomography: a comparative study applied to a limestone sample

    Science.gov (United States)

    Luquot, Linda; Hebert, Vanessa; Rodriguez, Olivier

    2016-04-01

    The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, surface-to-volume ratio, pore size distribution, permeability, tortuosity and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. The strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory one. We observed that pore size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical and petrophysical parameters provides results similar to those measured in the laboratory but with much shorter durations.

  7. Calculating structural and geometrical parameters by laboratory measurements and X-ray microtomography: a comparative study applied to a limestone sample before and after a dissolution experiment

    Science.gov (United States)

    Luquot, Linda; Hebert, Vanessa; Rodriguez, Olivier

    2016-03-01

    The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, pore-size distribution, tortuosity, and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. Strong dissolution process occurred during the percolation, promoting a wormhole formation. This strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory measurement. We observed that pore-size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical, and petrophysical parameters provides similar results to those measured at the laboratory but with much shorter durations.

  8. Effect of sample sizes on characteristics of cyclic crack resistance in heat resistant steels. Communication 1

    International Nuclear Information System (INIS)

    The effect of specimen size on the cyclic crack resistance of 15Kh2NMFA, 15Kh2MFA(1), 15Kh2MFA(2) and 08Kh18N10T steels, which represent a wide class of structural materials in terms of their mechanical properties and crack resistance characteristics, is studied. Results of the study are presented. It is established that an increase of the specimen thickness from 25 to 150 mm does not affect the K_th and K_fc characteristics and the fatigue crack propagation rate only for the high-strength embrittled 15Kh2MFA(2) steel. The effect of specimen size on the regularities of fatigue crack propagation for the more ductile 15Kh2NMFA, 15Kh2MFA(1) and 08Kh18N10T steels is ambiguous when the experimental data are represented in da/dN versus K_Imax coordinates, and may manifest itself within the whole range of K_Imax variation from K_th to K_Q^f

  9. Ultrasonic detection and sizing of cracks in cast stainless steel samples

    International Nuclear Information System (INIS)

    The test consisted of 15 samples of cast stainless steel, each with a weld. Some of the specimens were provided with artificially induced thermal fatigue cracks. The inspection was performed with the P-scan method. The investigations showed an improvement in recognizability relative to earlier investigations. One probe, a dual-element 45-degree longitudinal-wave probe at a low frequency of 0.5-1 MHz, gives the best results. (G.B.)

  10. Effect of model choice and sample size on statistical tolerance limits

    International Nuclear Information System (INIS)

    Statistical tolerance limits are estimates of large (or small) quantiles of a distribution, quantities which are very sensitive to the shape of the tail of the distribution. The exact nature of this tail behavior cannot be ascertained from small samples, so statistical tolerance limits are frequently computed using a statistical model chosen on the basis of theoretical considerations or prior experience with similar populations. This report illustrates the effects of such choices on the computations
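
    As a hedged illustration of how model choice and sample size enter such calculations, the sketch below computes the classical normal-model one-sided tolerance factor via the noncentral t distribution; this is standard textbook machinery, not necessarily the computation used in the report.

      import numpy as np
      from scipy import stats

      def one_sided_tolerance_factor(n, coverage=0.95, confidence=0.95):
          """k such that xbar + k*s bounds the upper `coverage` quantile
          with the stated confidence, assuming the data are normal."""
          z_p = stats.norm.ppf(coverage)
          nc = z_p * np.sqrt(n)                      # noncentrality parameter
          return stats.nct.ppf(confidence, df=n - 1, nc=nc) / np.sqrt(n)

      # Small samples inflate the factor sharply -- the model-sensitivity issue
      for n in (5, 10, 30, 100):
          print(n, round(one_sided_tolerance_factor(n), 3))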

  11. Second generation laser-heated microfurnace for the preparation of microgram-sized graphite samples

    Science.gov (United States)

    Yang, Bin; Smith, A. M.; Long, S.

    2015-10-01

    We present construction details and test results for two second-generation laser-heated microfurnaces (LHF-II) used to prepare graphite samples for Accelerator Mass Spectrometry (AMS) at ANSTO. Based on systematic studies aimed at optimising the performance of our prototype laser-heated microfurnace (LHF-I) (Smith et al., 2007 [1]; Smith et al., 2010 [2,3]; Yang et al., 2014 [4]), we have designed the LHF-II to have the following features: (i) it has a small reactor volume of 0.25 mL allowing us to completely graphitise carbon dioxide samples containing as little as 2 μg of C; (ii) it can operate over a large pressure range (0-3 bar) and so has the capacity to graphitise CO2 samples containing up to 100 μg of C; (iii) it is compact, with three valves integrated into the microfurnace body; (iv) it is compatible with our new miniaturised conventional graphitisation furnaces (MCF), also designed for small samples, and shares a common vacuum system. Early tests have shown that the extraneous carbon added during graphitisation in each LHF-II is of the order of 0.05 μg, assuming 100 pMC activity, similar to that of the prototype unit. We use a 'budget' fibre-packaged array for the diode laser with custom-built focusing optics. The use of a new infrared (IR) thermometer with a short focal length has allowed us to decrease the height of the light-proof safety enclosure. These innovations have produced a cheaper and more compact device. As with the LHF-I, feedback control of the catalyst temperature and logging of the reaction parameters are managed by a LabVIEW interface.

  12. Condition monitoring of high temperature components with sub-sized samples - HTR2008-58195

    International Nuclear Information System (INIS)

    Advanced nuclear plants are designed for long-term operation in quite demanding environments. Limited operating experience with the materials used in such plants necessitates a reliable assessment of damage and residual life of components. Non-destructive condition monitoring of damage is difficult, if not impossible, for many materials. Periodic investigation of small samples taken from well-defined locations in the plant could provide an attractive tool for damage assessment. This paper discusses possibilities for using very small samples taken from plant locations for complementary condition monitoring. Techniques such as micro/nano-indentation, micro-pillar compression, micro-bending, small punch and thin strip testing can be used for the determination of local mechanical properties. Advanced preparation techniques such as focused ion beam (FIB) milling allow the preparation of samples from these small volumes for micro-structural analyses with transmission electron microscopy (TEM) and advanced X-ray synchrotron techniques. Modeling techniques (e.g. dislocation dynamics, DD) can provide a quantitative link between microstructure and mechanical properties. Using examples from ferritic oxide-dispersion-strengthened materials, the DD approach is highlighted as a route to understanding component life assessment. (authors)

  13. Aerosols and their sources at Summit Greenland - First results of continuous size- and time-resolved sampling

    Science.gov (United States)

    VanCuren, Richard A.; Cahill, Thomas; Burkhart, John; Barnes, David; Zhao, Yongjing; Perry, Kevin; Cliff, Steven; McConnell, Joe

    2012-06-01

    An ongoing program to continuously collect time- and size-resolved aerosol samples from ambient air at Summit Station, Greenland (72.6 N, 38.5 W) is building a long-term database to both record individual transport events and provide long-term temporal context for past and future intensive studies at the site. As a "first look" at this data set, analysis of samples collected from summer 2005 to spring 2006 demonstrates the utility of continuous sampling to characterize air masses over the ice pack, document individual aerosol transport events, and develop a long-term record. Seven source-related aerosol types were identified in this analysis: Asian dust, Saharan dust, industrial combustion, marine aerosol with combustion tracers, fresh coarse volcanic tephra, aged volcanic plume with fine tephra and sulfate, and the well-mixed background "Arctic haze". The Saharan dust is a new discovery; the other types are consistent with those reported from previous work using snow pits and intermittent ambient air sampling during intensive study campaigns. Continuous sampling complements the fundamental characterization of Greenland aerosols developed in intensive field programs by providing a year-round record of aerosol size and composition at all temporal scales relevant to ice core analysis, ranging from individual deposition events and seasonal cycles, to a record of inter-annual variability of aerosols from both natural and anthropogenic sources.

  14. Classifier performance estimation under the constraint of a finite sample size: resampling schemes applied to neural network classifiers.

    Science.gov (United States)

    Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir

    2008-01-01

    In a practical classifier design problem the sample size is limited, and the available finite sample needs to be used both to design a classifier and to predict the classifier's performance for the true population. Since a larger sample is more representative of the population, it is advantageous to design the classifier with all the available cases, and to use a resampling technique for performance prediction. We conducted a Monte Carlo simulation study to compare the ability of different resampling techniques in predicting the performance of a neural network (NN) classifier designed with the available sample. We used the area under the receiver operating characteristic curve as the performance index for the NN classifier. We investigated resampling techniques based on the cross-validation, the leave-one-out method, and three different types of bootstrapping, namely, the ordinary, .632, and .632+ bootstrap. Our results indicated that, under the study conditions, there can be a large difference in the accuracy of the prediction obtained from different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited data set. PMID:18234468
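
    A minimal sketch of the resubstitution-versus-resampling comparison on a small synthetic sample is given below (scikit-learn is assumed; the data set, network size, and fold count are illustrative, and the .632/.632+ bootstrap variants studied in the paper are not reproduced here).

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.metrics import roc_auc_score

      # Small sample with a relatively large feature space, as in the study setting
      X, y = make_classification(n_samples=60, n_features=20, n_informative=5,
                                 random_state=0)

      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

      # Optimistic resubstitution estimate: train and test on the same cases
      resub_auc = roc_auc_score(y, clf.fit(X, y).predict_proba(X)[:, 1])

      # Resampling-based estimate: 10-fold cross-validated AUC
      cv_auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()

      print(f"resubstitution AUC = {resub_auc:.2f}, cross-validated AUC = {cv_auc:.2f}")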

  15. Endocranial volume of Australopithecus africanus: new CT-based estimates and the effects of missing data and small sample size.

    Science.gov (United States)

    Neubauer, Simon; Gunz, Philipp; Weber, Gerhard W; Hublin, Jean-Jacques

    2012-04-01

    Estimation of endocranial volume in Australopithecus africanus is important in interpreting early hominin brain evolution. However, the number of individuals available for investigation is limited and most of these fossils are, to some degree, incomplete and/or distorted. Uncertainties of the required reconstruction ('missing data uncertainty') and the small sample size ('small sample uncertainty') both potentially bias estimates of the average and within-group variation of endocranial volume in A. africanus. We used CT scans, electronic preparation (segmentation), mirror-imaging and semilandmark-based geometric morphometrics to generate and reconstruct complete endocasts for Sts 5, Sts 60, Sts 71, StW 505, MLD 37/38, and Taung, and measured their endocranial volumes (EV). To get a sense of the reliability of these new EV estimates, we then used simulations based on samples of chimpanzees and humans to: (a) test the accuracy of our approach, (b) assess missing data uncertainty, and (c) appraise small sample uncertainty. Incorporating missing data uncertainty of the five adult individuals, A. africanus was found to have an average adult endocranial volume of 454-461 ml with a standard deviation of 66-75 ml. EV estimates for the juvenile Taung individual range from 402 to 407 ml. Our simulations show that missing data uncertainty is small given the missing portions of the investigated fossils, but that small sample sizes are problematic for estimating species average EV. It is important to take these uncertainties into account when different fossil groups are being compared. PMID:22365336

  16. Sediment Grain-Size and Loss-on-Ignition Analyses from 2002 Englebright Lake Coring and Sampling Campaigns

    Science.gov (United States)

    Snyder, Noah P.; Allen, James R.; Dare, Carlin; Hampton, Margaret A.; Schneider, Gary; Wooley, Ryan J.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.

    2004-01-01

    This report presents sedimentologic data from three 2002 sampling campaigns conducted in Englebright Lake on the Yuba River in northern California. This work was done to assess the properties of the material deposited in the reservoir between completion of Englebright Dam in 1940 and 2002, as part of the Upper Yuba River Studies Program. Included are the results of grain-size-distribution and loss-on-ignition analyses for 561 samples, as well as an error analysis based on replicate pairs of subsamples.

  17. Radiocarbon dating of milligram-size samples using gas proportional counters: an evaluation of precision and of design parameters

    International Nuclear Information System (INIS)

    Radiocarbon dating parameters, such as the instrumental techniques used, the dating precision achieved, sample size, and the cost and availability of equipment, are considered, with particular attention to the merits of small gas proportional counting systems. It is shown that small counters capable of handling 10-100 mg of carbon are a viable proposition in terms of achievable precision and in terms of sample turnover, if some 10 mini-counters are operated simultaneously within the same shield. After consideration of the factors affecting the performance of a small gas proportional system it is concluded that an automatic, labour-saving, cost-effective and efficient carbon dating system, based on some sixteen 10 ml-size counters operating in parallel, could be built using state-of-the-art knowledge and components

  18. IN SITU NON-INVASIVE SOIL CARBON ANALYSIS: SAMPLE SIZE AND GEOSTATISTICAL CONSIDERATIONS

    International Nuclear Information System (INIS)

    I discuss a new approach for quantitative carbon analysis in soil based on INS. Although this INS method is not simple, it offers critical advantages not available with other newly emerging modalities. The key advantages of the INS system include the following: (1) It is a non-destructive method, i.e., no samples of any kind are taken. A neutron generator placed above the ground irradiates the soil, stimulating carbon characteristic gamma-ray emission that is counted by a detection system also placed above the ground. (2) The INS system can undertake multielemental analysis, so expanding its usefulness. (3) It can be used either in static or scanning modes. (4) The volume sampled by the INS method is large with a large footprint; when operating in a scanning mode, the sampled volume is continuous. (5) Except for a moderate initial cost of about $100,000 for the system, no additional expenses are required for its operation over two to three years after which a NG has to be replenished with a new tube at an approximate cost of $10,000, this regardless of the number of sites analyzed. In light of these characteristics, the INS system appears invaluable for monitoring changes in the carbon content in the field. For this purpose no calibration is required; by establishing a carbon index, changes in carbon yield can be followed with time in exactly the same location, thus giving a percent change. On the other hand, with calibration, it can be used to determine the carbon stock in the ground, thus estimating the soil's carbon inventory. However, this requires revising the standard practices for deciding upon the number of sites required to attain a given confidence level, in particular for the purposes of upward scaling. Then, geostatistical considerations should be incorporated in considering properly the averaging effects of the large volumes sampled by the INS system that would require revising standard practices in the field for determining the number of spots to be

  19. IN SITU NON-INVASIVE SOIL CARBON ANALYSIS: SAMPLE SIZE AND GEOSTATISTICAL CONSIDERATIONS.

    Energy Technology Data Exchange (ETDEWEB)

    WIELOPOLSKI, L.

    2005-04-01

    I discuss a new approach for quantitative carbon analysis in soil based on INS. Although this INS method is not simple, it offers critical advantages not available with other newly emerging modalities. The key advantages of the INS system include the following: (1) It is a non-destructive method, i.e., no samples of any kind are taken. A neutron generator placed above the ground irradiates the soil, stimulating carbon characteristic gamma-ray emission that is counted by a detection system also placed above the ground. (2) The INS system can undertake multielemental analysis, so expanding its usefulness. (3) It can be used either in static or scanning modes. (4) The volume sampled by the INS method is large with a large footprint; when operating in a scanning mode, the sampled volume is continuous. (5) Except for a moderate initial cost of about $100,000 for the system, no additional expenses are required for its operation over two to three years after which a NG has to be replenished with a new tube at an approximate cost of $10,000, this regardless of the number of sites analyzed. In light of these characteristics, the INS system appears invaluable for monitoring changes in the carbon content in the field. For this purpose no calibration is required; by establishing a carbon index, changes in carbon yield can be followed with time in exactly the same location, thus giving a percent change. On the other hand, with calibration, it can be used to determine the carbon stock in the ground, thus estimating the soil's carbon inventory. However, this requires revising the standard practices for deciding upon the number of sites required to attain a given confidence level, in particular for the purposes of upward scaling. Then, geostatistical considerations should be incorporated in considering properly the averaging effects of the large volumes sampled by the INS system that would require revising standard practices in the field for determining the number of spots to

  20. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085

  1. A Rounding by Sampling Approach to the Minimum Size k-Arc Connected Subgraph Problem

    CERN Document Server

    Laekhanukit, Bundit; Singh, Mohit

    2012-01-01

    In the k-arc connected subgraph problem, we are given a directed graph G and an integer k, and the goal is to find a subgraph of minimum cost such that there are at least k arc-disjoint paths between any pair of vertices. We give a simple (1 + 1/k)-approximation for the unweighted variant of the problem, where all arcs of G have the same cost. This improves on the (1 + 2/k)-approximation of Gabow et al. [GGTW09]. Similar to the 2-approximation algorithm for this problem [FJ81], our algorithm simply takes the union of a k in-arborescence and a k out-arborescence. The main difference is in the selection of the two arborescences. Here, inspired by the recent applications of the rounding-by-sampling method (see e.g. [AGM+ 10, MOS11, OSS11, AKS12]), we select the arborescences randomly by sampling from a distribution on unions of k arborescences that is defined based on an extreme point solution of the linear programming relaxation of the problem. In the analysis, we crucially utilize the sparsity property of the ext...

  2. RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes

    Directory of Open Access Journals (Sweden)

    Danny J. Kelly

    2005-01-01

    Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of gene signals derived from low RNA amounts with gene signals obtained from standard RNA was poor for genes of low to moderate abundance. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference-sample-based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.

  3. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling's T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
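
    A rough one-sample sketch of a diagonal Hotelling-type statistic with a simple variance-shrinkage step is given below; the shrinkage target, intensity, and synthetic data are illustrative choices, not the estimator or null approximation proposed in the paper.

      import numpy as np

      def diagonal_hotelling_shrunk(X, mu0, alpha=0.3):
          """One-sample diagonal Hotelling-type statistic for p >> n data.

          The usual pooled covariance is singular when p > n, so only the
          diagonal (per-feature variances) is used, each shrunk toward the
          median variance with illustrative intensity `alpha`.
          """
          n, p = X.shape
          xbar = X.mean(axis=0)
          s2 = X.var(axis=0, ddof=1)
          s2_shrunk = (1 - alpha) * s2 + alpha * np.median(s2)   # shrinkage step
          return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

      # Toy "large p, small n" gene-set example
      rng = np.random.default_rng(0)
      X = rng.normal(size=(8, 200))          # n = 8 samples, p = 200 genes
      print(diagonal_hotelling_shrunk(X, mu0=np.zeros(200)))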

  4. Sample size requirements for in situ vegetation and substrate classifications in shallow, natural Nebraska Lakes

    Science.gov (United States)

    Paukert, C.P.; Willis, D.W.; Holland, R.S.

    2002-01-01

    We assessed the precision of visual estimates of vegetation and substrate along transects in 15 shallow, natural Nebraska lakes. Vegetation type (submergent or emergent), vegetation density (sparse, moderate, or dense), and substrate composition (percentage sand, muck, and clay; to the nearest 10%) were estimated at 25-70 sampling sites per lake by two independent observers. Observer agreement for vegetation type was 92%. Agreement ranged from 62.5% to 90.1% for substrate composition. Agreement was also high (72%) for vegetation density estimates. The relatively high agreement between estimates was likely attributable to the homogeneity of the lake habitats. Nearly 90% of the substrate sites were classified as 0% clay, and over 68% as either 0% or 100% sand. When habitats were homogeneous, less than 40 sampling sites per lake were required for 95% confidence that habitat composition was within 10% of the true mean, and over 100 sites were required when habitats were heterogeneous. Our results suggest that relatively high precision is attainable for vegetation and substrate mapping in shallow, natural lakes.

  5. Wind tunnel study of twelve dust samples by large particle size

    Science.gov (United States)

    Shannak, B.; Corsmeier, U.; Kottmeier, Ch.; Al-azab, T.

    2014-12-01

    Due to the lack of data for large dust and sand particles, the fluid-dynamic characteristics, and hence the collection efficiencies, of twelve different dust samplers have been experimentally investigated. Wind tunnel tests were carried out at wind velocities ranging from 1 up to 5.5 m/s. Polystyrene pellets (STYRO beads, or polystyrene spheres) of 0.5 and 1 mm in diameter were used instead of sand or dust as the large solid particles. The results demonstrate that the collection efficiency is relatively acceptable for only eight of the tested samplers, lying between 60 and 80% depending on the wind velocity and particle size. These samplers are: the Cox Sand Catcher (CSC), the British Standard Directional Dust Gauge (BSD), the Big Spring Number Eight (BSNE), the Suspended Sediment Trap (SUSTRA), the Modified Wilson and Cooke (MWAC), the Wedge Dust Flux Gauge (WDFG), the Model Series Number 680 (SIERRA) and the Pollet Catcher (POLCA). Generally, they can be tentatively recommended as suitable dust samplers, but with collection errors of 20 up to 40%. However, the BSNE shows the best performance, with a catching error of about 20%, and can be selected with caution as a suitable dust sampler. Quite the contrary, the other four tested samplers, which are the Marble Dust Collector (MDCO), the United States Geological Survey (USGS), the Inverted Frisbee Sampler (IFS) and the Inverted Frisbee Shaped Collecting Bowl (IFSCB), cannot be recommended due to their very low collection efficiency of 5 up to 40%. In total, the efficiency of a sampler may be below 0.5, depending on the frictional losses (caused by the sampler geometry) in the fluid and the particle's motion, and on the intensity of airflow acceleration near the sampler inlet. Therefore, the literature data on dust are deficient and insufficient. To avoid false collection data, and hence inaccurate mass-flux modeling, the geometry of the dust sampler should be considered and further improved.

  6. Small population size of Pribilof Rock Sandpipers confirmed through distance-sampling surveys in Alaska

    Science.gov (United States)

    Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E., Jr.; Dementyev, Maksim N.; Handel, Colleen M.

    2012-01-01

    The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.

  7. Performance of analytical methods for overdispersed counts in cluster randomized trials: sample size, degree of clustering and imbalance.

    Science.gov (United States)

    Durán Pacheco, Gonzalo; Hattendorf, Jan; Colford, John M; Mäusezahl, Daniel; Smith, Thomas

    2009-10-30

    Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods on overdispersed count data has been based mostly on the comparison of results using empiric data; i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general with Bayes-HM producing less dispersed results for random-effects estimates although upward biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the GEE's coverage in small samples. Important effects arising from accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. PMID:19672840
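
    A minimal sketch of this kind of simulation is given below: overdispersed counts are generated from a negative binomial (gamma-Poisson) model with log-normal between-cluster variation and varying person-time, and the two arms are then compared with the simplest of the evaluated methods, the cluster-level t-test on rates. All parameter values (numbers of clusters, rates, dispersion, rate ratio) are placeholders, not those of the study.

      # Sketch of a two-arm cluster randomized trial with overdispersed counts,
      # analysed with a cluster-level t-test on rates (illustrative values only).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def simulate_arm(n_clusters, cluster_size, base_rate, rate_ratio, sigma_b, disp):
          rates = []
          for _ in range(n_clusters):
              # log-normal between-cluster variation of the underlying rate
              mu = base_rate * rate_ratio * np.exp(rng.normal(0.0, sigma_b))
              t_obs = rng.uniform(0.5, 1.5, size=cluster_size)      # person-time varies
              mean_counts = mu * t_obs
              # Gamma-Poisson mixture = negative binomial with dispersion "disp"
              counts = rng.poisson(rng.gamma(shape=disp, scale=mean_counts / disp))
              rates.append(counts.sum() / t_obs.sum())              # cluster-level rate
          return np.array(rates)

      control = simulate_arm(10, 30, base_rate=2.0, rate_ratio=1.0, sigma_b=0.3, disp=1.5)
      treated = simulate_arm(10, 30, base_rate=2.0, rate_ratio=0.7, sigma_b=0.3, disp=1.5)
      # Simplest of the compared analyses: a t-test on cluster-level rates.
      print(stats.ttest_ind(control, treated, equal_var=True))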

  8. Preliminary calculational analysis of the actinide samples from FP-4 exposed in the Dounreay Prototype Fast Reactor

    International Nuclear Information System (INIS)

    This report discusses the current status of results from an extensive experiment on the irradiation of selected actinides in a fast reactor. These actinides ranged from thorium to curium. They were irradiated in the core of the Dounreay Prototype Fast Reactor. Rates for depletion, transmutation, and fission-product generation were experimentally measured, and, in turn, were calculated using current cross-section and fission-yield data. Much of the emphasis is on the comparison between experimental and calculated values for both actinide and fission-product concentrations. Some of the discussion touches on the adequacy of current cross-section and fission-yield data. However, the main purposes of the report are: to collect in one place the most recent yield data, to discuss the comparisons between the experimental and calculated results, to discuss each sample that was irradiated giving details of any adjustments needed or specific problems encountered, and to give a chronology of the analysis as it pertained to the set of samples (referred to as FP-4 samples) that constitutes the most extensively irradiated and final set. The results and trends reported here, together with those discussions touching on current knowledge about cross sections and fission yields, are intended to serve as a starting point for further analysis. In general, these results are encouraging with regard to the adequacy of much of the currently available nuclear data in this region of the periodic table. But there are some cases where adjustments and improvements can be suggested. However, the application of these results in consolidating current cross-section and fission-yield data must await further analysis

  9. Influence of pH, Temperature and Sample Size on Natural and Enforced Syneresis of Precipitated Silica

    Directory of Open Access Journals (Sweden)

    Sebastian Wilhelm

    2015-12-01

    Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Two open questions are of particular interest here. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30% in comparison to measured values of 20% for natural syneresis.

  10. A solution for an inverse problem in liquid AFM: calculation of three-dimensional solvation structure on a sample surface

    CERN Document Server

    Amano, Ken-ich

    2013-01-01

    Recent frequency-modulated atomic force microscopy (FM-AFM) can measure the three-dimensional force distribution between a probe and a sample surface in liquid. The force distribution is, at present, assumed to represent the solvation structure on the sample surface, because the force distribution and the solvation structure have somewhat similar shapes. However, the force distribution is not exactly the solvation structure. To obtain the solvation structure with liquid AFM, a method for transforming the force distribution into the solvation structure is necessary. In this letter, we therefore present such a transformation method in brief. We call the method a solution to an inverse problem, because in the usual calculation process the solvation structure is obtained first and the force distribution is obtained from it. The method is formulated (mainly) by the statistical mechanics of liquids.

  11. Quantitative evaluation of size selective precipitation of Mn-doped ZnS quantum dots by size distributions calculated from UV/Vis absorbance spectra

    International Nuclear Information System (INIS)

    We demonstrate the quantitative evaluation of the sharp classification of manganese-doped zinc sulfide (ZnS:Mn) quantum dots by size selective precipitation. The particles were characterized by the direct conversion of absorbance spectra to particle size distributions (PSDs) and by high-resolution transmission electron micrographs (HRTEM). Gradual addition of a poor solvent (2-propanol) to the aqueous colloid led to the flocculation of larger particles. Though the starting suspension after synthesis had an already narrow PSD between 1.5 and 3.2 nm, different particle size fractions were subsequently isolated by the careful adjustment of the good solvent/poor solvent ratio. Moreover, because the size distributions were available for the analysis of the classification results, an in-depth understanding of the quality of the distinct classification steps could be achieved. From the PSDs of the feed, as well as the coarse and the fine fractions with their corresponding yields determined after each classification step, an optimum after the first addition of poor solvent was identified, with a maximal separation sharpness κ as high as 0.75. Only through such a quantitative evaluation of classification results, and the in-depth understanding of the relevant driving forces it provides, will a future transfer of this lab-scale post-processing to larger quantities be possible.

  12. Uncertainty in nutrient loads from tile-drained landscapes: Effect of sampling frequency, calculation algorithm, and compositing strategy

    Science.gov (United States)

    Williams, Mark R.; King, Kevin W.; Macrae, Merrin L.; Ford, William; Van Esbroeck, Chris; Brunke, Richard I.; English, Michael C.; Schiff, Sherry L.

    2015-11-01

    Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of nutrient load estimates in large, naturally drained watersheds, few studies have focused on tile-drained fields and small tile-drained headwater watersheds. The objective of this study was to quantify uncertainty in annual dissolved reactive phosphorus (DRP) and nitrate-nitrogen (NO3-N) load estimates from four tile-drained fields and two small tile-drained headwater watersheds in Ohio, USA and Ontario, Canada. High temporal resolution datasets of discharge (10-30 min) and nutrient concentration (2 h to 1 d) were collected over a 1-2 year period at each site and used to calculate a reference nutrient load. Monte Carlo simulations were used to subsample the measured data to assess the effects of sample frequency, calculation algorithm, and compositing strategy on the uncertainty of load estimates. Results showed that uncertainty in annual DRP and NO3-N load estimates was influenced by both the sampling interval and the load estimation algorithm. Uncertainty in annual nutrient load estimates increased with increasing sampling interval for all of the load estimation algorithms tested. Continuous discharge measurements and linear interpolation of nutrient concentrations yielded the least amount of uncertainty, but still tended to underestimate the reference load. Compositing strategies generally improved the precision of load estimates compared to discrete grab samples; however, they often reduced the accuracy. Based on the results of this study, we recommended that nutrient concentration be measured every 13-26 h for DRP and every 2.7-17.5 d for NO3-N in tile-drained fields and small tile-drained headwater watersheds to accurately (±10%) estimate annual loads.
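
    The sketch below illustrates the core of such an uncertainty analysis on synthetic data: a reference load is computed from a high-resolution discharge and concentration record, the concentration record is then subsampled at coarser intervals with random start times, concentrations are linearly interpolated between samples, and the spread of the resulting load errors is summarized. The synthetic series, intervals and number of Monte Carlo repetitions are illustrative assumptions, not the study's records or algorithms.

      # Sketch of a Monte Carlo subsampling experiment for nutrient load estimates.
      import numpy as np

      rng = np.random.default_rng(2)
      dt_h = 0.5                                    # 30-min discharge record
      t = np.arange(0, 365 * 24, dt_h)              # one year, in hours
      Q = 5 + 4 * np.sin(2 * np.pi * t / (24 * 30)) + rng.gamma(2.0, 0.5, t.size)   # discharge, m3/h
      C = np.clip(0.05 + 0.02 * np.sin(2 * np.pi * t / (24 * 7))
                  + rng.normal(0, 0.005, t.size), 0, None)                          # concentration, g/m3

      reference_load = np.sum(Q * C * dt_h)         # "true" annual load, grams

      def load_for_interval(interval_h):
          step = int(interval_h / dt_h)
          start = rng.integers(0, step)             # random start of the sampling scheme
          idx = np.arange(start, t.size, step)
          C_hat = np.interp(t, t[idx], C[idx])      # linear interpolation between samples
          return np.sum(Q * C_hat * dt_h)

      for interval in (13, 26, 24 * 7):             # hours between concentration samples
          errors = [100 * (load_for_interval(interval) - reference_load) / reference_load
                    for _ in range(200)]
          print(f"{interval:4d} h sampling: bias {np.mean(errors):+5.2f}%, spread {np.std(errors):.2f}%")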

  13. Organic composition of size segregated atmospheric particulate matter, during summer and winter sampling campaigns at representative sites in Madrid, Spain

    Science.gov (United States)

    Mirante, Fátima; Alves, Célia; Pio, Casimiro; Pindado, Oscar; Perez, Rosa; Revuelta, M.a. Aranzazu; Artiñano, Begoña

    2013-10-01

    Madrid, the largest city in Spain, has some unique air pollution problems, such as emissions from residential coal burning, a huge vehicle fleet and frequent African dust outbreaks, along with a lack of industrial emissions. The chemical composition of particulate matter (PM) was studied during summer and winter sampling campaigns, conducted in order to obtain size-segregated information at two different urban sites (roadside and urban background). PM was sampled with high volume cascade impactors with 4 stages: 10-2.5, 2.5-1, 1-0.5 and < 0.5 μm. Samples were solvent extracted and organic compounds were identified and quantified by GC-MS. Alkanes, polycyclic aromatic hydrocarbons (PAHs), alcohols and fatty acids were chromatographically resolved. The PM1-2.5 was the fraction with the highest mass percentage of organics. Acids were the organic compounds that dominated all particle size fractions. Different organic compounds presented apparently different seasonal characteristics, reflecting distinct emission sources, such as vehicle exhausts and biogenic sources. The benzo[a]pyrene equivalent concentrations were lower than 1 ng m-3. The estimated carcinogenic risk is low.

  14. Comparing within-subject classification and regularization methods in fMRI for large and small sample sizes.

    Science.gov (United States)

    Churchill, Nathan W; Yourganov, Grigori; Strother, Stephen C

    2014-09-01

    In recent years, a variety of multivariate classifier models have been applied to fMRI, with different modeling assumptions. When classifying high-dimensional fMRI data, we must also regularize to improve model stability, and the interactions between classifier and regularization techniques are still being investigated. Classifiers are usually compared on large, multisubject fMRI datasets. However, it is unclear how classifier/regularizer models perform for within-subject analyses, as a function of signal strength and sample size. We compare four standard classifiers: Linear and Quadratic Discriminants, Logistic Regression and Support Vector Machines. Classification was performed on data in the linear kernel (covariance) feature space, and classifiers are tuned with four commonly-used regularizers: Principal Component and Independent Component Analysis, and penalization of kernel features using L₁ and L₂ norms. We evaluated prediction accuracy (P) and spatial reproducibility (R) of all classifier/regularizer combinations on single-subject analyses, over a range of three different block task contrasts and sample sizes for a BOLD fMRI experiment. We show that the classifier model has a small impact on signal detection, compared to the choice of regularizer. PCA maximizes reproducibility and global SNR, whereas Lp -norms tend to maximize prediction. ICA produces low reproducibility, and prediction accuracy is classifier-dependent. However, trade-offs in (P,R) depend partly on the optimization criterion, and PCA-based models are able to explore the widest range of (P,R) values. These trends are consistent across task contrasts and data sizes (training samples range from 6 to 96 scans). In addition, the trends in classifier performance are consistent for ROI-based classifier analyses. PMID:24639383

  15. Comparison of three tests of homogeneity of odds ratios in multicenter trials with unequal sample sizes within and among centers

    Directory of Open Access Journals (Sweden)

    Ayatollahi Seyyed Mohammad Taghi

    2011-04-01

    Full Text Available Abstract Background Mixed effects logistic models have become a popular method for analyzing multicenter clinical trials with binomial data. However, the statistical properties of these models for testing homogeneity of odds ratios under various conditions, such as within-center and among-centers inequality, are still unknown and not yet compared with those of commonly used tests of homogeneity. Methods We evaluated the effect of within-center and among-centers inequality on the empirical power and type I error rate of the three homogeneity tests of odds ratios including likelihood ratio (LR test of a mixed logistic model, DerSimonian-Laird (DL statistic and Breslow-Day (BD test by simulation study. Moreover, the impacts of number of centers (K, number of observations in each center and amount of heterogeneity were investigated by simulation. Results As compared with the equal sample size design, the power of the three tests of homogeneity will decrease if the same total sample size, which can be allocated equally within one center or among centers, is allocated unequally. The average reduction in the power of these tests was up to 11% and 16% for within-center and among-centers inequality, respectively. Moreover, in this situation, the ranking of the power of the homogeneity tests was BD≥DL≥LR and the power of these tests increased with increasing K. Conclusions This study shows that the adverse effect of among-centers inequality on the power of the homogeneity tests was stronger than that of within-center inequality. However, the financial limitations make the use of unequal sample size designs inevitable in multicenter trials. Moreover, although the power of the BD is higher than that of the LR when K≤6, the proposed mixed logistic model is recommended when K≥8 due to its practical advantages.

  16. A log-linear model approach to estimation of population size using the line-transect sampling method

    Science.gov (United States)

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.

  17. Hierarchical distance-sampling models to estimate population size and habitat-specific abundance of an island endemic

    Science.gov (United States)

    Sillett, Scott T.; Chandler, Richard B.; Royle, J. Andrew; Kéry, Marc; Morrison, Scott A.

    2012-01-01

    Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring respectively, considerably lower than a previously published but statistically problematic estimate of 12 500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural

  18. The inefficiency of re-weighted sampling and the curse of system size in high order path integration

    CERN Document Server

    Ceriotti, Michele; Riordan, Oliver; Manolopoulos, David E

    2011-01-01

    Computing averages over a target probability density by statistical re-weighting of a set of samples with a different distribution is a strategy which is commonly adopted in fields as diverse as atomistic simulation and finance. Here we present a very general analysis of the accuracy and efficiency of this approach, highlighting some of its weaknesses. We then give an example of how our results can be used, specifically to assess the feasibility of high-order path integral methods. We demonstrate that the most promising of these techniques -- which is based on re-weighted sampling -- is bound to fail as the size of the system is increased, because of the exponential growth of the statistical uncertainty in the re-weighted average.
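
    The statistical point can be reproduced with a toy example: samples drawn from one distribution are re-weighted to represent a slightly different one, and the Kish effective sample size of the weights is tracked as the number of degrees of freedom grows. The Gaussian densities and the per-coordinate mismatch below are assumptions chosen purely for illustration, not the path-integral estimators analysed in the paper.

      # Toy illustration of statistical re-weighting: the effective sample size
      # collapses roughly exponentially with the number of degrees of freedom.
      import numpy as np

      rng = np.random.default_rng(3)
      n_samples = 5000
      shift = 0.3                                  # per-coordinate mismatch (assumed)

      for dim in (1, 10, 50, 100):
          x = rng.normal(0.0, 1.0, size=(n_samples, dim))     # sampled density: N(0, 1)^dim
          # log importance weights for the target density N(shift, 1)^dim
          logw = np.sum(-(x - shift) ** 2 / 2 + x ** 2 / 2, axis=1)
          w = np.exp(logw - logw.max())
          ess = w.sum() ** 2 / np.sum(w ** 2)                 # Kish effective sample size
          print(f"dim={dim:4d}  effective samples ~ {ess:8.1f} of {n_samples}")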

  19. The influence of an energy groups number and mesh size on results of reactivity coefficients calculations for BN-600 benchmark core. Appendix 1

    International Nuclear Information System (INIS)

    This document presents the OKBM contribution to the analysis of a benchmark of the BN-600 reactor hybrid core with simultaneous loading of uranium fuel and MOX fuel within the framework of the international IAEA Co-ordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. The purpose of the present document is to compare some of the results obtained for Phase 2 using different numbers of energy groups and different mesh point sizes. The CRP participants used different planar mesh point sizes in the benchmark calculations. The axial mesh point size was not stipulated, but mesh sizes were specified for the desired representation of results. Therefore, in some cases it was possible to apply a rather large axial mesh size. The discrepancy in results caused by using different mesh point sizes should be estimated. The results of some participants were obtained using relatively small numbers of energy groups - 6, 9 and 12. The influence of the number of energy groups on the obtained reactivity coefficients is analysed for the OKBM calculation results. In addition, for the sodium density reactivity coefficient, the method used by OKBM to choose an optimal few-group division of the energy scale is shown. The probable additional uncertainties arising from an ill-founded group division are estimated by comparing the group division schemes applied by different CRP participants

  20. A Size Exclusion HPLC Method for Evaluating the Individual Impacts of Sugars and Organic Acids on Beverage Global Taste by Means of Calculated Dose-Over-Threshold Values

    Directory of Open Access Journals (Sweden)

    Luís G. Dias

    2014-09-01

    Full Text Available In this work, the main organic acids (citric, malic and ascorbic acids) and sugars (glucose, fructose and sucrose) present in commercial fruit beverages (fruit carbonated soft-drinks, fruit nectars and fruit juices) were determined. A novel size exclusion high performance liquid chromatography isocratic green method, with ultraviolet and refractive index detectors coupled in series, was developed. This methodology enabled the simultaneous quantification of sugars and organic acids without any sample pre-treatment, even when peak interferences occurred. The method was in-house validated, showing a good linearity (R > 0.999), adequate detection and quantification limits (20 and 280 mg L−1, respectively), satisfactory instrumental and method precisions (relative standard deviations lower than 6%) and acceptable method accuracy (relative error lower than 5%). Sugars and organic acids profiles were used to calculate dose-over-threshold values, aiming to evaluate their individual sensory impact on beverage global taste perception. The results demonstrated that sucrose, fructose, ascorbic acid, citric acid and malic acid have the greatest individual sensory impact on the overall taste of a specific beverage. Furthermore, although organic acids were present in lower concentrations than sugars, their taste influence was significant and, in some cases, higher than the sugars’ contribution towards the global sensory perception.
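
    The dose-over-threshold idea itself is simple to reproduce: each compound's concentration is divided by its taste threshold, and values above 1 point to a perceptible contribution. The sketch below uses placeholder concentrations and thresholds for illustration; they are not the values measured or used in the paper.

      # Sketch of dose-over-threshold (DoT) values: concentration / taste threshold.
      # All numbers below are placeholders, not the paper's data.
      concentrations_g_per_L = {"sucrose": 45.0, "fructose": 20.0, "glucose": 15.0,
                                "citric acid": 1.8, "malic acid": 0.9, "ascorbic acid": 0.3}
      taste_thresholds_g_per_L = {"sucrose": 5.0, "fructose": 2.0, "glucose": 9.0,
                                  "citric acid": 0.08, "malic acid": 0.1, "ascorbic acid": 0.3}

      dot = {c: concentrations_g_per_L[c] / taste_thresholds_g_per_L[c]
             for c in concentrations_g_per_L}
      for compound, value in sorted(dot.items(), key=lambda kv: -kv[1]):
          print(f"{compound:14s} DoT = {value:6.1f}")   # DoT > 1 suggests a perceptible contribution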

  1. Strategies on Sample Size Determination and Qualitative and Quantitative Traits Integration to Construct Core Collection of Rice (Oryza sativa)

    Institute of Scientific and Technical Information of China (English)

    LI Xiao-ling; LU Yong-gen; LI Jin-quan; Xu Hai-ming; Muhammad Qasim SHAHID

    2011-01-01

    The development of a core collection could enhance the utilization of germplasm collections in crop improvement programs and simplify their management. Selection of an appropriate sampling strategy is an important prerequisite to construct a core collection with appropriate size in order to adequately represent the genetic spectrum and maximally capture the genetic diversity in available crop collections. The present study was initiated to construct nested core collections to determine the appropriate sample size to represent the genetic diversity of rice landrace collection based on 15 quantitative traits and 34 qualitative traits of 2 262 rice accessions. The results showed that 50-225 nested core collections, whose sampling rate was 2.2%-9.9%, were sufficient to maintain the maximum genetic diversity of the initial collections. Of these, 150 accessions (6.6%) could capture the maximal genetic diversity of the initial collection. Three data types, i.e. qualitative traits (QT1), quantitative traits (QT2) and integrated qualitative and quantitative traits (QTT), were compared for their efficiency in constructing core collections based on the weighted pair-group average method combined with stepwise clustering and preferred sampling on adjusted Euclidean distances. Every combining scheme constructed eight rice core collections (225, 200, 175, 150, 125, 100, 75 and 50). The results showed that the QTT data was the best in constructing a core collection as indicated by the genetic diversity of core collections. A core collection constructed only on the information of QT1 could not represent the initial collection effectively. QTT should be used together to construct a productive core collection.

  2. OECD EGBUC Benchmark VIII. Comparison of calculation codes and methods for the analysis of small-sample reactivity experiments

    International Nuclear Information System (INIS)

    Small-sample reactivity experiments are relevant to provide accurate information on the integral cross sections of materials. One of the specificities of these experiments is that the measured reactivity worth generally ranges between 1 and 10 pcm, which precludes the use of Monte Carlo for the analysis. As a consequence, several papers have been devoted to deterministic calculation routes, implying spatial and/or energetic discretization which could involve calculation bias. Within the Expert Group on Burn-Up Credit of the OECD/NEA, a benchmark was proposed to compare different calculation codes and methods for the analysis of these experiments. In four Sub-Phases with geometries ranging from a single cell to a full 3D core model, participants were asked to evaluate the reactivity worth due to the addition of small quantities of separated fission products and actinides into a UO2 fuel. Fourteen institutes using six different codes have participated in the Benchmark. For reactivity worth of more than a few tens of pcm, the Monte-Carlo approach based on the eigen-value difference method appears clearly as the reference method. However, in the case of reactivity worth as low as 1 pcm, it is concluded that the deterministic approach based on the exact perturbation formalism is more accurate and should be preferred. Promising results have also been reported using the newly available exact perturbation capability, developed in the Monte Carlo code TRIPOLI4, based on the calculation of a continuous energy adjoint flux in the reference situation, convoluted to the forward flux of the perturbed situation. (author)

  3. Sorption of water vapour by the Na+-exchanged clay-sized fractions of some tropical soil samples

    International Nuclear Information System (INIS)

    Water vapour sorption isotherms at 299K for the Na+-exchanged clay-sized (≤ 2μm e.s.d.) fraction of two sets of samples taken at three different depths from a tropical soil profile have been studied. One set of samples was treated (with H2O2) for the removal of much of the organic matter (OM); the other set (of the same samples) was not so treated. The isotherms obtained were all of type II and analyses by the BET method yielded values for the Specific Surface Areas (SSA) and for the average energy of adsorption of the first layer of adsorbate (Ea). OM content and SSA for the untreated samples were found to decrease with depth. Whereas removal of organic matter made negligible difference to the SSA of the top/surface soil, the same treatment produced a significant increase in the SSA of the samples taken from the middle and from the lower depths in the profile; the resulting increase was more pronounced for the subsoil. It has been deduced from these results that OM in the surface soil was less involved with the inorganic soil colloids than that in the subsoil. The increase in surface area which resulted from the removal of OM from the subsoil was most probably due to disaggregation. Values of Ea obtained show that for all the samples the adsorption of water vapour became more energetic after the oxidative removal of organic matter; the resulting ΔEa also increased with depth. This suggests that in the dry state, the 'cleaned' surface of the inorganic soil colloids was more energetic than the 'organic-matter-coated' surface. These data provide strong support for the deduction that OM in the subsoil was in a more 'combined' state than that in the surface soil. (author). 21 refs, 4 figs, 2 tabs
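
    As a brief illustration of the BET analysis referred to above, the sketch below fits the linearized BET equation to a synthetic type II isotherm over the usual 0.05-0.35 relative-pressure window, converts the monolayer capacity to a specific surface area, and derives a rough first-layer adsorption energy from the BET constant. The synthetic isotherm, the assumed cross-sectional area of an adsorbed water molecule and the heat of liquefaction used for the energy estimate are illustrative assumptions, not the paper's data.

      # Sketch of a BET fit: x/(n(1-x)) = 1/(n_m*C) + (C-1)/(n_m*C) * x, x = P/P0.
      import numpy as np

      N_A = 6.022e23
      A_WATER = 10.6e-20        # m^2 per adsorbed water molecule (assumed)
      R, T = 8.314, 299.0
      E_LIQ = 44.0e3            # J/mol, approximate heat of liquefaction of water (assumed)

      x = np.linspace(0.05, 0.35, 10)                    # relative pressure P/P0
      n_m_true, C_true = 2.0e-3, 20.0                    # mol/g monolayer capacity (synthetic)
      n = n_m_true * C_true * x / ((1 - x) * (1 + (C_true - 1) * x))   # synthetic BET isotherm

      y = x / (n * (1 - x))                              # linearized form
      slope, intercept = np.polyfit(x, y, 1)
      n_m = 1.0 / (slope + intercept)                    # monolayer capacity, mol/g
      C = slope / intercept + 1.0                        # BET constant
      ssa = n_m * N_A * A_WATER                          # specific surface area, m^2/g
      E1 = E_LIQ + R * T * np.log(C)                     # rough first-layer adsorption energy
      print(f"SSA ~ {ssa:.0f} m2/g, C ~ {C:.1f}, E1 ~ {E1/1000:.1f} kJ/mol")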

  4. Effect of A-site size difference on polar behavior in MBiScNbO6 (M = Na, K and Rb): Density functional calculations

    Energy Technology Data Exchange (ETDEWEB)

    Takagi, Shigeyuki M [ORNL; Subedi, Alaska P [ORNL; Cooper, Valentino R [ORNL; Singh, David J [ORNL

    2010-01-01

    We investigate the effect of A-site size differences in the double perovskites BiScO3-MNbO3 (M = Na, K and Rb) using first-principles calculations. We find that the polarization of these materials is 70-90 μC/cm2 along the rhombohedral direction. The main contribution to the high polarization comes from large off-centerings of Bi ions, which are strongly enhanced by the suppression of octahedral tilts as the M ion size increases. A high Born effective charge of Nb also contributes to the polarization, and this contribution is also enhanced by increasing the M ion size.
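
    The size of such a polarization can be checked with a back-of-the-envelope estimate: in the linear approximation, P is roughly (e/Ω) times the sum of Born effective charge times displacement over the displaced ions. The Born effective charges, displacements and cell volume in the sketch below are rough placeholder values chosen only to show the arithmetic, not the paper's computed quantities.

      # Worked order-of-magnitude sketch: P ~ (e/Omega) * sum_i Z*_i * du_i.
      E_CHARGE = 1.602e-19          # C
      OMEGA = 60e-30                # m^3, ~volume of one 5-atom perovskite cell (assumed)

      contributions = {             # (Born effective charge, off-centering in m) - assumed values
          "Bi": (5.0, 0.40e-10),    # large A-site off-centering
          "Nb": (7.0, 0.10e-10),    # B-site contribution
      }

      P = sum(z * du for z, du in contributions.values()) * E_CHARGE / OMEGA   # C/m^2
      print(f"P ~ {P * 100:.0f} uC/cm^2")   # 1 C/m^2 = 100 uC/cm^2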

  5. On the sample size requirement in genetic association tests when the proportion of false positives is controlled.

    Science.gov (United States)

    Zou, Guohua; Zuo, Yijun

    2006-01-01

    With respect to the multiple-tests problem, recently an increasing amount of attention has been paid to control the false discovery rate (FDR), the positive false discovery rate (pFDR), and the proportion of false positives (PFP). The new approaches are generally believed to be more powerful than the classical Bonferroni one. This article focuses on the PFP approach. It demonstrates via examples in genetic association studies that the Bonferroni procedure can be more powerful than the PFP-control one and also shows the intrinsic connection between controlling the PFP and controlling the overall type I error rate. Since controlling the PFP does not necessarily lead to a desired power level, this article addresses the design issue and recommends the sample sizes that can attain the desired power levels when the PFP is controlled. The results in this article also provide rough guidance for the sample sizes to achieve the desired power levels when the FDR and especially the pFDR are controlled. PMID:16204206
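
    For orientation, the sketch below shows the classical Bonferroni baseline against which PFP-controlled designs are compared: the per-test sample size for a two-sided z-test run at significance level alpha/m with a given power. The effect size (in standard deviation units) and the numbers of tests are illustrative assumptions, not the genetic scenarios analysed in the paper.

      # Baseline sketch only: per-group sample size under a Bonferroni-adjusted
      # two-sided z-test, for comparison with PFP-controlled designs.
      from scipy.stats import norm

      def n_per_group(effect_sd, alpha, power, m_tests):
          alpha_adj = alpha / m_tests              # Bonferroni adjustment
          z_a = norm.ppf(1 - alpha_adj / 2)
          z_b = norm.ppf(power)
          return int(round(2 * (z_a + z_b) ** 2 / effect_sd ** 2))

      for m in (1, 100, 10000):
          print(m, "tests ->", n_per_group(effect_sd=0.3, alpha=0.05, power=0.8, m_tests=m), "per group")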

  6. Effects of sample size on the second magnetization peak in Bi2Sr2CaCu2O8+δ at low temperatures

    Indian Academy of Sciences (India)

    B Kalisky; A Shaulov; Y Yeshurun

    2006-01-01

    Effects of sample size on the second magnetization peak (SMP) in Bi2Sr2CaCu2O8+δ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in the order-disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order-disorder transition induction in samples of different size.

  7. Effect of Gap Size on Coating Extrusion of Pb-GF Composite Wire by Theoretical Calculation and Experimental Investigation

    Institute of Scientific and Technical Information of China (English)

    Wenbin FANG; Hongfei SUN; Erde WANG; Yaohong GENG

    2005-01-01

    A new method of using lead-coated glass fiber to produce continuous wire for the battery grids of electric vehicles (EVs) and hybrid electric vehicles (HEVs) was introduced. Under equal flow, both the maximum and minimum theoretical values of the gap size were studied and an estimation equation was established. The experimental results show that the gap size is a key parameter for the continuous coating extrusion process. Its maximum value (Hmax) is 0.24 mm and the minimum one (Hmin) is 0.12 mm. At a gap size of 0.18 mm, the maximum metal extrusion per unit time and the optimal coating speed could be obtained.

  8. On realistic size equivalence and shape of spheroidal Saharan mineral dust particles applied in solar and thermal radiative transfer calculations

    OpenAIRE

    Otto, S.; Trautmann, T.; M. Wendisch

    2011-01-01

    Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface ...

  9. On realistic size equivalence and shape of spheroidal Saharan mineral dust particles applied in solar and thermal radiative transfer calculations

    OpenAIRE

    Otto, S.; Trautmann, T.; M. Wendisch

    2010-01-01

    Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surfa...

  10. How Many Conformations of Enzymes Should Be Sampled for DFT/MM Calculations? A Case Study of Fluoroacetate Dehalogenase.

    Science.gov (United States)

    Li, Yanwei; Zhang, Ruiming; Du, Likai; Zhang, Qingzhu; Wang, Wenxing

    2016-01-01

    The quantum mechanics/molecular mechanics (QM/MM) method (e.g., density functional theory (DFT)/MM) is important in elucidating enzymatic mechanisms. It is indispensable to study "multiple" conformations of enzymes to get unbiased energetic and structural results. One challenging problem, however, is to determine the minimum number of conformations for DFT/MM calculations. Here, we propose two convergence criteria, namely the Boltzmann-weighted average barrier and the disproportionate effect, to tentatively address this issue. The criteria were tested by defluorination reaction catalyzed by fluoroacetate dehalogenase. The results suggest that at least 20 conformations of enzymatic residues are required for convergence using DFT/MM calculations. We also tested the correlation of energy barriers between small QM regions and big QM regions. A roughly positive correlation was found. This kind of correlation has not been reported in the literature. The correlation inspires us to propose a protocol for more efficient sampling. This saves 50% of the computational cost in our current case. PMID:27556449
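
    A minimal sketch of the Boltzmann-weighted average barrier as a running convergence check is given below: the barriers from the first k sampled conformations are combined as -RT ln of the mean of exp(-ΔE/RT), and the value is watched until it levels off. The synthetic list of barriers, the temperature and this particular form of the average are assumptions for illustration; the paper's exact criteria may differ in detail.

      # Sketch of a running Boltzmann-weighted average barrier over conformations.
      import numpy as np

      R_KCAL = 1.987e-3          # kcal/(mol*K)
      T = 300.0
      rng = np.random.default_rng(4)
      barriers = rng.normal(15.0, 2.0, size=40)      # kcal/mol, one per conformation (synthetic)

      for k in (5, 10, 20, 30, 40):
          subset = barriers[:k]
          # Low barriers dominate this exponential (Boltzmann-weighted) average.
          boltz_avg = -R_KCAL * T * np.log(np.mean(np.exp(-subset / (R_KCAL * T))))
          print(f"first {k:2d} conformations: weighted barrier = {boltz_avg:5.2f} kcal/mol")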

  11. How Many Conformations of Enzymes Should Be Sampled for DFT/MM Calculations? A Case Study of Fluoroacetate Dehalogenase

    Directory of Open Access Journals (Sweden)

    Yanwei Li

    2016-08-01

    Full Text Available The quantum mechanics/molecular mechanics (QM/MM) method (e.g., density functional theory (DFT)/MM) is important in elucidating enzymatic mechanisms. It is indispensable to study “multiple” conformations of enzymes to get unbiased energetic and structural results. One challenging problem, however, is to determine the minimum number of conformations for DFT/MM calculations. Here, we propose two convergence criteria, namely the Boltzmann-weighted average barrier and the disproportionate effect, to tentatively address this issue. The criteria were tested by defluorination reaction catalyzed by fluoroacetate dehalogenase. The results suggest that at least 20 conformations of enzymatic residues are required for convergence using DFT/MM calculations. We also tested the correlation of energy barriers between small QM regions and big QM regions. A roughly positive correlation was found. This kind of correlation has not been reported in the literature. The correlation inspires us to propose a protocol for more efficient sampling. This saves 50% of the computational cost in our current case.

  12. Pretreatment of Soil Samples Rich in Short-Range-Order Minerals Before Particle-Size Analysis by the Pipette Method

    Institute of Scientific and Technical Information of China (English)

    K.ALARY; D.BABRE; L.CANER; F.FEDER; M.SZWARC; M.NAUDAN; G.BOURGEON

    2013-01-01

    The possibilities of combining the dissolution of short-range-order minerals (SROMs) like allophane and imogolite by ammonium oxalate with a particle size distribution analysis performed by the pipette method were investigated by tests on a soil sample from Reunion, a volcanic island located in the Indian Ocean, having a large SROMs content. The need to work with moist soil samples was again emphasized because the microaggregates formed during air-drying are resistant to the reagent. The SROM content increased, but irregularly, with the number of dissolutions by ammonium oxalate: 334 and 470 mg g-1 of SROMs were dissolved after one and three dissolutions, respectively. Six successive dissolutions with ammonium oxalate on the same soil sample showed that 89% of the sum of oxides extracted by the 6 dissolutions was extracted by the first dissolution (mean 304 mg g-1). A compromise needs to be found between the total removal of SROMs by large quantities of ammonium oxalate and the preservation of clay minerals, which were unexpectedly dissolved by this reagent. These tests enabled a description of the clay assemblage of the soil (gibbsite, smectite, and traces of kaolinite) in an area where such information was lacking due to the difficulties encountered in recovering the clay fraction.

  13. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
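
    The kind of representation discussed above can be written down directly: the sample variance equals the sum of squared pairwise differences divided by n(n-1), which is easy to evaluate mentally for n = 3 or 4 with integer data, and the range gives a quick upper bound on the standard deviation. The sketch below checks both numerically; the specific bound shown, s ≤ (R/2)·sqrt(n/(n-1)), is a standard one and may not be the exact form presented in the article.

      # Sample variance via pairwise differences: s^2 = sum_{i<j}(x_i - x_j)^2 / (n*(n-1)).
      from itertools import combinations
      import math

      def variance_pairwise(xs):
          n = len(xs)
          return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

      xs = [4, 7, 9]                                   # small integer sample
      s2 = variance_pairwise(xs)
      # Simple range-based upper bound on the standard deviation (assumed form).
      s_bound = (max(xs) - min(xs)) / 2 * math.sqrt(len(xs) / (len(xs) - 1))
      print(s2, math.sqrt(s2), "<=", s_bound)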

  14. Characterization of urban aerosols and their hazard assessment, by size sampling combined with inter-element ratios

    International Nuclear Information System (INIS)

    Multielement composition of aerosols and their particle-size distributions can be used to deduce some of the more prominent likely sources of inorganic components of air pollution. This is possible because it can be inferred that, at the two ends of the size scale, elements associated with small particles (<1 μm) are likely to arise mainly from anthropogenic, high-temperature sources, whereas elements associated with large particles (>10 μm) are likely to arise from wind action on soils, deposited dusts and fugitive emissions from dust-producing operations. Aerosol samples collected at various locations near industrialized urban areas of high population and traffic density were analysed in various particle-size fractions for Al, Sb, As, Br, Ca, Cl, Cr, Co, Cu, I, Fe, La, Pb, Mg, Mn, Hg, Ni, K, Sm, Sc, Na, Ti, V, Zn and Zr using instrumental neutron and photon activation techniques as well as atomic absorption. Air sampling was done with an integrating high-volume sampler and with a five stage Andersen (Hi-Vol) sampler. Many elements associated with larger aerosols (viz. Al, Ca, Fe, La, Mg, Sm, Sc, Na, and Ti) appeared to be mainly soil-derived, whereas some elements associated with the smaller size fractions (e.g. Sb, As, Br, Cl, Pb, V and Zn) were abnormally enriched in the atmosphere compared to typical terrestrial abundances and appeared to have significant anthropic sources: industrial, municipal and domestic. These conclusions were supported by examination of the size dependence of some inter-element ratios such as Sc/Al, V/Al, and Mn/Al. In the case of several elements, competing sources could also be distinguished quantitatively by inter-element ratios. In order to assess the relative hazard that such airborne components could pose to human populations, analyses have been made of local dustfall and soil contamination, and also of the concentrations of many of the more significant elements in scalp hair. Hair concentrations of a total of 243 rural, urban and exposed persons are compared and these indicate increases in some toxic metals considerably higher for exposed groups than the upper limit of

  15. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    Science.gov (United States)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.

  16. Estimation of proportions of objects and determination of training sample-size in a remote sensing application

    Science.gov (United States)

    Chhikara, R. S.; Odell, P. L.

    1973-01-01

    A multichannel scanning device may fail to observe objects because of obstructions blocking the view, or different categories of objects may make up a resolution element giving rise to a single observation. Ground truth will be required on any such categories of objects in order to estimate their expected proportions associated with various classes represented in the remote sensing data. Considering the classes to be distributed as multivariate normal with different mean vectors and common covariance, maximum likelihood estimates are given for the expected proportions of objects associated with different classes, using the Bayes procedure for classification of individuals obtained from these classes. An approximate solution for simultaneous confidence intervals on these proportions is given, and thereby a sample-size needed to achieve a desired amount of accuracy for the estimates is determined.

  17. Pulse Stripping Analysis: A Technique for Determination of Some Metals in Aerosols and Other Limited Size Samples

    Science.gov (United States)

    Parry, Edward P.; Hern, Don H.

    1971-01-01

    A technique for determining lead with a detection limit down to a nanogram on limited size samples is described. The technique is an electrochemical one and involves pre-concentration of the metal species in a mercury drop. Although the emphasis in this paper is on the determination of lead, many metal ion species which are reducible to the metal at an electrode are equally determinable. A technique called pulse polarography is proposed to determine the metals in the drop and this technique is discussed and is compared with other techniques. Other approaches for determination of lead are also compared. Some data are also reported for the lead content of Ventura County particulates. The characterization of lead species by solubility parameters is discussed.

  18. Experimental design and sample size determination for testing synergism in drug combination studies based on uniform measures.

    Science.gov (United States)

    Tan, Ming; Fang, Hong-Bin; Tian, Guo-Liang; Houghton, Peter J

    2003-07-15

    In anticancer drug development, the combined use of two drugs is an important strategy to achieve greater therapeutic success. Often combination studies are performed in animal (mostly mice) models before clinical trials are conducted. These experiments on mice are costly, especially with combination studies. However, experimental designs and sample size derivations for the joint action of drugs are not currently available except for a few cases where strong model assumptions are made. For example, Abdelbasit and Plackett proposed an optimal design assuming that the dose-response relationship follows some specified linear models. Tallarida et al. derived a design by fixing the mixture ratio and used a t-test to detect the simple similar action. The issue is that, in reality, we usually do not have enough information on the joint action of the two compounds before the experiment, and understanding their joint action is precisely the goal of the study. In this paper, we first propose a novel non-parametric model that does not impose such strong assumptions on the joint action. We then propose an experimental design for the joint action using uniform measure in this non-parametric model. This design is optimal in the sense that it reduces the variability in modelling synergy while allocating the doses to minimize the number of experimental units and to extract maximum information on the joint action of the compounds. Based on this design, we propose a robust F-test to detect departures from the simple similar action of two compounds and a method to determine sample sizes that are economically feasible. We illustrate the method with a study of the joint action of two new anticancer agents: temozolomide and irinotecan. PMID:12820275

  19. Delineamento experimental e tamanho de amostra para alface cultivada em hidroponia Experimental design and sample size for hydroponic lettuce crop

    Directory of Open Access Journals (Sweden)

    Valéria Schimitz Marodim

    2000-10-01

    Full Text Available This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa) grown under the nutrient film technique (NFT). The experiment was conducted in the soilless cultivation/hydroponics laboratory of the Departamento de Fitotecnia of the Federal University of Santa Maria and was based on plant weight data. The results showed that, for lettuce grown hydroponically on fibre-cement benches with six channels, the appropriate experimental design is randomized blocks when the experimental unit is a strip transverse to the bench channels, and completely randomized when the whole bench is the experimental unit. For plant weight, the sample size should be 40 plants for a half-width of the confidence interval, expressed as a percentage of the mean (d), equal to 5%, and 7 plants for d equal to 20%.
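
    Numbers of this kind follow from the usual rule for estimating a mean to within a half-width d (in percent of the mean): approximately n = (t·CV/d)², where CV is the coefficient of variation in percent. The sketch below uses an assumed CV as a placeholder; the paper derives its sample sizes from the observed plant-weight data, so the exact values differ.

      # Generic sketch of the half-width rule n ~ (t * CV / d)^2 (assumed CV).
      import math

      def n_for_halfwidth(cv_percent, d_percent, t=2.0):
          return math.ceil((t * cv_percent / d_percent) ** 2)

      for d in (5, 20):
          print(f"d = {d:2d}% of the mean -> n ~ {n_for_halfwidth(16.0, d)} plants")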

  20. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    DEFF Research Database (Denmark)

    Haugbøl, Steven; Pinborg, Lars H; Arfan, Haroon M; Frøkjaer, Vibe M; Madsen, Jacob; Dyrby, Tim; Svarer, Claus; Knudsen, Gitte M

    2006-01-01

    PURPOSE: To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects...... reproducibility and the required sample size. METHODS: For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects......% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high...
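
    A generic version of this sample-size reasoning is easy to sketch: given a between-subject variability (as a percentage) and a difference to detect (also as a percentage), the usual two-group formula n = 2(z_a + z_b)²(CV/Δ)² per group applies. The variability values below are placeholders, not the regional [18F]altanserin estimates from the study; partial volume correction would enter simply by lowering the assumed variability.

      # Generic two-group sample size to detect a 20% difference at alpha = 0.05, power = 0.8.
      from scipy.stats import norm
      import math

      def n_per_group(cv_percent, delta_percent, alpha=0.05, power=0.8):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * (z * cv_percent / delta_percent) ** 2)

      for cv in (10, 17, 25):                    # assumed between-subject variability (%)
          print(f"CV = {cv:2d}% -> n = {n_per_group(cv, 20)} per group to detect a 20% difference")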