Noordzij, Marlies; Dekker, Friedo W.; Zoccali, Carmine; Jager, Kitty J.
The sample size is the number of patients or other experimental units that need to be included in a study to answer the research question. Pre-study calculation of the sample size is important; if a sample size is too small, one will not be able to detect an effect, while a sample that is too large may be a waste of time and money. Methods to calculate the sample size are explained in statistical textbooks, but because there are many different formulas available, it can be difficult for inves...
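As a concrete illustration of the calculation this abstract refers to, here is a minimal sketch of the textbook formula for comparing two means, n = 2(z₁₋α/₂ + z₁₋β)²σ²/δ² per group; the function name and default values are illustrative, not taken from the cited paper:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means.

    delta: smallest difference in means worth detecting
    sd:    assumed common standard deviation of the outcome
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# To detect a 5-unit difference when the SD is 10:
print(n_per_group(delta=5, sd=10))  # 63 per group
```

Note that halving the detectable difference quadruples the required sample size, which is why an overly optimistic effect-size assumption is a common cause of the underpowered studies these abstracts warn about.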
Pourhoseingholi, Mohamad Amin; Vahedi, Mohsen; Rahimzadeh, Mitra
Optimum sample size is an essential component of any research. The main purpose of the sample size calculation is to determine the number of samples needed to detect significant changes in clinical parameters, treatment effects or associations after data gathering. It is not uncommon for studies to be underpowered and thereby fail to detect the existing treatment effects due to inadequate sample size. In this paper, we explain briefly the basic principles of sample size calculations in medica...
Kim, Jeehyoung; Seo, Bong Soo
Why: Calculating the sample size is essential to reduce the cost of a study and to prove the hypothesis effectively. How: Referring to pilot studies and previous research, we can choose a proper hypothesis and simplify the study by using a website or a Microsoft Excel sheet that contains formulas for calculating sample size at the beginning stage of the study. More: There are numerous formulas for calculating the sample size for complicated statistics and studies, but most studies can us...
Whitley, Elise; Ball, Jonathan
The present review introduces the notion of statistical power and the hazard of under-powered studies. The problem of how to calculate an ideal sample size is also discussed within the context of factors that affect power, and specific methods for the calculation of sample size are presented for two common scenarios, along with extensions to the simplest case.
The problem is to determine inspection sample sizes for a given stratum. The sample sizes are based on applying the verification data in an attributes mode such that detection consists of identifying one or more defects in the sample. The sample sizes are such that the probability of detection is no less than the design value, 1-β, for all of the values of the defect size when, in fact, it is possible to achieve this detection probability without an unreasonable number of verification samples. A computing algorithm is developed to address the problem. Up to three measurement methods, or measuring instruments, are accommodated by the algorithm. The algorithm is optimal in the sense that an initial set of sample sizes is found to ensure a detection probability of 1-β at those defect sizes that result in the smallest numbers of samples for the more precise measurement methods. The detection probability is then calculated for a range of defect sizes covering the entire range of possibilities, and an iterative procedure is applied until the detection probability is no less than 1-β (if possible) at its maximum value. The algorithm, while not difficult in concept, realistically requires a personal computer (PC) to implement. For those instances when a PC may not be available, approximation formulas are developed which permit sample size calculations using only a pocket calculator.
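As a much simplified sketch of the attributes logic described above, where detection means finding at least one defect, one can solve (1−p)ⁿ ≤ β for n under an independence (binomial) assumption; this ignores the finite-stratum corrections and the multiple measurement methods the actual algorithm handles:

```python
import math

def attributes_sample_size(defect_fraction, beta=0.05):
    """Smallest n such that P(at least one defect in the sample) >= 1 - beta,
    assuming each sampled item is defective independently with the given fraction."""
    # P(no defects in n items) = (1 - defect_fraction)**n; require this <= beta.
    return math.ceil(math.log(beta) / math.log(1 - defect_fraction))

# 10% of items defective/falsified, 95% desired detection probability:
print(attributes_sample_size(0.10))  # 29
```

This pocket-calculator version corresponds in spirit to the approximation formulas mentioned at the end of the abstract, not to the full iterative algorithm.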
When designing clinical trials, efficiency, ethics, cost-effectiveness, research duration and sample size calculation are the key things to remember. This review highlights the statistical issues involved in estimating the sample size requirement. It elaborates the theory, methods and steps for sample size calculation in randomized controlled trials. It also emphasizes that researchers should consider the study design first and then choose an appropriate sample size calculation method.
Charles, Pierre; Giraudeau, Bruno; Dechartres, Agnes; Baron, Gabriel; Ravaud, Philippe
Objectives: To assess quality of reporting of sample size calculation, ascertain accuracy of calculations, and determine the relevance of assumptions made when calculating sample size in randomised controlled trials. Design: Review. Data sources: We searched MEDLINE for all primary reports of two arm parallel group randomised controlled trials of superiority with a single primary outcome published in six high impact factor general medical journals between 1 January 2005 and 31 December 2006. All...
Jaykaran Charan; N D Kantharia
Calculation of sample size is one of the important components of the design of any research, including animal studies. If a researcher selects too few animals, a significant difference may be missed even if it exists in the population; if too many animals are selected, resources are wasted unnecessarily and ethical issues may arise. In this article, on the basis of our review of the literature, we suggest a few methods of sample size calculation for animal s...
Carley, S; Dosman, S; Jones, S; Harrison, M
Objectives: To produce an easily understood and accessible tool for use by researchers in diagnostic studies. Diagnostic studies should have sample size calculations performed, but in practice, they are performed infrequently. This may be due to a reluctance on the part of researchers to use mathematical formulae.
Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.
Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical soft...
The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
Zhang, Song; Ahn, Chul
Sample size calculations based on two-sample comparisons of slopes in repeated measurements have been reported by many investigators. In contrast, the literature has paid relatively little attention to the sample size calculations for time-averaged differences in the presence of missing data in repeated measurements studies. Diggle et al. (2002) provided a sample size formula comparing time-averaged differences for continuous outcomes in repeated measurement studies assuming no missing data a...
Jensen, Jens-Ulrik; Lundgren, Bettina; Hein, Lars;
…and signs may present atypically. The established biological markers of inflammation (leucocytes, C-reactive protein) may often be influenced by parameters other than infection, and may be released unacceptably slowly after progression of an infection. At the same time, lack of a relevant … hypertriglyceridaemia, 2) likely that safety is compromised by blood sampling, 3) pregnant or breast feeding. Computerized randomisation: two arms (1:1), n = 500 per arm. Arm 1: standard of care. Arm 2: standard of care plus procalcitonin-guided diagnostics and treatment of infection. Primary trial objective: To address … -guided strategy compared to the best standard of care, is conducted in an intensive care setting. Results will, with high statistical power, answer the question: can the survival of critically ill patients be improved by actively using the biomarker procalcitonin in the treatment of infections? 700 critically ill…
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Calculating Sample Size for NYTD Follow-Up... REQUIREMENTS APPLICABLE TO TITLE IV-E Pt. 1356, App. C Appendix C to Part 1356—Calculating Sample Size for NYTD... applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...
Ahn, Chul; Hu, Fan; Schucany, William R.
We propose a sample size calculation approach for testing a proportion using the weighted sign test when binary observations are dependent within a cluster. Sample size formulas are derived with nonparametric methods using three weighting schemes: equal weights to observations, equal weights to clusters, and optimal weights that minimize the variance of the estimator. Sample size formulas are derived incorporating intracluster correlation and the variability in cluster sizes. Simulation studi...
Weeks, Scott; Atlas, Alvin
A priori sample size calculations are used to determine the adequate sample size to estimate the prevalence of the target population with good precision. However, published audits rarely report a priori calculations for their sample size. This article discusses a process in health services delivery mapping to generate a comprehensive sampling frame, which was used to calculate an a priori sample size for a targeted clinical record audit. We describe how we approached methodological and definitional issues in the following steps: (1) target population definition, (2) sampling frame construction, and (3) a priori sample size calculation. We recommend this process for clinicians, researchers, or policy makers when detailed information on a reference population is unavailable. PMID:26122044
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. PMID:25019136
Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary
Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size. PMID:23934314
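The scalings stated at the end of this abstract can be collected into a rough calculator. This is only a sketch built from those proportionalities; the parameter names and defaults are assumptions, not the paper's exact formulas:

```python
import math
from statistics import NormalDist

def mr_sample_size(effect, r2_gx, resid_var=1.0, alpha=0.05, power=0.80):
    """Approximate n for a one-instrument Mendelian randomization study.

    effect:    causal effect size to detect
    r2_gx:     squared correlation between the genetic instrument and the exposure
    resid_var: residual variance of the outcome after removing the exposure effect
    """
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # n is proportional to resid_var, inversely proportional to r2_gx * effect**2
    return math.ceil(z ** 2 * resid_var / (r2_gx * effect ** 2))

# A weak instrument (r-squared = 0.02) and a small effect demand a very large n:
print(mr_sample_size(effect=0.1, r2_gx=0.02))
```

The quadratic dependence on both instrument strength and effect size is what makes weakly instrumented Mendelian randomization studies so demanding.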
Fu, Yingkun; Xie, Yanming
In recent years, as the Chinese government and people pay more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have a desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation for traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, the sample size calculation methods for Chinese medicine efficacy and safety are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research. PMID:22292397
Krishnamoorthy, K.; Xia, Yanping
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Tavernier, Elsa; Trinquart, Ludovic; Giraudeau, Bruno
Sample sizes for randomized controlled trials are typically based on power calculations. They require us to specify values for parameters such as the treatment effect, which is often difficult because we lack sufficient prior information. The objective of this paper is to provide an alternative design which circumvents the need for sample size calculation. In a simulation study, we compared a meta-experiment approach to the classical approach to assess treatment efficacy. The meta-experiment approach involves use of meta-analyzed results from 3 randomized trials of fixed sample size, 100 subjects. The classical approach involves a single randomized trial with the sample size calculated on the basis of an a priori-formulated hypothesis. For the sample size calculation in the classical approach, we used observed articles to characterize errors made on the formulated hypothesis. A prospective meta-analysis of data from trials of fixed sample size provided the same precision, power and type I error rate, on average, as the classical approach. The meta-experiment approach may provide an alternative design which does not require a sample size calculation and addresses the essential need for study replication; results may have greater external validity. PMID:27362939
Background: One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess the treatment effects in microarray experiments. Results: In this paper, we consider discovery of the genes that are differentially expressed among K (> 2) treatments when each set of K arrays constitutes a block. In this case, the array data among K treatments tend to be correlated because of the block effect. We propose to use the blocked one-way ANOVA F-statistic to test if each gene is differentially expressed among K treatments. The marginal p-values are calculated using a permutation method accounting for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design. With the FDR level and effect sizes of genes specified, our formula provides a sample size for a given number of true discoveries. Conclusion: The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.
Comulada, W. Scott; Weiss, Robert E.
The analysis of a baseline predictor with a longitudinally measured outcome is well established and sample size calculations are reasonably well understood. Analysis of bivariate longitudinally measured outcomes is gaining in popularity and methods to address design issues are required. The focus in a random effects model for bivariate longitudinal outcomes is on the correlations that arise between the random effects and between the bivariate residuals. In the bivariate random effects model, ...
Candel, Math J J M; van Breukelen, Gerard J P
When comparing two different kinds of group therapy or two individual treatments where patients within each arm are nested within care providers, clustering of observations may occur in both arms. The arms may differ in terms of (a) the intraclass correlation, (b) the outcome variance, (c) the cluster size, and (d) the number of clusters, and there may be some ideal group size or ideal caseload in the case of care providers, fixing the cluster size. For this case, optimal cluster numbers are derived for a linear mixed model analysis of the treatment effect under cost constraints as well as under power constraints. To account for uncertain prior knowledge of relevant model parameters, maximin sample sizes are also given. Formulas for sample size calculation are derived, based on the standard normal as the asymptotic distribution of the test statistic. For small sample sizes, an extensive numerical evaluation shows that in a two-tailed test employing restricted maximum likelihood estimation, a safe correction for both 80% and 90% power is to add three clusters to each arm for a 5% type I error rate and four clusters to each arm for a 1% type I error rate. PMID:25519890
Luh, Wei-Ming; Guo, Jiin-Huarng
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A
The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26707831
The relaxation of restrictions on the type of professions that can report films has resulted in radiographers and other healthcare professionals becoming increasingly involved in image interpretation in areas such as mammography, ultrasound and plain-film radiography. Little attention, however, has been given to sample size determinations concerning film-reading performance characteristics such as sensitivity, specificity and accuracy. Illustrated with hypothetical examples, this paper begins by considering standard errors and confidence intervals for performance characteristics and then discusses methods for determining sample size for studies of film-reading performance. Used appropriately, these approaches should result in studies that produce estimates of film-reading performance with adequate precision and enable investigators to optimize the sample size in their studies for the question they seek to answer. Scally, A. J. and Brealey S. (2003). Clinical Radiology 58, 238-246
Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.;
OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source Protocols and journal publications of published randomised parallel group trials initially approved in 1...
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes - the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find...
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
Background: Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rw...
Nicholas G Reich
In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase the efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting and implementation have not been established for this emergent design. We address one particular aspect of cluster-randomized and cluster-randomized crossover trial design: estimating statistical power. We present a general framework for estimating power via simulation in cluster-randomized studies with or without one or more crossover periods. We have implemented this framework in the clusterPower software package for R, freely available online from the Comprehensive R Archive Network. Our simulation framework is easy to implement and users may customize the methods used for data analysis. We give four examples of using the software in practice. The clusterPower package could play an important role in the design of future cluster-randomized and cluster-randomized crossover studies. This work is the first to establish a universal method for calculating power for both cluster-randomized and cluster-randomized crossover trials. More research is needed to develop standardized and recommended methodology for cluster-randomized crossover studies.
Finch Stephen J
Background: Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results: Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion: Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
Desu, M M
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Dong, Nianbo; Maynard, Rebecca
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Naing, Nyi Nyi
It is particularly important to determine the basic minimum required sample size 'n' needed to estimate a particular measurement in a particular population. This article highlights the determination of an appropriate sample size to estimate population parameters.
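For the parameter-estimation setting this abstract describes, the usual normal-approximation formula for a single proportion, n = z²p(1−p)/d², can be sketched as follows (names and defaults are illustrative):

```python
import math
from statistics import NormalDist

def n_for_proportion(p, d, alpha=0.05):
    """Minimum n to estimate a prevalence p to within +/- d absolute precision."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Worst case p = 0.5, +/- 5% precision, 95% confidence:
print(n_for_proportion(0.5, 0.05))  # 385, the familiar textbook figure
```

When no prior estimate of p exists, p = 0.5 maximizes p(1−p) and therefore gives the most conservative sample size.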
Background: Many patients with diabetes mellitus (DM) require a combination of antidiabetic drugs with complementary mechanisms of action to lower their hemoglobin A1c levels to achieve therapeutic targets and reduce the risk of cardiovascular complications. Linagliptin is a novel member of the dipeptidyl peptidase-4 (DPP-4) inhibitor class of antidiabetic drugs. DPP-4 inhibitors increase incretin (glucagon-like peptide-1 and gastric inhibitory polypeptide) levels, inhibit glucagon release and, more importantly, increase insulin secretion and inhibit gastric emptying. Currently, phase III clinical studies with linagliptin are underway to evaluate its clinical efficacy and safety. Linagliptin is expected to be one of the most appropriate therapies for Japanese patients with DM, as deficient insulin secretion is a greater concern than insulin resistance in this population. The number of patients with DM in Japan is increasing and this trend is predicted to continue. Several antidiabetic drugs are currently marketed in Japan; however, there is no information describing the effective dose of linagliptin for Japanese patients with DM. Methods: This prospective, randomized, double-blind study will compare linagliptin with placebo over a 12-week period. The study has also been designed to evaluate the safety and efficacy of linagliptin by comparing it with another antidiabetic, voglibose, over a 26-week treatment period. Four treatment groups have been established for these comparisons. A phase IIb/III combined study design has been utilized for this purpose and the approach for calculating sample size is described. Discussion: This is the first phase IIb/III study to examine the long-term safety and efficacy of linagliptin in diabetes patients in the Japanese population. Trial registration: Clinicaltrials.gov (NCT00654381).
The purpose of the project is to find the optimal value for the Economic Order Quantity Model and then use a lean manufacturing Kanban equation to find a numeric value that will minimize the total cost and the inventory size.
Rakesh R. Pathak
Sample size formulae need some input data; in other words, we need some parameters in order to calculate sample size. This second part of the formula explanation gives an idea of Z, population size, precision of error, standard deviation, contingency, etc., which influence sample size. [Int J Basic Clin Pharmacol 2013; 2(1): 94-95]
Wang, Duolao; Bakhai, Ameet; Del Buono, Angelo; Maffulli, Nicola
Calculating the sample size is a most important determinant of statistical power of a study. A study with inadequate power, unless being conducted as a safety and feasibility study, is unethical. However, sample size calculation is not an exact science, and therefore it is important to make realistic and well researched assumptions before choosing an appropriate sample size accounting for dropouts and also including a plan for interim analyses during the study to amend the final sample size.
This review provides a conceptual framework for sample size calculation in epidemiologic studies with various designs and outcomes. The formulae required for sample size calculation are derived from statistical principles for both descriptive and comparative studies. The required sample size is estimated and presented graphically for different effect sizes and powers of the statistical test at the 95% confidence level. This should help clinicians decide on and ascertain a suitable sample size in...
Hinh, Peter; Canfield, Steven E
Understanding sample size calculation is vitally important for planning and conducting clinical research, and critically appraising literature. The purpose of this paper is to present basic statistical concepts and tenets of study design pertaining to calculation of requisite sample size. This paper also discusses the significance of sample size calculation in the context of ethical considerations. Scenarios applicable to urology are utilized in presenting concepts.
Vithal K Dhulkhed
For grant proposals, the investigator has to include an estimation of sample size. The sample should be large enough that there is sufficient data to reliably answer the research question being addressed by the study. At the very planning stage of the study, the investigator has to involve the statistician, and to have a meaningful dialogue with the statistician every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations, to help the reader assimilate the fundamentals.
Body frame size is determined by a person's wrist circumference in relation to his height. For example, a man ... would fall into the small-boned category. Determining frame size: To determine the body frame size, measure ...
Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, the authors have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability, without using approximations. Using realistic assumptions for the uncertainty parameters of measurement, the simulation results support two conclusions: (1) the previously used conservative approximations can be expensive because they lead to larger sample sizes than needed, and (2) the optimal verification strategy, as well as the falsification strategy, is highly dependent on the underlying uncertainty parameters of the measurement instruments.
Zhou, Mingyuan; Walker, Stephen G.
Motivated by the fundamental problem of measuring species diversity, this paper introduces the concept of a cluster structure to define an exchangeable cluster probability function that governs the joint distribution of a random count and its exchangeable random partitions. A cluster structure, naturally arising from a completely random measure mixed Poisson process, allows the probability distribution of the random partitions of a subset of a sample to be dependent on the sample size, a dist...
evidence of widespread resistance to the utilisation of larval therapy from patients regardless of the method of larval therapy containment. These methods have the potential to inform sample size calculations where there are concerns of patient acceptability.
Carlos Fabián Flórez Valero
Surveying a percentage of a city's households is a common practice in transport engineering for establishing the inhabitants' journey patterns. The procedure theoretically consists of calculating the sample based on the statistical parameters of the population variable one wishes to measure. This requires carrying out a pilot survey, which often cannot be done in countries having few resources because of the costs involved in estimating those population parameters; resources are sometimes exclusively destined to making an estimated sample according to a pre-established percentage. Percentages between 3% and 6% are usually used in Colombian cities, depending on population size. The city of Manizales (located 300 km to the west of Colombia's capital) carried out two household surveys in less than four years; when the second survey was carried out, the values of the estimator parameters were thus already known. The Manizales mayor's office made an agreement with the Universidad Nacional de Colombia for drawing up the new origin-destination matrix, making it possible to calculate the sample based on the pertinent statistical variables. The article makes a comparative analysis of both methodologies, concluding that when the sample is estimated statistically, the number of surveys to be carried out can be greatly reduced while obtaining practically equal results.
Darby, John L.
We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) under-estimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
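The zero-defect case of sampling without replacement can be sketched with the hypergeometric distribution, as follows. This is a stdlib-only illustration of the general idea; the function name and the linear search are my own, not SNL's Mathematica program:

```python
from math import comb

def reliability_lcb(pop_size, sample_size, confidence=0.95):
    """Lower confidence bound on reliability after observing zero defects
    in a sample drawn without replacement (hypergeometric model).
    Finds the largest number of population defects d still consistent,
    at the given confidence, with seeing a defect-free sample."""
    alpha = 1 - confidence
    d = 0
    # P(zero defects in sample | d defects among pop_size items)
    #   = C(pop_size - d, sample_size) / C(pop_size, sample_size)
    while (d <= pop_size - sample_size and
           comb(pop_size - d, sample_size) / comb(pop_size, sample_size) >= alpha):
        d += 1
    worst_defects = d - 1
    return (pop_size - worst_defects) / pop_size
```

For example, a defect-free sample of 50 from a population of 100 supports a 95% lower confidence bound on reliability of 0.96; sampling the entire population yields a bound of 1.0, illustrating why the finite-population treatment matters for small stockpiles.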
The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatments) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
Two different Monte Carlo methods have been developed for benchmark computations of small-sample worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulette and splitting. One finds, however, that two variance reduction methods are required to make this sort of perturbation calculation feasible. First, neutrons that have passed through the sample must be exempted from roulette. Second, neutrons must be forced to undergo scattering collisions in the sample. Even when such methods are invoked, however, it is still necessary to exaggerate the volume fraction of the sample by drastically reducing the size of the core. The benchmark calculations are then used to test more approximate methods, and not directly to analyze experiments. In the second method the flux at the surface of the sample is assumed to be known. Neutrons entering the sample are drawn from this known flux and tracked by Monte Carlo. The effect of the sample on the fission rate is then inferred from the histories of these neutrons. The characteristics of both of these methods are explored empirically.
Vokoun, J.C.; Rabeni, C.F.; Stanovick, J.S.
A method with an accompanying computer program is described to estimate the number of individuals needed to construct a sample length-frequency with a given accuracy and precision. First, a reference length-frequency assumed to be accurate for a particular sampling gear and collection strategy was constructed. Bootstrap procedures created length-frequencies with increasing sample size that were randomly chosen from the reference data and then were compared with the reference length-frequency by calculating the mean squared difference. Outputs from two species collected with different gears and an artificial even length-frequency are used to describe the characteristics of the method. The relations between the number of individuals used to construct a length-frequency and the similarity to the reference length-frequency followed a negative exponential distribution and showed the importance of using 300-400 individuals whenever possible.
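The bootstrap procedure described above can be sketched roughly as follows. All names, the bin width, and the exact mean-squared-difference definition are my own assumptions, not the authors' program:

```python
import random

def to_freq(lengths, bin_width=10):
    """Normalized length-frequency: bin index -> proportion of fish."""
    freq = {}
    for x in lengths:
        b = int(x // bin_width)
        freq[b] = freq.get(b, 0) + 1
    n = len(lengths)
    return {b: c / n for b, c in freq.items()}

def msd(ref, samp):
    """Mean squared difference between two length-frequencies."""
    bins = set(ref) | set(samp)
    return sum((ref.get(b, 0.0) - samp.get(b, 0.0)) ** 2 for b in bins) / len(bins)

def bootstrap_curve(ref_lengths, sizes, reps=200, seed=1):
    """For each candidate sample size, resample from the reference data
    and average the MSD to the reference length-frequency."""
    rng = random.Random(seed)
    ref = to_freq(ref_lengths)
    return {k: sum(msd(ref, to_freq(rng.choices(ref_lengths, k=k)))
                   for _ in range(reps)) / reps
            for k in sizes}
```

Plotting the resulting curve of MSD against sample size reproduces the negative-exponential shape the authors describe: the payoff from additional fish flattens out around a few hundred individuals.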
Schönbrodt, Felix D.; Perugini, Marco
Sample correlations converge to the population value with increasing sample size, but the estimates are often inaccurate in small samples. In this report we use Monte-Carlo simulations to determine the critical sample size at which the magnitude of a correlation can be expected to be stable. The sample size necessary to achieve stable estimates for correlations depends on the effect size, the width of the corridor of stability (i.e., a corridor around the true value where deviations are ...
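The kind of simulation described can be sketched in a few lines: generate bivariate normal data with a known correlation and measure how much the sample correlation scatters at each n. The function names and the particular spread measure (the standard deviation of simulated r values) are my own simplifications:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def corr_spread(n, rho=0.4, reps=1000, seed=7):
    """Monte-Carlo spread of the sample correlation for bivariate
    normal data with true correlation rho and sample size n."""
    rng = random.Random(seed)
    rs = []
    for _ in range(reps):
        xs, ys = [], []
        for _ in range(n):
            x = rng.gauss(0, 1)
            # construct y so that corr(x, y) = rho in the population
            y = rho * x + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
            xs.append(x)
            ys.append(y)
        rs.append(corr(xs, ys))
    return statistics.pstdev(rs)
```

Comparing the spread at n = 20 with the spread at n = 250 makes the paper's point concrete: small-sample correlations wobble several times more widely around the true value.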
Among the questions that a researcher should ask when planning a study is "How large a sample do I need?" If the sample size is too small, even a well-conducted study may fail to answer its research question, may fail to detect important effects or associations, or may estimate those effects or associations too imprecisely. Similarly, if the sample size is too large, the study will be more difficult and costly, and may even lead to a loss in accuracy. Hence, optimum sample size is an essential component of any research. When the estimated sample size cannot be included in a study, post-hoc power analysis should be carried out. Approaches for estimating sample size and performing power analysis depend primarily on the study design and the main outcome measure of the study. There are distinct approaches for calculating sample size for different study designs and different outcome measures. Additionally, there are also different procedures for calculating sample size for the two approaches of drawing statistical inference from study results, i.e. the confidence interval approach and the test of significance approach. This article describes some commonly used terms, which need to be specified for a formal sample size calculation. Examples for four procedures (use of formulae, ready-made tables, nomograms, and computer software), which are conventionally used for calculating sample size, are also given.
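For the test-of-significance approach with a continuous outcome, the conventional two-group formula can be sketched as follows (the function name and defaults are my own; this is the standard normal-approximation formula, not a procedure from the article):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of
    means: n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # significance quantile
    z_b = NormalDist().inv_cdf(power)          # power quantile
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)
```

With the usual 5% two-sided alpha and 80% power, detecting a difference of half a standard deviation requires 63 subjects per group, and a full standard deviation only 16, illustrating how strongly the effect size drives the answer.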
Tushar Vijay Sakpal
Every clinical trial should be planned. This plan should include the objective of the trial, the primary and secondary end-points, the method of collecting data, the sample to be included, the sample size with scientific justification, the method of handling data, and the statistical methods and assumptions. This plan is termed the clinical trial protocol. One of the key aspects of this protocol is sample size estimation. The aim of this article is to discuss how important sample size estimation is for a clinical trial, and a...
Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R
This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...
Štefancová, Lucia; Schwarz, Jaroslav; Maenhaut, W.; Smolík, Jiří
Praha: Česká aerosolová společnost, 2008, pp. 29-30. ISBN 978-80-86186-17-7. [9th Conference of the Czech Aerosol Society. Praha (CZ), 04.12.2008] R&D Projects: GA MŠk OC 106; GA MŠk ME 941 Institutional research plan: CEZ:AV0Z40720504 Keywords: mass size distribution * urban aerosol * cascade impactor Subject RIV: CF - Physical; Theoretical Chemistry http://cas.icpf.cas.cz/download/Sbornik_VKCAS_2008.pdf
Bones Atle M; Midelfart Herman; Jørstad Tommy S
Abstract Background Choosing the appropriate sample size is an important step in the design of a microarray experiment, and recently methods have been proposed that estimate sample sizes for control of the False Discovery Rate (FDR). Many of these methods require knowledge of the distribution of effect sizes among the differentially expressed genes. If this distribution can be determined then accurate sample size requirements can be calculated. Results We present a mixture model approach to e...
Amirkhanyan, V R
This paper makes a further attempt to clarify the nature of the Euclidean behavior of the boundary in the angular size-redshift cosmological test. It is shown experimentally that this behavior can be explained by selection determined by the anisotropic morphology and anisotropic radiation of extended radio sources. A catalogue of extended radio sources with minimum flux densities of about 0.01 Jy at 1.4 GHz was compiled for conducting the test. Without assuming size evolution, agreement between experiment and calculation was obtained both in the Lambda CDM model (Omega_m = 0.27, Omega_v = 0.73) and the Friedman model (Omega = 0.1).
This article suggests how to explain a problem of small sample size when considering correlation between two Normal variables. Two techniques are shown: one based on graphs and the other on simulation. (Contains 3 figures and 1 table.)
Buffo, A.; Alopaeus, V.
The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used.
The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of the pigeon pea and to determine whether the sample size needed varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, and leaves and of shoots and the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. There was variability in the sample size between the productive traits and crop years of the pigeon pea culture. To estimate the averages of the productive traits with a 20% maximum estimation error and 95% confidence level, 70 plants are sufficient.
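The calculation implied here, where the maximum estimation error is expressed as a percentage of the mean, reduces to n = (z · CV / E)² with CV the coefficient of variation in percent. A stdlib sketch (the function name and the use of z in place of Student's t are my simplifications):

```python
import math
from statistics import NormalDist

def n_plants(cv_percent, error_percent, confidence=0.95):
    """Plants needed so the confidence-interval half-width equals
    error_percent of the mean: n = (z * CV / E)^2 (normal approx.)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return math.ceil((z * cv_percent / error_percent) ** 2)
```

For instance, a trait with a 50% coefficient of variation needs 25 plants for a 20% error bound but 97 plants to halve that error to 10%, which is why the tolerable error dominates the sample size in uniformity trials like these.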
Weinberg, M. C.
The effect of finite sample size on the kinetic law of phase transformations is considered. The case where the second phase develops by a nucleation and growth mechanism is treated under the assumption of isothermal conditions and constant and uniform nucleation rate. It is demonstrated that for spherical particle growth, a thin sample transformation formula given previously is an approximate version of a more general transformation law. The thin sample approximation is shown to be reliable when a certain dimensionless thickness is small. The latter quantity, rather than the actual sample thickness, determines when the usual law of transformation kinetics valid for bulk (large dimension) samples must be modified.
Heidel, R Eric
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power. PMID:27073717
This article presents a fairly quick, easy way to decide what size sample is needed for a survey, given a desired level of confidence and degree of accuracy. The method proposed will work on any multiple response instrument. A technical mathematical explanation is also included. (GK)
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Figueroa Rosa L
Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p < ...). Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
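An inverse power law learning curve, error(n) ≈ a·n^(−b), can be fitted very simply by ordinary least squares in log-log space. This is a deliberate simplification of the paper's weighted nonlinear fit, and the function names are mine:

```python
import math

def fit_inverse_power(ns, errors):
    """Fit error(n) = a * n**(-b) by least squares on
    log(error) = log(a) - b * log(n)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(e) for e in errors]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b = -slope
    a = math.exp(my + b * mx)
    return a, b

def predict_error(a, b, n):
    """Extrapolate the fitted curve to a larger annotation budget n."""
    return a * n ** (-b)
```

Fitting the curve on a few small pilot points and then calling `predict_error` at candidate annotation budgets is the essence of the sample size prediction described above; the weighting in the paper improves the extrapolation but does not change the model form.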
Navard, S. E.
The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because of the small amount of data available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.
The parametric uncertainties of DNBR and exit quality were calculated using a statistical sampling method based on the Wilks formula and the VIPRE-W code. The DNBR design limit and exit quality limit were then obtained by combining these with the uncertainties of the models and the DNB correlation. A comparison of the two methods shows that this approach gains more DNBR margin than the RTDP methodology developed by Westinghouse. (authors)
Hade, Erinn; Jarjoura, David; Wei, Lai
Background During the recruitment phase of a randomized breast cancer trial investigating the time to recurrence, we found evidence that the failure probabilities used at the design stage were too high. Since most of the methodological research involving sample size re-estimation has focused on normal or binary outcomes, we developed a method which preserves blinding to re-estimate sample size in our time-to-event trial. Purpose A mistakenly high estimate of the failure rate at the design stage may reduce the power unacceptably for a clinically important hazard ratio. We describe an ongoing trial and an application of a sample size re-estimation method that combines current trial data with prior trial data, or assumes a parametric model, to re-estimate failure probabilities in a blinded fashion. Methods Using our current blinded trial data and additional information from prior studies, we re-estimate the failure probabilities to be used in sample size re-calculation. We employ bootstrap resampling to quantify uncertainty in the re-estimated sample sizes. Results At the time of re-estimation, data from 278 patients was available, averaging 1.2 years of follow-up. Using either method, we estimated an increase of 0 in sample size for the hazard ratio proposed at the design stage. We show that our method of blinded sample size re-estimation preserves the Type I error rate. We show that when the initial guess of the failure probabilities is correct, the median increase in sample size is zero. Limitations Either some prior knowledge of an appropriate survival distribution shape or prior data is needed for re-estimation. Conclusions In trials where the accrual period is lengthy, blinded sample size re-estimation near the end of the planned accrual period should be considered. In our examples, when assumptions about failure probabilities and HRs are correct, the methods usually do not increase the sample size, or increase it by very little. PMID:20392786
Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques, because the results of the study can then be generalized to the target population.
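For the categorical-variable case described above, the conventional formula is n = z² · p(1−p) / e². A minimal stdlib sketch (function name and defaults are mine):

```python
import math
from statistics import NormalDist

def n_proportion(p_expected, margin, confidence=0.95):
    """Sample size for estimating a proportion to within +/- margin:
    n = z^2 * p * (1 - p) / e^2 (normal approximation)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)
```

With the most conservative guess p = 0.5 and a ±5% margin at 95% confidence, this yields the familiar figure of roughly 385 subjects; a better-informed p = 0.2 brings it down to 246, which is why the expected proportion is listed among the key inputs.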
Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
7 CFR § 201.43 (Agriculture) — Sampling in the Administration of the Act — Size of sample. The following are minimum sizes..., ryegrass, bromegrass, millet, flax, rape, or seeds of similar size. (c) One pound (454 grams) of...
7 CFR § 52.803 (Agriculture) — United States Standards for Grades of Frozen Red Tart Pitted Cherries — Sample unit size. Compliance with requirements for size and the various quality factors is based on...
7 CFR § 52.3757 (Agriculture) — Types, Styles, and Grades — Standard sample unit size. Compliance with requirements for the various quality factors except “size designation” is based on the following standard sample unit size...
Yan Guo; Shilin Zhao; Chung-I Li; Quanhu Sheng; Yu Shyr
Sample size and power determination is the first step in the experimental design of a successful study. Sample size and power calculation is required for applications for National Institutes of Health (NIH) funding. Sample size and power calculation is well established for traditional biological studies such as mouse model, genome wide association study (GWAS), and microarray studies. Recent developments in high-throughput sequencing technology have allowed RNAseq to replace microarray as the...
Silvestrov, Dmitrii S.; Teugels, Jozef L.
This paper is devoted to the investigation of limit theorems for extremes with random sample size under general dependence-independence conditions for samples and random sample size indexes. Limit theorems of weak convergence type are obtained as well as functional limit theorems for extremal processes with random sample size indexes.
Abstract Background Microarray experiments are often performed with a small number of biological replicates, resulting in low statistical power for detecting differentially expressed genes and concomitantly high false positive rates. While increasing the sample size can increase statistical power and decrease error rates, with too many samples valuable resources are not used efficiently. The issue of how many replicates are required in a typical experimental system needs to be addressed. Of particular interest is the difference in required sample sizes for similar experiments in inbred vs. outbred populations (e.g. mouse and rat vs. human). Results We hypothesize that if all other factors (assay protocol, microarray platform, data pre-processing) were equal, fewer individuals would be needed for the same statistical power using inbred animals as opposed to unrelated human subjects, as genetic effects on gene expression will be removed in the inbred populations. We apply the same normalization algorithm and estimate the variance of gene expression for a variety of cDNA data sets (humans, inbred mice and rats) comparing two conditions. Using one-sample, paired-sample or two independent-sample t-tests, we calculate the sample sizes required to detect 1.5-, 2-, and 4-fold changes in expression level as a function of false positive rate, power and the percentage of genes that have a standard deviation below a given percentile. Conclusions Factors that affect power and sample size calculations include the variability of the population, the desired detectable differences, the power to detect the differences, and an acceptable error rate. In addition, experimental design, technical variability and data pre-processing play a role in the power of the statistical tests in microarrays. We show that the number of samples required for detecting a 2-fold change with 90% probability and a p-value of 0.01 in humans is much larger than the number of samples commonly used in
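A rough per-group calculation for detecting a fold change on the log2 scale can be sketched with a normal approximation. The defaults mirror the abstract's 2-fold / 90% power / p = 0.01 scenario, but the function, its name, and the illustrative standard deviation are my own, not the authors' variance estimates:

```python
import math
from statistics import NormalDist

def arrays_per_group(fold_change, sd_log2, alpha=0.01, power=0.90):
    """Two-group arrays per condition to detect a given fold change,
    treating log2 expression as approximately normal."""
    delta = math.log2(fold_change)  # effect size on the log2 scale
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) * sd_log2 / delta) ** 2)
```

With a per-gene standard deviation of 0.7 on the log2 scale, a 2-fold change needs about 15 arrays per group under these settings, while a 1.5-fold change needs substantially more, consistent with the abstract's point that typical human studies are underpowered.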
Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes", involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved, and their implications, are clearly described, and the process is put in a form that allows systematic generalization.
Hogue, Mark; Thompson, Martha; Farfan, Eduardo; Hadlock, Dennis
Workplace air monitoring programs for sampling radioactive aerosols in nuclear facilities sometimes must rely on sampling systems to move the air to a sample filter in a safe and convenient location. These systems may consist of probes, straight tubing, bends, contractions and other components. Evaluation of these systems for potential loss of radioactive aerosols is important because significant losses can occur. However, it can be very difficult to find fully described equations to model a system manually for a single particle size and even more difficult to evaluate total system efficiency for a polydispersed particle distribution. Some software methods are available, but they may not be directly applicable to the components being evaluated and they may not be completely documented or validated per current software quality assurance requirements. This paper offers a method to model radioactive aerosol transport in sampling systems that is transparent and easily updated with the most applicable models. Calculations are shown with the R Programming Language, but the method is adaptable to other scripting languages. The method has the advantage of transparency and easy verifiability. This paper shows how a set of equations from published aerosol science models may be applied to aspiration and transport efficiency of aerosols in common air sampling system components. An example application using R calculation scripts is demonstrated. The R scripts are provided as electronic attachments. PMID:24667389
Galeone, A; Pollastri, A
The decision about the sample size is particularly difficult when we have to estimate the ratio of two means. This abstract presents a procedure for sample size determination in this situation.
Fritz, Catherine O; Morris, Peter E; Richler, Jennifer J
The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis. PMID:21823805
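As a minimal sketch of one of the effect sizes discussed above, Cohen's d for two independent groups can be computed from the mean difference and the pooled standard deviation; the scores below are hypothetical.

```python
from math import sqrt

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical scores from two conditions:
a = [5.1, 4.8, 5.6, 5.3, 4.9]
b = [4.2, 4.5, 4.1, 4.7, 4.0]
print(round(cohens_d(a, b), 2))
```

Reporting d alongside the t statistic lets readers judge practical importance independently of sample size.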
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz;
In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires that the… that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women, taking into account biological… models presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small…
40 CFR § 80.127 (2010-07-01), Sample size guidelines (Regulation of Fuels and Fuel Additives, Attest Engagements): In performing the… population; and (b) Sample size shall be determined using one of the following options: (1) Option…
L. M. Sliapniova
One of the problems facing chemists who obtain disperse systems with micro- and nanoscale particles of the disperse phase is evaluating the size of the obtained particles. Formation of a hydrated sol is one of the stages of obtaining nanopowders by the sol-gel method. We obtained a titanium dioxide hydrosol by hydrolysis of titanium tetrachloride in the presence of an organic solvent, with the purpose of producing titanium dioxide powder. It was necessary to evaluate the size of the titanium dioxide hydrosol particles, because the particle dimensions of the disperse hydrosol phase are directly related to the dispersity of the obtained powder. The particle size of the disperse phase of the titanium dioxide hydrosol was calculated with the Rayleigh equation, and the results correspond to experimental data from atomic force microscopy and X-ray crystal analysis of the powder obtained from the hydrosol. To calculate the particle size in a disperse system, the Rayleigh equation can be used if the particle size is no more than 1/10 of the wavelength of the incident light, or the Heller equation for systems with particle diameters less than the wavelength of the incident light but more than 1/10 of it. A titanium dioxide hydrosol was obtained, and the wavelength exponent in the Heller equation was calculated. The obtained value testified to the high dispersity of the system and the possibility of using the Rayleigh equation to calculate the particle size of the disperse phase. The calculated disperse-phase particle size of the titanium dioxide hydrosol corresponded to the experimental data from atomic force microscopy and X-ray crystal analysis of the powder obtained from the system.
Slavin, Robert; Smith, Dewi
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Estimates of prevalence or incidence of infection with a pathogen endemic in a fish population can be valuable information for development and evaluation of aquatic animal health management strategies. However, hundreds of unbiased samples may be required in order to accurately estimate these parameters…
Zhang, Lanju; Cui, Lu; Yang, Bo
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385
Day, S. J.; Graham, D F
Methods for determining sample size and power when comparing two groups in clinical trials are widely available. Studies comparing three or more treatments are not uncommon but are more difficult to analyse. A linear nomogram was devised to help calculate the sample size required when comparing up to five parallel groups. It may also be used retrospectively to determine the power of a study of given sample size. In two worked examples the nomogram was efficient. Although the nomogram offers o...
J.L. Geluk (Jaap); L.F.M. de Haan (Laurens)
It has been known for a long time that for bootstrapping the probability distribution of the maximum of a sample consistently, the bootstrap sample size needs to be of smaller order than the original sample size; see Jun Shao and Dongsheng Tu (1995), Ex. 3.9, p. 123. We show that the same…
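A minimal sketch of the point above: the naive bootstrap (resample size m = n) puts a large point mass on the observed maximum, while an m-out-of-n bootstrap with m of smaller order spreads the distribution out. The uniform population and the choice m ≈ sqrt(n) are illustrative assumptions, not the paper's setting.

```python
import random

random.seed(1)

# Sample of size n from a uniform(0, 1) population; the statistic of
# interest is the sample maximum.
n = 1000
sample = [random.random() for _ in range(n)]

def bootstrap_max(data, m, reps=2000):
    """Bootstrap the distribution of the maximum with resamples of size m."""
    return [max(random.choices(data, k=m)) for _ in range(reps)]

naive = bootstrap_max(sample, n)                 # m = n (inconsistent)
subsampled = bootstrap_max(sample, round(n ** 0.5))  # m = o(n)

# Fraction of bootstrap replicates that land exactly on the observed max:
frac_at_obs_max = sum(x == max(sample) for x in naive) / len(naive)
frac_at_obs_max_sub = sum(x == max(sample) for x in subsampled) / len(subsampled)
print(frac_at_obs_max, frac_at_obs_max_sub)
```

With m = n the observed maximum recurs in roughly 1 - 1/e of resamples, a point mass the true (continuous) sampling distribution of the maximum does not have; with m = o(n) that mass vanishes as n grows.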
7 CFR § 51.2341 (2010-01-01), Sample size for grade determination (United States Standards for Grades of Kiwifruit): For fruit place-packed in tray pack containers, the sample shall consist of the contents of…
Highlights: • A method is suggested for improving efficiency of MC criticality calculations. • The method optimises the number of neutrons simulated per cycle. • The optimal number of neutrons per cycle depends on allocated computing time. - Abstract: We present a methodology that improves the efficiency of conventional power iteration based Monte Carlo criticality calculations by optimising the number of neutron histories simulated per criticality cycle (the so-called neutron batch size). The chosen neutron batch size affects both the rate of convergence (in computing time) and magnitude of bias in the fission source. Setting a small neutron batch size ensures a rapid simulation of criticality cycles, allowing the fission source to converge fast to its stationary state; however, at the same time, the small neutron batch size introduces a large systematic bias in the fission source. It follows that for a given allocated computing time, there is an optimal neutron batch size that balances these two effects. We approach this problem by studying the error in the cumulative fission source, i.e. the fission source combined over all simulated cycles, as all results are commonly combined over the simulated cycles. We have deduced a simplified formula for the error in the cumulative fission source, taking into account the neutron batch size, the dominance ratio of the system, the error in the initial fission source and the allocated computing time (in the form of the total number of simulated neutron histories). Knowing how the neutron batch size affects the error in the cumulative fission source allows us to find its optimal value. We demonstrate the benefits of the method on a number of numerical test calculations
Jovani, Roger; Tella, José Luis
Parasite prevalence (the proportion of infected hosts) is a common measure used to describe parasitaemias and to unravel ecological and evolutionary factors that influence host–parasite relationships. Prevalence estimates are often based on small sample sizes because of either low abundance of the hosts or logistical problems associated with their capture or laboratory analysis. Because the accuracy of prevalence estimates is lower with small sample sizes, addressing sample size h…
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
Hallum, C. R.; Perry, C. R., Jr.
This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.
The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities. PMID:19696082
40 CFR § 761.286 (2010-07-01), Sample size and procedure for collecting a sample (Environmental Protection Agency): At each selected sampling location for…
Knofczynski, Gregory T.; Mundfrom, Daniel
When using multiple regression for prediction purposes, the issue of minimum required sample size often needs to be addressed. Using a Monte Carlo simulation, models with varying numbers of independent variables were examined and minimum sample sizes were determined for multiple scenarios at each number of independent variables. The scenarios…
Mundfrom, Daniel J.; Shaw, Dale G.; Ke, Tian Lu
There is no shortage of recommendations regarding the appropriate sample size to use when conducting a factor analysis. Suggested minimums for sample size include from 3 to 20 times the number of variables and absolute ranges from 100 to over 1,000. For the most part, there is little empirical evidence to support these recommendations. This…
7 CFR § 52.775 (2010-01-01), Sample unit size (Agricultural Marketing Service, United States Standards for Grades of Canned Red Tart Pitted Cherries): …
In the paper, notch toughness assessments of full-scale (FS) test samples form the upper bound for the toughness of sub-sized (SS) samples of structural carbon-manganese steels. The relations proposed by Schindler (2000) are in good agreement with experimental data. The empirical proportionality constant q* = 0.54 between the notch toughness of full-scale and sub-sized samples of the studied structural steels agrees well with the theoretically estimated constant q* = 0.50–0.54. More precise knowledge of the effect of sample size on the temperature dependence of notch toughness requires an analysis of the scatter in experimental data.
This article introduces the basic principle of differentiated particle size sampling technology for aerosols. This sampling technology was used in an experimental study of the aerosol particle size distributions of uranium and of radon and its daughters. Results showed that 76.4% of radon-daughter aerosol particles were smaller than 0.43 μm, and 96.3% were smaller than 1 μm. Under the specific conditions studied, 94% of uranium aerosol particles were larger than 4.7 μm, and 72% were larger than 10 μm. Based on these results, we designed new sampling equipment with a cutting size of 1 μm to collect aerosol samples, and used it in separation efficiency experiments with 241Am aerosols. Results showed that the separation efficiency for 241Am aerosols can reach 94.2%. Thus, using differentiated particle size sampling technology to collect samples of plutonium aerosols can reduce the effect of natural background aerosols during sampling. (authors)
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane
Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the smaller the number of participants needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning…
Cao, Jing; Lee, J. Jack; Alber, Susan
A challenge for implementing performance based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC) which controls the coverage rate of fixed length credible intervals over the predictive distribution of the data, the average length criterion (ALC) which controls the length of credible intervals with a fixed coverage rate, and the worst outcome criterion (WOC) which ensures the des...
Luh, Wei-Ming; Guo, Jiin-Huarng
The sample size determination is an important issue for planning research. However, limitations in size have seldom been discussed in the literature. Thus, how to allocate participants into different treatment groups to achieve the desired power is a practical issue that still needs to be addressed when one group size is fixed. The authors focused…
Clark, Philip M.
Describes three methods of sample size determination, each having its use in investigation of social science problems: Attribute method; Continuous Variable method; Galtung's Cell Size method. Statistical generalization, benefits of cell size method (ease of use, trivariate analysis and trichotyomized variables), and choice of method are…
The deep-penetration problem has been one of the important and difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive Monte Carlo method that uses the emission point as a sampling station for shielding calculations is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and may, to some degree, overcome the underestimation problem that easily occurs in deep-penetration calculations.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
This paper deals with the optimal sample sizes for a multicentre trial in which the cost-effectiveness of two treatments in terms of net monetary benefit is studied. A bivariate random-effects model, with the treatment-by-centre interaction effect being random and the main effect of centres fixed or random, is assumed to describe both costs and effects. The optimal sample sizes concern the number of centres and the number of individuals per centre in each of the treatment conditions. These numbers maximize the efficiency or power for given research costs or minimize the research costs at a desired level of efficiency or power. Information on model parameters and sampling costs are required to calculate these optimal sample sizes. In case of limited information on relevant model parameters, sample size formulas are derived for so-called maximin sample sizes which guarantee a power level at the lowest study costs. Four different maximin sample sizes are derived based on the signs of the lower bounds of two model parameters, with one case being worst compared to others. We numerically evaluate the efficiency of the worst case instead of using others. Finally, an expression is derived for calculating optimal and maximin sample sizes that yield sufficient power to test the cost-effectiveness of two treatments. PMID:25656551
Yusuf Ali Al-Hroot
This paper aims to clarify the influence of changing both the sample size and the selection of financial ratios on the accuracy of bankruptcy models for companies listed in the industrial sector of Jordan. The study sample is divided into three sub-samples of 6, 10 and 14 companies respectively; each sample is composed of bankrupt companies and solvent ones during the period from 2000 to 2013. Financial ratios were calculated and categorized into two groups. The first group includes: liquidity,…
McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)
The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.
Freudenthal, M.; Martín Suárez, E.
The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth, is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed, and compared with those obtained for the Cricetidae.
Jia, Bin; Lynn, Henry S
Background The CONSORT statement requires clinical trials to report confidence intervals, which help to assess the precision and clinical importance of the treatment effect. Conventional sample size calculations for clinical trials, however, only consider issues of statistical significance (that is, significance level and power). Method A more consistent approach is proposed whereby sample size planning also incorporates information on clinical significance as indicated by the boundaries of t...
André Carlos Silva
Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, without any moving parts, and it is capable of separating granular material in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex, and its mathematical modelling is usually empirical. The most widely used model for the hydrocyclone corrected cut size is the one proposed by Plitt. Over the years many revisions and corrections to Plitt's model have been proposed. The present paper shows a modification of the constant in Plitt's model, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation of 88.2% between experimental and calculated corrected cut size, while the correlation obtained using Plitt's model is 11.5%.
Stephens, K. S.; Dodge, H. F.
A further generalization of the family of 'two-stage' chain sampling inspection plans is developed - viz, the use of different sample sizes in the two stages. Evaluation of the operating characteristics is accomplished by the Markov chain approach of the earlier work, modified to account for the different sample sizes. Markov chains for a number of plans are illustrated and several algebraic solutions are developed. Since these plans involve a variable amount of sampling, an evaluation of the average sampling number (ASN) is developed. A number of OC curves and ASN curves are presented. Some comparisons with plans having only one sample size are presented and indicate that improved discrimination is achieved by the two-sample-size plans.
Porter, J. N.; Clarke, A. D.; Ferry, G.; Pueschel, R. F.
Representative measurement of aerosol from aircraft-aspirated systems requires special efforts in order to maintain near isokinetic sampling conditions, estimate aerosol losses in the sample system, and obtain a measurement of sufficient duration to be statistically significant for all sizes of interest. This last point is especially critical for aircraft measurements which typically require fast response times while sampling in clean remote regions. This paper presents size-resolved tests, intercomparisons, and analysis of aerosol inlet performance as determined by a custom laser optical particle counter. Measurements discussed here took place during the Global Backscatter Experiment (1988-1989) and the Central Pacific Atmospheric Chemistry Experiment (1988). System configurations are discussed including (1) nozzle design and performance, (2) system transmission efficiency, (3) nonadiabatic effects in the sample line and its effect on the sample-line relative humidity, and (4) the use and calibration of a virtual impactor.
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
Pfeiffer, Caitlin N; Firestone, Simon M; Campbell, Angus J D; Larsen, John W A; Stevenson, Mark A
The movement of animals between farms contributes to infectious disease spread in production animal populations, and is increasingly investigated with social network analysis methods. Tangible outcomes of this work include the identification of high-risk premises for targeting surveillance or control programs. However, knowledge of the effect of sampling or incomplete network enumeration on these studies is limited. In this study, a simulation algorithm is presented that provides an estimate of required sampling proportions based on predicted network size, density and degree value distribution. The algorithm may be applied a priori to ensure network analyses based on sampled or incomplete data provide population estimates of known precision. Results demonstrate that, for network degree measures, sample size requirements vary with sampling method. The repeatability of the algorithm output under constant network and sampling criteria was found to be consistent for networks with at least 1000 nodes (in this case, farms). Where simulated networks can be constructed to closely mimic the true network in a target population, this algorithm provides a straightforward approach to determining sample size under a given sampling procedure for a network measure of interest. It can be used to tailor study designs of known precision, for investigating specific livestock movement networks and their impact on disease dissemination within populations. PMID:26276397
40 CFR § 600.208-77, Sample… (Environmental Protection Agency, Energy Policy: Fuel Economy and Carbon-Related Exhaust Emissions of Motor Vehicles; Fuel Economy Regulations for 1977 and Later Model Year Automobiles, Procedures for Calculating Fuel Economy Values.)
Muirhead, Robb J
This paper explores an approach to Bayesian sample size determination in clinical trials. The approach falls into the category of what is often called "proper Bayesian", in that it does not mix frequentist concepts with Bayesian ones. A criterion for a "successful trial" is defined in terms of a posterior probability, its probability is assessed using the marginal distribution of the data, and this probability forms the basis for choosing sample sizes. We illustrate with a standard problem in clinical trials, that of establishing superiority of a new drug over a control.
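A minimal sketch of this "proper Bayesian" idea, assuming a normal model with known SD and a conjugate normal prior on the treatment effect: a trial is a "success" if the posterior probability that the effect is positive exceeds a threshold, and the probability of success is assessed under the marginal (prior predictive) distribution of the data by Monte Carlo. All numerical settings below are illustrative assumptions.

```python
import random
from statistics import NormalDist

random.seed(2)

sd, prior_mean, prior_sd = 1.0, 0.3, 0.2   # illustrative settings

def prob_success(n, reps=20000):
    """Marginal probability that a trial of size n per arm 'succeeds',
    i.e. the posterior P(theta > 0) exceeds 0.975."""
    phi = NormalDist().cdf
    se = sd * (2 / n) ** 0.5                 # SE of the observed difference
    wins = 0
    for _ in range(reps):
        theta = random.gauss(prior_mean, prior_sd)  # draw a true effect
        diff = random.gauss(theta, se)              # simulate trial data
        # Conjugate normal-normal posterior update for theta given diff:
        post_var = 1 / (1 / prior_sd**2 + 1 / se**2)
        post_mean = post_var * (prior_mean / prior_sd**2 + diff / se**2)
        if phi(post_mean / post_var**0.5) > 0.975:
            wins += 1
    return wins / reps

p_small, p_large = prob_success(50), prob_success(400)
print(p_small, p_large)  # success probability grows with n
```

The smallest n whose success probability clears a target (say 0.8) is then the chosen sample size; unlike frequentist power, this probability plateaus below 1 because the prior puts some mass on negligible or negative effects.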
The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. ?? Copyright by the American Fisheries Society 2007.
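The bootstrap approach above can be sketched as follows: resample from a reference length distribution at increasing n until the estimate falls within 10% of the reference value in at least 80% of resamples. The Gaussian length distribution and the step size are illustrative assumptions, not the study's reference populations.

```python
import random

random.seed(3)

# Hypothetical reference length distribution (mm) for a fish population.
population = [random.gauss(250, 60) for _ in range(5000)]
true_mean = sum(population) / len(population)

def n_for_precision(pop, target, rel_err=0.10, conf=0.80, reps=500):
    """Smallest n (in steps of 25) whose bootstrapped mean length falls
    within rel_err of the reference mean in at least `conf` of resamples."""
    for n in range(25, 1001, 25):
        hits = 0
        for _ in range(reps):
            est = sum(random.choices(pop, k=n)) / n   # bootstrap resample
            if abs(est - target) <= rel_err * target:
                hits += 1
        if hits / reps >= conf:
            return n
    return None

required_n = n_for_precision(population, true_mean)
print(required_n)
```

The same loop, run with a length-frequency histogram or proportional stock density in place of the mean, reproduces the study's pattern that more detailed length statistics demand larger samples.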
A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO2-powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)
Duncanson, L.; Rourke, O.; Dubayah, R.
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 and more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation.
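The sensitivity of fitted allometric parameters to sample size can be illustrated with a quick simulation: fit a power law h = a·r^b on log scales to samples of different sizes and compare the spread of the fitted exponent. All parameter values here (a = 4.0, b = 0.6, lognormal noise) are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_allometry(n, a=4.0, b=0.6, sigma=0.2):
    """Fit log h = log a + b log r by least squares on n simulated trees
    (hypothetical parameter values; crown radius r in metres)."""
    r = rng.uniform(0.5, 6.0, n)
    h = a * r**b * np.exp(rng.normal(0.0, sigma, n))
    X = np.column_stack([np.ones(n), np.log(r)])
    coef, *_ = np.linalg.lstsq(X, np.log(h), rcond=None)
    return np.exp(coef[0]), coef[1]

def spread_of_b(n, reps=300):
    """Standard deviation of the fitted exponent across repeated fits."""
    return np.std([fit_allometry(n)[1] for _ in range(reps)])
```

The spread of the exponent shrinks roughly as 1/sqrt(n), which is why LiDAR-scale samples (tens of thousands of trees) pin down parameters that small field samples cannot.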
Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.
Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…
Socha, Alan; DeMars, Christine E.
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code, CAVRZnrc, and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor Pcel in high-energy photon and electron beams. Current dosimetry protocols base the value of Pcel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on Pcel, much lower than those previously published. The current values of Pcel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
The Surprise is a measure for consistency between posterior distributions and operates in parameter space. It can be used to analyze either the compatibility of separately analyzed posteriors from two datasets, or the posteriors from a Bayesian update. The Surprise Calculator estimates relative entropy and Surprise between two samples, assuming they are Gaussian. The software requires the R package CompQuadForm to estimate the significance of the Surprise, and rpy2 to interface R with Python.
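Under the Gaussian assumption stated above, the relative entropy that the Surprise statistic builds on has a closed form. Below is a small sketch using the standard formula for the KL divergence between multivariate normals; it is not the tool's actual code.

```python
import numpy as np

def gaussian_kl(mu1, cov1, mu2, cov2):
    """Relative entropy D(p1 || p2) between two multivariate Gaussians,
    the quantity a Surprise-style consistency check is built from
    (Gaussian approximation of the two posteriors assumed)."""
    k = len(mu1)
    cov2_inv = np.linalg.inv(cov2)
    diff = np.asarray(mu2) - np.asarray(mu1)
    term_trace = np.trace(cov2_inv @ cov1)          # tr(S2^-1 S1)
    term_mahal = diff @ cov2_inv @ diff             # Mahalanobis shift
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return 0.5 * (term_trace + term_mahal - k + logdet2 - logdet1)
```

In practice the means and covariances would be estimated from the two posterior sample sets before applying the formula; the significance test via CompQuadForm mentioned in the record is not reproduced here.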
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
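The core computation behind a fixed-kernel home-range estimate — the area of the smallest region holding 95% of the utilization distribution — can be sketched in a few lines. The bivariate-normal "animal", the bandwidth h, and the grid settings below are illustrative; bandwidth selection by LSCV, the step the paper stresses, is not reproduced.

```python
import numpy as np

def home_range_area(points, h, grid_n=120, pad=4.0, level=0.95):
    """Area of the smallest region containing `level` of the utilization
    distribution, from a fixed Gaussian kernel density estimate with
    bandwidth h evaluated on a regular grid."""
    xs = np.linspace(points[:, 0].min() - pad, points[:, 0].max() + pad, grid_n)
    ys = np.linspace(points[:, 1].min() - pad, points[:, 1].max() + pad, grid_n)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    # fixed-kernel density estimate at every grid cell
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2.0 * h * h)).sum(1)
    dens /= dens.sum()
    # accumulate highest-density cells until `level` of the UD is covered
    order = np.argsort(dens)[::-1]
    n_cells = np.searchsorted(np.cumsum(dens[order]), level) + 1
    cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return n_cells * cell_area
```

Running this over simulated ranges at 10, 30, 50, … relocations is exactly the kind of experiment that shows bias and variance levelling off near 50 observations.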
Blum, P. (Inventor)
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
Bianco, D.; Andreozzi, F.; Lo Iudice, N.; Porrino, A. [Universita di Napoli Federico II, Dipartimento Scienze Fisiche, Monte S. Angelo, via Cintia, 80126 Napoli (Italy)]; Dimitrova, S. [Institute of Nuclear Research and Nuclear Energy, Sofia (Bulgaria)]
An importance sampling iterative algorithm, developed a few years ago for generating exact eigensolutions of large matrices, is upgraded so as to allow large-scale shell model calculations in the uncoupled m-scheme. By exploiting the sparsity properties of the Hamiltonian matrix and effectively projecting out the good angular momentum, the new importance sampling makes it possible to drastically reduce the sizes of the matrices while keeping full control of the accuracy of the eigensolutions. Illustrative numerical examples are presented.
Highlights: • Improvement of neutronic predictions through reactivity worth calculations. • The MCNPX code with the nuclear data library ENDF/B-VII has been used for the calculations. • The results show good agreement, within a relative error of less than ±8.2%. - Abstract: Improving neutronic prediction is a very important step in designing advanced reactors and reactor fuel. There are three main critical reactor facilities at CEA Cadarache: EOLE, MINERVE and MASURCA. The MINERVE reactor is used within the framework of what is known as the OSMOSE project. The OSMOSE program aims at improving neutronic predictions of advanced nuclear fuels through measurements in the MINERVE reactor on samples containing separated actinides. In the present work, the reactivity worth of the OSMOSE samples has been calculated using the Monte Carlo N-Particle eXtended code (MCNPX) with the most recent nuclear cross-section data library, ENDF/B-VII. The calculations are applied to the three core configurations R1-UO2, R2-UO2 and R1-MOX. The present work is performed to improve and/or measure the degree of validity of the previously obtained results of the REBUS and DRAGON codes. Our study is also extended to include the experimental results for the sake of comparison. The comparison between the previously calculated values using DRAGON and the present results for the effective multiplication factor keff shows a deviation of less than ±0.3% for the three core configurations. Furthermore, the reactivity worth results of the present work show good agreement with the experimental results, within a relative error of less than ±8.2%.
Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan
…biochemical markers in clinical cardioprotection trials and how scan day affects sample size. METHODS AND RESULTS: Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100,000 clinical trials were simulated for calculation of the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI, compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak), in order to reach a power of 90%. If the average CMR scan day between treatment and control arms differed by 1 day, the sample size needs to be increased by 54% (77 vs 50) to avoid…
João Ricardo Vasconcellos Gama
ABSTRACT: The aim of this study was to determine the optimum plot size as well as the appropriate sample size in order to provide an accurate sampling of natural regeneration surveys in high floodplain forests, low floodplain forests and floodplain forests without stratification in the Amazonian estuary. Data were obtained at Exportadora de Madeira do Pará Ltda. – EMAPA forestlands, located in Afuá County, State of Pará. Based on the results, the following plot sizes were recommended: 70 m² for SC1 (0.3 m ≤ h < 1.5 m), 80 m² for SC2 (h ≥ 1.5 m to DAP < 5.0 cm), 90 m² for SC3 (5.0 cm ≤ DAP < 15.0 cm) and 70 m² for ASP (h ≥ 0.3 m to DAP < 15.0 cm). Considering these optimum plot sizes, it is possible to obtain a representative sampling of the floristic composition when using 19 sub-plots in high floodplain, 14 sub-plots in low floodplain, and 19 sub-plots in the forest without stratification to survey the species of SC1 and the species of the whole sampled population (ASP), while 39 sub-plots are needed for sampling the natural regeneration species in SC2 and SC3.
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for the best-estimate calculations that have been replacing conservative model calculations as computational power increases. Propagating uncertainty in a Monte Carlo simulation by sampling the input parameters has only recently become practical because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for the 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The work demonstrates the capacity of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input. In particular, it was shown that during burnup the variance obtained when all parameter uncertainties are considered together is equivalent to the sum of the variances obtained when the parameter uncertainties are sampled separately.
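The Wilks-formula sample sizes mentioned above follow directly from order statistics. The functions below reproduce the standard first-order results: 59 runs for a one-sided 95%/95% bound and 93 runs for the two-sided 95%/95% tolerance interval used in the study.

```python
def wilks_n_one_sided(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds at least
    `coverage` of the population with probability `confidence`:
    1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

def wilks_n_two_sided(coverage=0.95, confidence=0.95):
    """First-order two-sided tolerance interval (sample min, sample max):
    smallest n with 1 - g**n - n*(1-g)*g**(n-1) >= confidence."""
    g = coverage
    n = 2
    while 1.0 - g**n - n * (1.0 - g) * g ** (n - 1) < confidence:
        n += 1
    return n
```

So a 95%/95% two-sided analysis needs 93 code runs regardless of how many uncertain input parameters are sampled together, which is what makes the approach affordable for burnup calculations.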
Total uncertainty budget evaluation of a determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared.
In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determining the lower limit of representative sample size that guarantees satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, No. OI-173011, No. TR-37002 and No. III-43007]
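The kind of calculation behind "how many trees so that the sample mean represents the population with >95% confidence" is typically the classic n = (z·CV/E)² rule. A sketch follows; the coefficient-of-variation and allowed-error values are made up, and the study's actual per-characteristic variability is not reproduced.

```python
from math import ceil

def n_for_mean(cv_percent, error_percent, z=1.96):
    """Sample size so the sample mean lies within error_percent of the
    true mean with ~95% confidence (z = 1.96), given the coefficient of
    variation in percent: n = (z * CV / E)**2, rounded up."""
    return ceil((z * cv_percent / error_percent) ** 2)
```

In a multi-characteristic study like this one, the rule would be applied per characteristic (here, per terpene) and the largest resulting n taken as the representative sample size.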
Objective: Choosing the most efficient statistical test is one of the essential problems of statistics. Asymptotic relative efficiency is a notion which enables the quantitative comparison, in large samples, of two different tests used for testing the same statistical hypothesis. The notion of the asymptotic efficiency of tests is more complicated than that of the asymptotic efficiency of estimates. This paper discusses the effect of sample size on the expected values and variances of non-parametric tests for two independent samples and determines the most effective test for different sample sizes using the Fraser efficiency value. Material and Methods: Since calculating the power value when comparing tests is impractical most of the time, using the asymptotic relative efficiency value is favorable. Asymptotic relative efficiency is an indispensable technique for comparing and ordering statistical tests in large samples. It is especially useful in nonparametric statistics, where there exist numerous heuristic tests such as the linear rank tests. In this study, the sample size is taken as 2 ≤ n ≤ 50. Results: In both balanced and unbalanced cases, it is found that, as the sample size increases, the expected values and variances of all the tests discussed in this paper increase as well. Additionally, considering the Fraser efficiency, the Mann-Whitney U test is found to be the most efficient of the non-parametric tests used for comparing two independent samples, regardless of their sizes. Conclusion: According to Fraser efficiency, the Mann-Whitney U test is the most efficient test.
Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision and a 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations. It can also be applied to reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
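The formula behind such a nomogram is usually Buderer's: the number of diseased subjects needed to estimate sensitivity to a given absolute precision, inflated by the disease prevalence to get the total n. A sketch with the conventional z = 1.96 for 95% confidence; note that the 90%/99% multipliers quoted above are simply (1.645/1.96)² ≈ 0.70 and (2.576/1.96)² ≈ 1.73.

```python
from math import ceil

def n_for_sensitivity(sens, precision, prevalence, z=1.96):
    """Buderer-style total sample size for estimating sensitivity with
    the given absolute precision: only diseased subjects inform
    sensitivity, so the diseased-subgroup size is divided by prevalence.
    (For specificity, use 1 - prevalence instead.)"""
    n_diseased = z**2 * sens * (1.0 - sens) / precision**2
    return ceil(n_diseased / prevalence)
```

For example, anticipating 90% sensitivity with ±5% precision at 20% prevalence requires around 692 subjects in total under this formula.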
King, A.M.; Jones, A.M. [IMC Technical Services Ltd., Burton-on-Trent (United Kingdom); Dorling, S.R. [University of East Angila (United Kingdom); Merefield, J.R.; Stone, I.M. [Exeter Univ. (United Kingdom); Hall, K.; Garner, G.V.; Hall, P.A. [Hall Analytical Labs., Ltd. (United Kingdom); Stokes, B. [CRE Group Ltd. (United Kingdom)
This report summarises the findings of a study investigating the origin of particulate matter by analysis of the size distribution and composition of particulates in rural, semi-rural and urban areas of the UK. Details are given of the sampling locations; the sampling; monitoring, and inorganic and organic analyses; the review of archive material. The analysis carried out at St Margaret's/Stoke Ferry, comparisons of data with other locations, and the composition of ambient airborne matter are discussed, and recommendations are given. Results of PM2.5/PM10 samples collected at St Margaret's and Stoke Ferry in 1998, and back trajectories for five sites are considered in appendices.
Nyamundanda, Gift; Gormley, Isobel Claire; Fan, Yue; Gallagher, William M; Brennan, Lorraine
Background: Determining sample sizes for metabolomic experiments is important but, due to the complexity of these experiments, there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, currently existing sample size estimation approaches which rely on pilot data cannot be applied. Results: In this article, an analysis-based approach called MetSizeR is developed to estimate sample size for metabolomic experime...
Budaev, Dr. Sergey V.
This manuscript is the author's response to: "Dochtermann, N.A. & Jenkins, S.H. Multivariate methods and small sample sizes, Ethology, 117, 95-101." and accompanies this paper: "Budaev, S. Using principal components and factor analysis in animal behaviour research: Caveats and guidelines. Ethology, 116, 472-480"
Luh, Wei-Ming; Guo, Jiin-Huarng
This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
Lawson, Chris A.; Fisher, Anna V.
Developmental studies have provided mixed evidence with regard to the question of whether children consider sample size and sample diversity in their inductive generalizations. Results from four experiments with 105 undergraduates, 105 school-age children (M = 7.2 years), and 105 preschoolers (M = 4.9 years) showed that preschoolers made a higher…
Análise do emprego do cálculo amostral e do erro do método em pesquisas científicas publicadas na literatura ortodôntica nacional e internacional Analysis of the use of sample size calculation and error of method in researches published in Brazilian and international orthodontic journals
INTRODUCTION: Adequate sizing of the study sample and appropriate analysis of the error of the method are important steps in validating the data obtained in a given scientific study, in addition to the ethical and economic issues. OBJECTIVE: This investigation aims to evaluate, quantitatively, how often researchers in orthodontic science have employed sample size calculation and analysis of the error of the method in research published in Brazil and in the United States. METHODS: Two important journals, according to Capes (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior), were analyzed: the Revista Dental Press de Ortodontia e Ortopedia Facial (Dental Press) and the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO). Only articles published between 2005 and 2008 were analyzed. RESULTS: Most studies published in both journals employ some form of analysis of the error of the method, where this methodology can be applied. However, only a very small number of the articles published in these journals present any description of how the studied samples were sized. This proportion, already small (21.1%) in the journal edited in the United States (AJO-DO), is significantly smaller (p = 0.008) in the journal edited in Brazil (Dental Press) (3.9%). CONCLUSION: Researchers and the editorial boards of both journals should devote greater attention to examining the errors inherent in the absence of such analyses in scientific research, especially the errors inherent in inadequate sample sizing.
Bourdakis, Eleftherios; Kazanci, Ongun Berk; Olesen, Bjarne W.
The aim of this study was, by using building simulation software, to show that a radiant cooling system should not be sized based on the maximum cooling load but at a lower value. For that reason six radiant cooling models were simulated with two control principles using 100%, 70% and 50% of t...
HE Gui-chun; NI Wen
Based on various ultrasonic loss mechanisms, a formula for the cumulative mass percentage of minerals with different particle sizes was derived, with which the particle size distribution was integrated into an ultrasonic attenuation model. Correlations between ultrasonic attenuation, pulp density, and particle size were then obtained. The derived model was combined with experiments and analysis of the experimental data to determine the inverse model relating the ultrasonic attenuation coefficient to the size distribution. Finally, an optimization method for parameter inversion, a genetic algorithm, was applied to recover the particle size distribution. The results of the inverse calculation show that the measurement precision is high.
Alberto Cargnelutti Filho
ABSTRACT: In eucalyptus crops, it is important to determine the number of plants that need to be evaluated for a reliable inference of growth. The aim of this study was to determine the sample size needed to estimate the average trunk diameter at breast height and plant height of inter-specific eucalyptus hybrids. In 6,694 plants of twelve inter-specific hybrids, trunk diameter at breast height at three (DBH3) and seven years (DBH7) and tree height at seven years (H7) of age were evaluated. The statistics minimum, maximum, mean, variance, standard deviation, standard error, and coefficient of variation were calculated, and the hypothesis of variance homogeneity was tested. The sample size was determined by resampling with replacement, using 10,000 resamples. There was an increase in the required sample size from DBH3 to H7 and DBH7. A sample size of 16, 59 and 31 plants is adequate to estimate the DBH3, DBH7 and H7 means, respectively, of inter-specific hybrids of eucalyptus, with an amplitude of the 95% confidence interval equal to 20% of the estimated mean.
Hu, Fan; Schucany, William R.; Ahn, Chul
We propose a sample size calculation approach for the estimation of sensitivity and specificity of diagnostic tests with multiple observations per subjects. Many diagnostic tests such as diagnostic imaging or periodontal tests are characterized by the presence of multiple observations for each subject. The number of observations frequently varies among subjects in diagnostic imaging experiments or periodontal studies. Nonparametric statistical methods for the analysis of clustered binary data...
Allen, Carlton C.
The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by
Zhang, Xin; Zuckerman, Daniel M
To quantify the progress in development of algorithms and forcefields used in molecular simulations, a method for the assessment of sampling quality is needed. We propose a general method to assess sampling quality through the estimation of the number of independent samples obtained from molecular simulations. This method is applicable to both dynamic and nondynamic methods and utilizes the variance in the populations of physical states to determine the effective sample size (ESS). We test the correctness and robustness of our procedure in a variety of systems: a two-state toy model, all-atom butane, coarse-grained calmodulin, all-atom dileucine and Met-enkephalin. We also introduce an automated procedure to obtain approximate physical states from dynamic trajectories; this procedure allows for sample-size estimation for systems for which physical states are not known in advance.
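A toy version of the variance-of-state-populations idea: chop a trajectory into segments, compare the across-segment variance of a state's population with the binomial variance expected from independent sampling, and read off an effective sample size. The segment count and the binary-state simplification are ours, not the paper's.

```python
import numpy as np

def ess_from_populations(state_traj, n_segments=10):
    """Effective sample size sketch for a binary state trajectory.

    The observed variance of the state-1 fraction across segments is
    compared with the binomial variance p(1-p)/n_eff expected from
    n_eff independent draws per segment."""
    segs = np.array_split(np.asarray(state_traj), n_segments)
    p_hat = np.array([seg.mean() for seg in segs])
    p = p_hat.mean()
    var_obs = p_hat.var(ddof=1)
    # invert p(1-p)/n_eff = var_obs for the per-segment ESS
    n_eff_per_seg = p * (1.0 - p) / var_obs
    return n_eff_per_seg * n_segments
```

For truly independent samples the estimate comes out near the actual sample count; for a correlated molecular-dynamics trajectory it would come out much smaller, which is the diagnostic the paper exploits.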
7 CFR § 51.2838 (2010-01-01), United States Standards for Grades of Onions (Creole Types) — Samples for Grade and Size Determination: "... or Jumbo size or larger the package shall be the sample. When individual packages contain less than..."
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
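A characterization-phase calculation of the kind described above can be sketched with the standard hotspot-detection formula: the number of randomly placed samples needed so that, with a given confidence, at least one sample lands on (and detects) a hotspot covering a known fraction of the decision area. This is an illustrative textbook formula, not the report's exact hotspot/CJR methodology.

```python
from math import ceil, log

def n_random_samples(frac_hotspot, confidence=0.95, fnr=0.0):
    """Number of random samples so that P(at least one detected hit)
    >= confidence for a hotspot covering frac_hotspot of the area.
    A nonzero false negative rate (FNR) deflates the per-sample hit
    probability: p_hit = frac_hotspot * (1 - FNR)."""
    p_hit = frac_hotspot * (1.0 - fnr)
    return ceil(log(1.0 - confidence) / log(1.0 - p_hit))
```

The FNR argument shows why the report stresses false negatives: halving the per-sample detection probability roughly doubles the required number of samples.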
Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.
This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1 g cm⁻³). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than based on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.
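As an illustration of the AMAD simplification described above, the Stokes settling velocity of a unit-density sphere — the quantity that drives gravitational deposition losses in horizontal transport lines — can be computed directly. The constants (air viscosity 1.81e-5 Pa·s, mean free path 0.066 µm) are Hinds-style textbook values; this is our sketch, not the report's R script.

```python
from math import exp

def settling_velocity(d_um, rho_p=1000.0, eta=1.81e-5, mfp_um=0.066):
    """Terminal (Stokes) settling velocity in m/s for a sphere of
    aerodynamic diameter d_um (micrometres) with unit density
    (1 g cm^-3), including the Cunningham slip correction."""
    d = d_um * 1e-6                       # diameter in metres
    # Cunningham slip correction (Hinds-style form, all particle sizes)
    cc = 1.0 + (mfp_um / d_um) * (2.34 + 1.05 * exp(-0.39 * d_um / mfp_um))
    g = 9.81                              # gravitational acceleration, m/s^2
    return rho_p * d * d * g * cc / (18.0 * eta)
```

A 10 µm AMAD particle settles at roughly 3 mm/s under these assumptions, which is why tubing orientation and diameter matter so much for the larger end of the activity distribution.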
Herrmann, Nira; Szatrowski, Ted H.
Brown, Cohen and Strawderman propose curtailed procedures for the $t$-test and Hotelling's $T^2$. In this paper we present the exact forms of these procedures and examine the expected sample size savings under the null hypothesis. The sample size savings can be bounded by a constant which is independent of the sample size. Tables are given for the expected sample size savings and maximum sample size saving under the null hypothesis for a range of significance levels $(\alpha)$, dimensions $(p...
Highlights: ► Dimensional changes in irradiated anisotropic polycrystalline GR-280 graphite. ► We propose the model of anisotropic domains changing their shape under irradiation. ► Disorientation of domain structure explains observed dimensional changes. ► Macro-graphite deformation is related to shape-changes in finite size samples. Abstract: Dimensional changes in irradiated anisotropic polycrystalline GR-280 graphite samples as measured in the parallel and perpendicular directions of extrusion revealed a mismatch between volume changes measured directly and those calculated using the generally accepted methodology based on length change measurements only. To explain this observation a model is proposed based on polycrystalline substructural elements – domains. Those domains are anisotropic, have different amplitudes of shape-changes with respect to the sample as a whole and are randomly orientated relative to the sample axes of symmetry. This domain model can explain the mismatch observed in experimental data. It is shown that the disoriented domain structure leads to the development of irradiation-induced stresses and to the dependence of the dimensional changes on the sizes of graphite samples chosen for the irradiation experiment. The authors derive the relationship between shape-changes in the finite size samples and the actual shape-changes observable on the macro-scale in irradiated graphite.
Hitt, Nathaniel P.; Smith, David
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4-8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and type-I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of 8 fish could detect an increase of ∼ 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of ∼ 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2 this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of ∼ 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated by increased precision of composites for estimating mean
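The parametric bootstrap described in this study can be sketched as follows. The coefficient of variation and the z critical value are simplifying assumptions (hypothetical inputs); the authors fitted an empirical mean-to-variance relationship and a t-based test would be slightly more conservative at n = 8:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(42)

def power_gamma(true_mean, threshold, n_fish, cv=0.5, alpha=0.05, n_boot=2000):
    """Probability that a one-sided test on n_fish tissue samples detects a
    true mean Se concentration above `threshold`, modelling Se as gamma
    distributed with the given mean and coefficient of variation."""
    shape = 1.0 / cv ** 2          # gamma shape implied by the CV
    scale = true_mean / shape      # so that shape * scale = true_mean
    zcrit = NormalDist().inv_cdf(1 - alpha)
    hits = 0
    for _ in range(n_boot):
        x = [random.gammavariate(shape, scale) for _ in range(n_fish)]
        stat = (mean(x) - threshold) / (stdev(x) / sqrt(n_fish))
        if stat > zcrit:
            hits += 1
    return hits / n_boot
```

Running this for true means stepped above each management threshold traces out the power curves the paper reports, including the loss of power at higher thresholds when the CV (and hence the variance) grows with the mean.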
Singh, Bachu Narain; Foreman, A. J. E.
In order to study the effect of grain size on void formation during high-energy electron irradiations, the steady-state point defect concentration and vacancy supersaturation profiles have been calculated for three-dimensional spherical grains up to three microns in size. In the calculations of vacancy supersaturation as a function of grain size, the effects of internal sink density and the dislocation preference for interstitial attraction have been included. The computations show that the level of vacancy supersaturation achieved in a grain decreases with decreasing grain size. The grain size...
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas a precision of ±25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences of Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits
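The minimum-sample-size computation from an observed MSE can be approximated with the standard normal-approximation formula for comparing two level means; a textbook sketch, not necessarily the exact procedure used in the paper:

```python
from math import ceil
from statistics import NormalDist

def min_n_per_level(mse, delta, alpha=0.05, power=0.8):
    """Approximate minimum counts per factor level to detect a difference
    `delta` between two level means, given a within-level variance MSE:
    n = 2 * (z_{1-alpha} + z_{power})^2 * MSE / delta^2 (one-sided,
    normal approximation)."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha) + z(power)) ** 2 * mse / delta ** 2
    return ceil(n)
```

Plugging in the reported MSE values shows how strongly the answer depends on the detectable difference one specifies, which is the paper's central point about order-of-magnitude swings in required effort.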
... 7 Agriculture 2 2010-01-01 2010-01-01 false Samples for grade and size determination. 51.3200... Grade and Size Determination § 51.3200 Samples for grade and size determination. Individual samples.... When individual packages contain 20 pounds or more and the onions are packed for Large or Jumbo size...
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Standard wipe sample method and size... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural...
Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance for communicating the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
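A minimal sketch of the two effect-size computations the primer contrasts: Cohen's d for between-subjects designs (pooled SD) and d_z for within-subjects designs, where the SD of the difference scores absorbs the correlation between measures:

```python
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d for two independent groups: mean difference over the
    pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = (((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
          / (nx + ny - 2)) ** 0.5
    return (mean(x) - mean(y)) / sp

def cohens_dz(x, y):
    """Cohen's d_z for paired data: mean of the difference scores over
    their SD, which folds in the correlation between the two measures."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / stdev(diffs)
```

For the same raw scores, d and d_z can differ substantially when the paired measures are highly correlated, which is exactly why the article argues the within-subjects variant is the better summary for intra-individual effects.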
Tong, Tiejun; Zhao, Hongyu
One major goal in microarray studies is to identify genes having different expression levels across different classes/conditions. In order to achieve this goal, a study needs to have an adequate sample size to ensure the desired power. Due to the importance of this topic, a number of approaches to sample size calculation have been developed. However, due to the cost and/or experimental difficulties in obtaining sufficient biological materials, it might be difficult to attain the required samp...
Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten;
/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced...
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.690 Section..., California, and Arizona) Sample for Grade Or Size Determination § 51.690 Sample for grade or size determination. Each sample shall consist of 50 oranges. When individual packages contain at least 50...
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.1406..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans....
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.629 Section..., California, and Arizona) Sample for Grade Or Size Determination § 51.629 Sample for grade or size determination. Each sample shall consist of 33 grapefruit. When individual packages contain at least...
... 7 Agriculture 2 2010-01-01 2010-01-01 false Samples for grade and size determination. 51.1548..., AND STANDARDS) United States Standards for Grades of Potatoes 1 Samples for Grade and Size Determination § 51.1548 Samples for grade and size determination. Individual samples shall consist of at...
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
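The two-sample case described above has a standard closed form under the normal approximation; a sketch (the exact t-based sample size runs one or two subjects larger per group at small n):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Sample size per group for a two-sided two-sample t test, normal
    approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2,
    where delta is the scientifically important mean difference."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)
```

The familiar benchmark drops out directly: detecting a one-SD difference with 80% power at α = 0.05 needs about 16 subjects per group, and halving the detectable difference roughly quadruples the requirement.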
A method is proposed for assessing the size distribution of the radioactive particles directly from the alpha spectrum of a dust sample. The residual range distribution of alpha particles emerging from a sphere containing a monoenergetic alpha emitter is simply a quadratic function of the diameter of the sphere. The residual range distribution from a typical dust particle closely approximates that of a sphere of the same mass. For mixtures of various size particles of similar density the (multiparticle) residual range distribution can thus readily be calculated for each of the alpha emitters contained in the particles. Measurement of the composite residual range distribution can be made in a vacuum alpha spectrometer provided the dust sample has no more than a monolayer of particles. The measured energy distribution is particularly sensitive to upper particle size distributions in the diameter region of 4 μm to 20 μm for particles of 5 g/cm3 density, i.e. 2 to 10 mg/cm2. For dust particles containing 212Po or known ratios of alpha emitters a measured alpha spectrum can be unraveled to recover the underlying particle size distribution. Uncertainty in the size distribution has been listed as deserving research priority in the overall radiation protection program of the mineral sands industry. The proposed method has the potential of reducing this uncertainty, thus permitting more effective radiation protection control.
Arthur, Steve M.; Schwartz, Charles C.
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x̄ = 224) for radiotracking data and 16-130 km2 (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy
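The minimum convex polygon estimate at the center of this study is straightforward to compute: take the convex hull of the location fixes and its shoelace area. A self-contained sketch, assuming planar coordinates (e.g. UTM):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Shoelace area of the minimum convex polygon around the fixes."""
    h = convex_hull(points)
    return 0.5 * abs(sum(h[i][0] * h[(i + 1) % len(h)][1]
                         - h[(i + 1) % len(h)][0] * h[i][1]
                         for i in range(len(h))))
```

Re-running `mcp_area` on random subsets of the fixes reproduces the kind of sample-size curve the authors examined: estimated area grows and then stabilizes as locations are added, which is why small radiotracking samples underestimate range size.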
Coles, Henry C.; Qin, Yong; Price, Phillip N.
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Calculations II... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. II Appendix II to Part 600—Sample Fuel Economy Calculations (a) This sample fuel economy calculation is applicable...
... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the...
Valsson, Omar; Parrinello, Michele
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling which is based on the addition of an external bias which helps overcoming the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
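The functional these two abstracts refer to can be written out explicitly. This is a reconstruction following the published variational principle and should be checked against the original (F(s): free energy over the collective variables s, β: inverse temperature, p(s): a chosen target distribution):

```latex
\Omega[V] \;=\; \frac{1}{\beta}\,
  \log \frac{\int \mathrm{d}s\; e^{-\beta\left[F(s)+V(s)\right]}}
            {\int \mathrm{d}s\; e^{-\beta F(s)}}
  \;+\; \int \mathrm{d}s\; p(s)\,V(s)
```

At the minimizer, V(s) = −F(s) − (1/β) log p(s) up to an additive constant, so the free energy surface is read off directly from the optimal bias, and the biased simulation samples s according to the chosen p(s).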
Beaujean, A. Alexander
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
White, E.; Roy, D. P.
In the last several hundred years agriculture has caused significant human induced Land Cover Land Use Change (LCLUC) with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation in buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected guided by a global map of agricultural yield and literature review and were selected to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries and the temporal changes in field size quantified and their causes discussed.
Hong Tang; Xiaogang Sun; Guibin Yuan
In the total light scattering particle sizing technique, the relationship among the Sauter mean diameter D32, the mean extinction efficiency Q, and the particle size distribution function is studied in order to invert for the mean diameter and particle size distribution simply. We propose a method that utilizes the mean extinction efficiency ratio at only two selected wavelengths to solve for D32 and then to invert for the particle size distribution associated with Q and D32. Numerical simulation results show that the particle size distribution is inverted accurately with this method, and the number of wavelengths used is reduced to the greatest extent in the measurement range. The calculation method has the advantages of simplicity and rapidness.
Hui, D.; Luo, Y.; Jackson, R. B.
A power function, Y = Y0·M^β, can be used to describe the relationship of physiological variables with body size over a wide range of scales, typically many orders of magnitude. One of the key issues in the renewed power law debate is whether the allometric scaling exponent β equals 3/4 or 2/3. The analysis could be remarkably affected by sampling size, measurement error, and analysis methods, but these effects have not been explored systematically. We investigated the influences of these three factors based on a data set of 626 pairs of base metabolic rate and mass in mammals with the calculated b = 0.711. Influence of sampling error was tested by re-sampling with different sample sizes using a Monte Carlo approach. Results showed that the estimated parameter b varied considerably from sample to sample. For example, when the sample size was n=63, b varied from 0.582 to 0.776. Even though the original data set did not support either β=3/4 or β=2/3, we found that 39.0% of the samples supported β=2/3, 35.4% of the samples supported β=3/4. Influence of measurement error on parameter estimations was also tested using Bayesian theory. Virtual data sets were created using the mass in the above-mentioned data set, with given parameters α and β (β=2/3 or β=3/4) and certain measurement error in base metabolic rate and/or mass. Results showed that as measurement error increased, more estimated b values were found to be significantly different from the parameter β. When measurement error (i.e., standard deviation) was 20% and 40% of the measured mass and base metabolic rate, 15.4% and 14.6% of the virtual data sets were found to be significantly different from the parameter β=3/4 and β=2/3, respectively. Influence of different analysis methods on parameter estimations was also demonstrated using the original data set and the pros and cons of these methods were further discussed. We urged cautions in interpreting the power law analysis, especially from a small data sample, and
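The subsampling experiment can be mimicked with ordinary least squares on log-log data. The synthetic data, seed, and sample sizes below are illustrative placeholders, not the paper's 626-pair data set:

```python
import math
import random

random.seed(7)

def fit_loglog(mass, rate):
    """OLS slope of log(rate) on log(mass): the allometric exponent b."""
    lx = [math.log(m) for m in mass]
    ly = [math.log(r) for r in rate]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sxx = sum((a - mx) ** 2 for a in lx)
    return sxy / sxx

def resample_b(mass, rate, n, reps=500):
    """Distribution of the estimated exponent over random subsamples of
    size n, mirroring the paper's Monte Carlo check of sampling error."""
    out = []
    for _ in range(reps):
        pick = random.sample(range(len(mass)), n)
        out.append(fit_loglog([mass[i] for i in pick],
                              [rate[i] for i in pick]))
    return out
```

On noisy data the spread of `resample_b` at small n makes the paper's point directly: a subsample can easily "support" either 2/3 or 3/4 even when the full data set supports neither.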
Kuhnle, R. A.; Wren, D. G.; Chambers, J. P.
Collection of samples of suspended sediment transported by streams and rivers is difficult and expensive. Emerging technologies, such as acoustic backscatter, have promise to decrease costs and allow more thorough sampling of transported sediment in streams and rivers. Acoustic backscatter information may be used to calculate the concentration of suspended sand-sized sediment given the vertical distribution of sediment size. Therefore, procedures to accurately compute suspended sediment size distributions from easily obtained river data are badly needed. In this study, techniques to predict the size of suspended sand are examined and their application to measuring concentrations using acoustic backscatter data are explored. Three methods to predict the size of sediment in suspension using bed sediment, flow criteria, and a modified form of the Rouse equation yielded mean suspended sediment sizes that differed from means of measured data by 7 to 50 percent. When one sample near the bed was used as a reference, mean error was reduced to about 5 percent. These errors in size determination translate into errors of 7 to 156 percent in the prediction of sediment concentration using backscatter data from 1 MHz single frequency acoustics.
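The Rouse-equation approach to predicting the vertical suspended-sediment profile, anchored to one measured near-bed sample as the reference (the configuration the authors found most accurate), can be sketched as follows; the variable names and default von Kármán constant are assumptions:

```python
def rouse_concentration(z, h, a, c_a, w_s, u_star, kappa=0.4):
    """Rouse profile for suspended-sediment concentration at height z
    above the bed:

        C(z) = C_a * [ ((h - z) / z) * (a / (h - a)) ] ** P,
        P = w_s / (kappa * u_star)

    where (a, C_a) is a measured near-bed reference sample, h is flow
    depth, w_s the settling velocity and u_star the shear velocity."""
    p = w_s / (kappa * u_star)
    return c_a * (((h - z) / z) * (a / (h - a))) ** p
```

Evaluating this per size fraction (each with its own settling velocity) yields the vertical distribution of sediment size that the acoustic backscatter inversion needs, and errors in the predicted sizes propagate into concentration errors roughly as the paper describes.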
Liu, Xinzhu; Kang, Zhizhong
Resection is one of the most important problems in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. However, in some cases the observed values used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the DLT model, effectively avoiding the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
The baseline DARHT-II converter target consists of foamed tantalum within a solid-density cylindrical tamper. The baseline design has been modified by D. Ho to further optimize the integrated line density of material in the course of multiple beam pulses. LASNEX simulations of the hydrodynamic expansion of the target have been performed by D. Ho (documented elsewhere). The resulting density profiles have been used as inputs in the MCNP radiation transport code to calculate the X-ray dose and spot size assuming an incoming Gaussian electron beam with σ = 0.65 mm, and a PIC-generated beam taking into account the "swept" spot emerging from the DARHT-II kicker system. A prerequisite to these calculations is the absorption spectrum of air. In order to obtain this, a separate series of MCNP runs was performed for a set of monoenergetic photon sources, tallying the energy deposited in a volume of air. The forced collision feature was used to improve the statistics, since the photon mean free path in air is extremely long at the energies of interest. A sample input file is given below. The resulting data for the MCNP DE and DF cards are shown in the beam-pulse input files, one of which is listed below. Note that the DE and DF cards are entered in column format for easy reading
Background: Global mRNA amplification has become a widely used approach to obtain gene expression profiles from limited material. An important concern is the reliable reflection of the starting material in the results obtained. This is especially important with extremely low quantities of input RNA where stochastic effects due to template dilution may be present. This aspect remains under-documented in the literature, as quantitative measures of data reliability are most often lacking. To address this issue, we examined the sensitivity levels of each transcript in 3 different cell sample sizes. ANOVA analysis was used to estimate the overall effects of reduced input RNA in our experimental design. In order to estimate the validity of decreasing sample sizes, we examined the sensitivity levels of each transcript by applying a novel model-based method, TransCount. Results: From expression data, TransCount provided estimates of absolute transcript concentrations in each examined sample. The results from TransCount were used to calculate the Pearson correlation coefficient between transcript concentrations for different sample sizes. The correlations were clearly transcript copy number dependent. A critical level was observed where stochastic fluctuations became significant. The analysis allowed us to pinpoint the gene specific number of transcript templates that defined the limit of reliability with respect to number of cells from that particular source. In the sample amplifying from 1000 cells, transcripts expressed with at least 121 transcripts/cell were statistically reliable and for 250 cells, the limit was 1806 transcripts/cell. Above these thresholds, correlation between our data sets was at acceptable values for reliable interpretation. Conclusion: These results imply that the reliability of any amplification experiment must be validated empirically to justify that any gene exists in sufficient quantity in the input material. This
Shimobaba, Tomoyoshi; Kakue, Takashi; Oikawa, Minoru; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi
Scalar diffraction calculations such as the angular spectrum method (ASM) and Fresnel diffraction, are widely used in the research fields of optics, X-rays, electron beams, and ultrasonics. It is possible to accelerate the calculation using fast Fourier transform (FFT); unfortunately, acceleration of the calculation of non-uniform sampled planes is limited due to the property of the FFT that imposes uniform sampling. In addition, it gives rise to wasteful sampling data if we calculate a plane...
Bill, Anthony; Henderson, Sally; Penman, John
Two test items that examined high school students' beliefs of sample size for large populations using the context of opinion polls conducted prior to national and state elections were developed. A trial of the two items with 21 male and 33 female Year 9 students examined their naive understanding of sample size: over half of students chose a…
Vemury, S. K.; Stowe, L.; Jacobowitz, H.
Scan channels on the Nimbus 7 Earth Radiation Budget instrument sample radiances from underlying earth scenes at a number of incident and scattering angles. A sampling excess toward measurements at large satellite zenith angles is noted. Also, at large satellite zenith angles, the present scheme for scene selection causes many observations to be classified as cloud, resulting in higher flux averages. Thus the combined effect of sampling bias and scene identification errors is to overestimate the computed albedo. It is shown, using a process of successive thresholding, that observations with satellite zenith angles greater than 50-60 deg lead to incorrect cloud identification. Elimination of these observations has reduced the albedo from 32.2 to 28.8 percent. This reduction is very nearly the same and in the right direction as the discrepancy between the albedos derived from the scanner and the wide-field-of-view channels.
Zhu, Jianjun; Chen, Hsin-Yi
We examined the utility of inferential norming using small samples drawn from the larger "Wechsler Intelligence Scales for Children-Fourth Edition" (WISC-IV) standardization data set. The quality of the norms was estimated with multiple indexes such as polynomial curve fit, percentage of cases receiving the same score, average absolute score…
... 10 Energy 3 2010-01-01 2010-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle...
In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.
Background. The training set of an online brain-computer interface (BCI) experiment is usually small. For a small training set, there is not enough information to train the classifier thoroughly, resulting in poor classification performance during online testing. Methods. In this paper, on the basis of Z-LDA, we further calculate the classification probability of Z-LDA and then use it to select reliable samples from the testing set to enlarge the training set, aiming to mine additional information from the testing set to adjust the biased classification boundary obtained from the small training set. The proposed approach is an extension of the previous Z-LDA and is named enhanced Z-LDA (EZ-LDA). Results. We evaluated the classification performance of LDA, Z-LDA, and EZ-LDA on simulation and real BCI datasets with different sizes of training samples, and the classification results showed that EZ-LDA achieved the best classification performance. Conclusions. EZ-LDA is a promising way to deal with the small training sample size problem that commonly exists in online BCI systems.
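The self-training step at the core of EZ-LDA, keeping only high-confidence testing samples to enlarge the training set, can be sketched as follows. The 0.9 threshold and function names are illustrative assumptions; EZ-LDA itself derives the probabilities from the Z-LDA decision value:

```python
def select_reliable(probs, predicted_labels, threshold=0.9):
    """Return (sample index, pseudo-label) pairs for testing samples whose
    predicted class probability clears the confidence threshold; these are
    then appended to the training set to refine the decision boundary."""
    return [(i, predicted_labels[i])
            for i, p in enumerate(probs) if p >= threshold]

# Four testing samples: only the confident ones (p >= 0.9) are recycled.
print(select_reliable([0.95, 0.55, 0.91, 0.60], [1, 0, 1, 0]))  # [(0, 1), (2, 1)]
```

Retraining on the enlarged set is then an ordinary LDA fit; the gain comes entirely from which testing samples are trusted.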
A calculational study on the irradiation of americium samples in the Petten High Flux Reactor (HFR) has been performed. This has been done in the framework of the international EFTTRA cooperation. For several reasons the americium in the samples is supposed to be diluted with a neutron inert matrix, but the main reason is to limit the power density in the sample. The low americium nuclide density in the sample (10 weight % americium oxide) leads to a low radial dependence of the burnup. Three different calculational methods have been used to calculate the burnup in the americium sample: Two-dimensional calculations with WIMS-6, one-dimensional calculations with WIMS-6, and one-dimensional calculations with SCALE. The results of the different methods agree fairly well. It is concluded that the radiotoxicity of the americium sample can be reduced upon irradiation in our scenario. This is especially the case for the radiotoxicity between 100 and 1000 years after storage. (orig.)
The purpose of this study is to examine the number of DIF items detected by HGLM at different sample sizes. Eight data files of different sizes were composed. The population of the study is the 798,307 students who took the 2006 OKS Examination, of whom 10,727 were chosen by random sampling as the study sample. Turkish,…
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Morgera, S.; Cooper, D. B.
The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.
The efficiency of a whole-body counter for 137Cs and 40K was calculated using the MCNP5 code. The ORNL phantoms of a human body of different body sizes were applied in a sitting position in front of a detector. The aim was to investigate the dependence of efficiency on the body size (age) and the detector position with respect to the body, and to estimate the accuracy of real measurements. The calculation work presented here relates to the NaI detector available at the Serbian Whole-body Counter facility at the Vinca Institute. (authors)
Tilikin, I. N.; Shelkovenko, T. A.; Pikuz, S. A.; Hammer, D. A.
In traditional X-ray radiography, which has been used for various purposes since the discovery of X-ray radiation, the shadow image of an object under study is constructed based on the difference in the absorption of the X-ray radiation by different parts of the object. The main method that ensures a high spatial resolution is point projection X-ray radiography, i.e., radiography using a point-like, bright radiation source. For projection radiography, the size of the source is its most important characteristic, since it largely determines the spatial resolution of the method. In this work, radiation from a hot spot of X-pinches is used as a point source of soft X-ray radiation for radiography with high spatial and temporal resolution. The size of the radiation source can differ between setups and configurations. For four different high-current generators, we have calculated the sizes of the sources of soft X-ray radiation from X-ray patterns of the corresponding objects using Fresnel-Kirchhoff integrals. Our calculations show that the size of the source is in the range 0.7-2.8 μm. Determining the size of a radiation source from calculations of Fresnel-Kirchhoff integrals makes it possible to determine the size with an accuracy that exceeds the diffraction limit, which frequently restricts the resolution of standard methods.
Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon RB; Marques, Tiago A; Burnham, Kenneth P
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the
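The conventional-distance-sampling engine mentioned in point 4 rests on a detection function with certain detection at zero distance. A textbook half-normal sketch (not the Distance software's actual implementation) is:

```python
import math

def g_halfnormal(d, sigma):
    """Half-normal detection function g(d) = exp(-d^2 / (2 sigma^2)),
    with g(0) = 1: certain detection on the transect line."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

def effective_strip_halfwidth(sigma):
    """Integral of g from 0 to infinity: sigma * sqrt(pi / 2)."""
    return sigma * math.sqrt(math.pi / 2.0)

def density_per_unit_area(n_detected, total_line_length, sigma):
    """Line-transect density estimate D = n / (2 L mu), where mu is the
    effective strip half-width (a textbook sketch; sigma would normally
    be fitted to the observed perpendicular distances)."""
    return n_detected / (2.0 * total_line_length * effective_strip_halfwidth(sigma))

print(g_halfnormal(0.0, 25.0))  # 1.0
```

Multiple-covariate and mark-recapture distance sampling generalize exactly this g(d), first by letting sigma depend on covariates and then by relaxing g(0) = 1.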
Holzmann, Markus; Morales, Miguel A; Tubmann, Norm M; Ceperley, David M; Pierleoni, Carlo
Concentrating on zero temperature Quantum Monte Carlo calculations of electronic systems, we give a general description of the theory of finite size extrapolations of energies to the thermodynamic limit based on one and two-body correlation functions. We introduce new effective procedures, such as using the potential and wavefunction split-up into long and short range functions to simplify the method and we discuss how to treat backflow wavefunctions. Then we explicitly test the accuracy of our method to correct finite size errors on example hydrogen and helium many-body systems and show that the finite size bias can be drastically reduced for even small systems.
Azadi, Sam, E-mail: firstname.lastname@example.org; Foulkes, W. M. C. [Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)
We present a systematic and comprehensive study of finite-size effects in diffusion quantum Monte Carlo calculations of metals. Several previously introduced schemes for correcting finite-size errors are compared for accuracy and efficiency, and practical improvements are introduced. In particular, we test a simple but efficient method of finite-size correction based on an accurate combination of twist averaging and density functional theory. Our diffusion quantum Monte Carlo results for lithium and aluminum, as examples of metallic systems, demonstrate excellent agreement between all of the approaches considered.
Malhotra Rajeev; Indrayan A
Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily either at their convenience, or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size ...
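The normal-approximation relationship that such a nomogram encodes can be sketched directly. This follows Buderer-style formulas under assumed values for sensitivity, specificity, precision and prevalence; it is a generic sketch, not the paper's nomogram itself:

```python
import math

def n_for_sensitivity(se, precision, prevalence, z=1.96):
    """Subjects needed so the CI for sensitivity has half-width <= precision:
    first the required number of diseased cases, then inflated by prevalence."""
    n_cases = (z * z) * se * (1.0 - se) / (precision * precision)
    return math.ceil(n_cases / prevalence)

def n_for_specificity(sp, precision, prevalence, z=1.96):
    """Same idea for specificity, inflated by the non-diseased fraction."""
    n_controls = (z * z) * sp * (1.0 - sp) / (precision * precision)
    return math.ceil(n_controls / (1.0 - prevalence))

# Se = 0.90, Sp = 0.80, desired precision +/- 0.05, prevalence 10%:
print(n_for_sensitivity(0.90, 0.05, 0.10))   # 1383
print(n_for_specificity(0.80, 0.05, 0.10))   # 274
```

Note how a low prevalence dominates the sensitivity requirement: most of the 1383 subjects are recruited just to find enough diseased cases.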
Kobierska, Florian; Engeland, Kolbjorn
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Do the answers to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample. The reliability indices describe the degree to which design flood predictions can be trusted.
BACKGROUND: Animal transmission studies can provide important insights into host, viral and environmental factors affecting transmission of viruses including influenza A. The basic unit of analysis in typical animal transmission experiments is the presence or absence of transmission from an infectious animal to a susceptible animal. In studies comparing two groups (e.g. two host genetic variants, two virus strains, or two arrangements of animal cages), differences between groups are evaluated by comparing the proportion of pairs with successful transmission in each group. The present study aimed to discuss the significance and power of one-to-one trials for estimating transmissibility and for identifying differences in transmissibility. The analyses are illustrated on transmission studies of influenza A viruses in the ferret model. METHODOLOGY/PRINCIPAL FINDINGS: Employing the stochastic general epidemic model, the basic reproduction number, R₀, is derived from the final state of an epidemic and is related to the probability of successful transmission during each one-to-one trial. In studies to estimate transmissibility, we show that 3 pairs of infectious/susceptible animals cannot demonstrate a significantly higher transmissibility than R₀ = 1, even if infection occurs in all three pairs. In comparisons between two groups, at least 4 pairs of infectious/susceptible animals are required in each group to ensure high power to identify significant differences in transmissibility between the groups. CONCLUSIONS: These results inform the appropriate sample sizes for animal transmission experiments, while relating the observed proportion of infected pairs to R₀, an interpretable epidemiological measure of transmissibility. In addition to the hypothesis testing results, the wide confidence intervals of R₀ with small sample sizes also imply that an objective demonstration of difference or similarity should rest on a firmly calculated sample size.
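The claim that 3 pairs cannot demonstrate transmissibility above R₀ = 1 can be illustrated with a one-sided binomial calculation, assuming (as this sketch reads the stochastic general epidemic model) that R₀ = 1 corresponds to a per-pair transmission probability of 0.5:

```python
def p_value_all_transmit(n_pairs, p0=0.5):
    """One-sided p-value for observing transmission in all n pairs when the
    per-pair transmission probability under the null R0 = 1 is p0; with every
    pair infected the binomial tail collapses to p0 ** n."""
    return p0 ** n_pairs

print(p_value_all_transmit(3))  # 0.125: 3/3 infected pairs is not significant at 5%
print(p_value_all_transmit(5))  # 0.03125: 5/5 pairs clears the 5% level
```

So the most extreme possible outcome with three pairs still has a one-sided p-value of 0.125, which is the arithmetic behind the abstract's minimum-sample-size message.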
Huang, Qin; Chen, Gang; Yuan, Zhilong; Lan, K K Gordon
Due to the potential impact of ethnic factors on clinical outcomes, the global registration of a new treatment is challenging. China and Japan often require local trials in addition to a multiregional clinical trial (MRCT) to support the efficacy and safety claim of the treatment. The impact of ethnic factors on the treatment effect has been intensively investigated and discussed from different perspectives. However, most current methods focus on assessing the consistency or similarity of the treatment effect between different ethnic groups and are exploratory in nature. In this article, we propose a new method for the design and sample size consideration of a simultaneous global drug development program (SGDDP) using weighted z-tests. In the proposed method, to test the efficacy of a new treatment for the targeted ethnic (TE) group, a weighted test that combines the information collected from both the TE group and the nontargeted ethnic (NTE) group is used. The influence of ethnic factors and local medical practice on the treatment effect is accounted for by down-weighting the information collected from the NTE group in the combined test statistic. This design rigorously controls the overall false positive rate for the program at a given level. The sample sizes needed for the TE group in an SGDDP are then calculated for the three most commonly used efficacy endpoints: continuous, binary, and time-to-event. PMID:22946950
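A weighted z-test of the kind described can be sketched as follows. Square-root weights keep the combined statistic standard normal under the null when the two z-statistics are independent; the particular weight w_te is a design choice, not a value from the article:

```python
import math

def weighted_z(z_te, z_nte, w_te):
    """Combine the targeted-ethnic (TE) and nontargeted-ethnic (NTE)
    z-statistics. Down-weighting the NTE evidence means choosing w_te
    close to 1; sqrt weights preserve unit variance under the null."""
    return math.sqrt(w_te) * z_te + math.sqrt(1.0 - w_te) * z_nte

# Down-weighting the NTE evidence (w_te = 0.8):
z = weighted_z(1.80, 2.10, 0.8)
print(round(z, 3))  # 2.549
```

Because Var(sqrt(w)Z1 + sqrt(1-w)Z2) = w + (1-w) = 1, the combined statistic can be referred to the usual normal critical values, which is what lets the design control the overall false positive rate.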
Tam, Vincent H.; Kabbara, Samer; Yeh, Rosa F.; Leary, Robert H.
Monte Carlo simulations are increasingly used to predict pharmacokinetic variability of antimicrobials in a population. We investigated the sample size necessary to provide robust pharmacokinetic predictions. To obtain reasonably robust predictions, a nonparametric model derived from a sample population size of ≥50 appears to be necessary as the input information.
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.
Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…
de Winter, J. C .F.
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
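For concreteness, the one-sample t statistic with an extremely small sample looks like this (a generic sketch, not the article's analysis); with N = 3 there are only 2 degrees of freedom, so the two-sided 5% critical value is a hefty 4.303:

```python
import math
import statistics

def one_sample_t(xs, mu0):
    """One-sample t statistic and its degrees of freedom; the tiny df is
    exactly what makes extremely small samples so hard to work with."""
    n = len(xs)
    t = (statistics.mean(xs) - mu0) / (statistics.stdev(xs) / math.sqrt(n))
    return t, n - 1

t, df = one_sample_t([1.0, 2.0, 3.0], 0.0)
print(round(t, 3), df)  # 3.464 2
```

Even a t of 3.46 fails to reach significance here, which illustrates both sides of the debate the abstract describes: the test is computable at N = 3, but its power is very low.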
Liu, Xiaofeng Steven
This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
The problems with crack length determination by the unloading compliance method are well known for Charpy-size specimens. The final crack lengths calculated for bent specimens do not fulfil the ASTM 1820 accuracy requirements, and several investigations have therefore been performed to resolve this problem. In those studies it was assumed that the measured compliance should be corrected for various factors, but satisfying results were not obtained. In the present work the problem was attacked from the other side: the measured specimen compliance was taken as correct, and it was the calculation procedure that had to be adjusted. The investigation was carried out on the basis of experimentally obtained compliances of bent specimens and optically measured crack lengths. Finally, a calculation procedure enabling accurate crack length calculation up to 5 mm of plastic deflection was developed. Applying the new procedure to the 238 measured crack lengths investigated, more than 80% of the values fulfilled the ASTM 1820 accuracy requirements, while the presently used procedure provided only about 30% valid results. The newly proposed procedure can prospectively also be used, in modified form, for specimens of sizes other than Charpy size. (orig.)
Thermomagnetic analysis of magnetic susceptibility k(T) was carried out for a number of natural powder materials from soils, baked clay and anthropogenic dust samples using the fast (11 °C/min) and slow (6.5 °C/min) heating rates available in the furnace of the Kappabridge KLY2 (Agico). Based on additional data on the mineralogy, grain size and magnetic properties of the studied samples, the behaviour of k(T) cycles and the observed differences between the curves for the fast and slow heating rates are interpreted in terms of mineralogical transformations and Curie temperatures (Tc). The effect of different sample sizes is also explored, using large and small volumes of powder material. It is found that soil samples show enhanced information on mineralogical transformations and the appearance of new strongly magnetic phases when using the fast heating rate and a large sample size. This approach moves the transformation to higher temperature, but enhances the amplitude of the signal of the newly created phase. A large sample size gives prevalence to the local micro-environment created by evolving gases released during transformations. The example from an archeological brick reveals the effect of different sample sizes on the observed Curie temperatures on heating and cooling curves, when the magnetic carrier is substituted magnetite (Mn0.2Fe2.70O4). A large sample size leads to bigger differences in Tcs on heating and cooling, while a small sample size results in similar Tcs for both heating rates.
Heo, Yongju; Park, Jiyeon; Lim, Sung-Il; Hur, Hor-Gil; Kim, Daesung; Park, Kihong
Size-resolved bacterial concentrations in atmospheric aerosols sampled using a six-stage viable impactor at rice field, sanitary landfill, and waste incinerator sites were determined. Culture-based and Polymerase Chain Reaction (PCR) methods were used to identify the airborne bacteria. The culturable bacteria concentration in total suspended particles (TSP) was found to be highest (848 Colony Forming Units (CFU)/m³) at the sanitary landfill sampling site, while the rice field sampling site had the lowest (125 CFU/m³). The closed landfill would be the main source of the observed bacteria concentration at the sanitary landfill. The rice field sampling site was fully covered by rice grain under wetted conditions before harvest and made no significant contribution to the airborne bacteria concentration; the dry conditions that favor suspension of soil particles were absent, and the area had limited personnel and vehicle flow. The respirable fraction, calculated from particles less than 3.3 μm, was highest (26%) at the sanitary landfill sampling site, followed by the waste incinerator (19%) and rice field (10%), a lower level of respirable fraction than previous literature values. We identified 58 species in 23 genera of culturable bacteria; Microbacterium, Staphylococcus, and Micrococcus were the most abundant genera at the sanitary landfill, waste incinerator, and rice field sites, respectively. An antibiotic resistance test for the above bacteria (Micrococcus sp., Microbacterium sp., and Staphylococcus sp.) showed that Staphylococcus sp. had the strongest resistance to both antibiotics (25.0% resistance to 32 μg ml⁻¹ of Chloramphenicol and 62.5% resistance to 4 μg ml⁻¹ of Gentamicin). PMID:20623053
Carlo De Gregorio
This paper analyses the sample sizes needed to estimate Laspeyres consumer price subindices under a combination of alternative sample designs, aggregation methods and temporal targets. In a simplified consumer market, the definition of the statistical target has been founded on the methodological framework adopted for the Harmonized Index of Consumer Prices. For a given precision level, sample size needs have been simulated under simple and stratified random designs with three distinct approa...
A new methodology for trace elemental analysis in plutonium metal samples was developed by interfacing the novel micro-FAST sample introduction system with an ICP-OES instrument. This integrated system, especially when coupled with a low flow rate nebulization technique, reduced the sample volume requirement significantly. Improvements to instrument sensitivity and measurement precision, as well as long term stability, were also achieved by this modified ICP-OES system. The sample size reduction, together with other instrument performance merits, is of great significance, especially to nuclear material analysis. (author)
van Lanen, E. P. A.; van Nugteren, J.; Nijhuis, A.
With the numerical cable model JackPot it is possible to calculate the interstrand coupling losses, generated by a time-changing background and self-field, between all strands in a cable-in-conduit conductor (CICC). For this, the model uses a system of equations in which the mutual inductances between all strand segments are calculated in advance. The model works well for analysing sub-size CICC sections. However, the exponential relationship between model size and computation time makes it impractical to simulate full-size ITER CICC sections. For this reason, the multi-level fast multipole method (MLFMM) is implemented to control the computation load. For additional efficiency, the code runs on graphics processing units, thereby utilizing an efficient low-cost parallel computation technique. Good accuracy is obtained with a considerably fast computation of the mutually induced voltages between all strands. This allows parametric studies on the coupling loss of long lengths of ITER-size CICCs, with the purpose of optimizing the cable design and accurately computing the coupling loss for any applied magnetic field scenario.
de la Torre, Jimmy; Hong, Yuan
Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…
Wei Lin Teoh
Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shift, ranging from highly skewed when the process is in control to almost symmetric when the mean shift is large. We therefore show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to rely on, because the MRL provides an intuitive and fair representation of the central tendency, especially for a right-skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.
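The link between a chart's per-sample signal probability and its MRL can be sketched generically: the run length is geometric, so the MRL is the smallest n at which the cumulative signal probability reaches one half. The Shewhart value below is illustrative; the DS X chart's signal probability comes from its two-stage sampling rule:

```python
import math

def median_run_length(p_signal):
    """Run length is geometric with per-sample signal probability p_signal,
    so the MRL is the smallest n with P(RL <= n) >= 0.5."""
    return math.ceil(math.log(0.5) / math.log(1.0 - p_signal))

# In-control 3-sigma Shewhart limits: p ~ 0.0027 gives ARL ~ 370 but MRL ~ 257,
# showing how the right skew pulls the median well below the mean.
print(median_run_length(0.0027))  # 257
```

This gap between ARL and MRL for the same chart is exactly why the abstract argues the MRL is the fairer summary of a skewed run length distribution.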
Link, W.A.; Nichols, J.D.
Our purpose here is to emphasize the need to properly deal with sampling variance when studying population variability and to present a means of doing so. We present an estimator for temporal variance of population size for the general case in which there are both sampling variances and covariances associated with estimates of population size. We illustrate the estimation approach with a series of population size estimates for black-capped chickadees (Parus atricapillus) wintering in a Connecticut study area and with a series of population size estimates for breeding populations of ducks in southwestern Manitoba.
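A simplified version of this idea, subtracting the average sampling variance from the raw variance of the estimates, can be sketched as follows. The full Link and Nichols estimator also accounts for sampling covariances between the estimates, which this sketch ignores:

```python
def temporal_process_variance(estimates, sampling_variances):
    """Raw variance of annual population estimates minus the mean sampling
    variance; the remainder approximates true temporal process variance.
    Truncated at zero, since sampling noise can exceed the raw variance."""
    n = len(estimates)
    mean = sum(estimates) / n
    raw = sum((x - mean) ** 2 for x in estimates) / (n - 1)
    return max(raw - sum(sampling_variances) / n, 0.0)

# Counts vary (raw variance 100), but 25 of that is sampling noise:
print(temporal_process_variance([100.0, 110.0, 120.0], [25.0, 25.0, 25.0]))  # 75.0
```

The point the abstract stresses falls straight out of the example: ignoring the sampling variance would overstate the population's true variability by a third here.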
Klementiev, K.; Chernikov, R.
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
Adams, F.; van Espen, P.; Maenhaut, W.
Thirty-four cascade-impactor samples were collected between September 1977 and November 1978 at Chacaltaya, Bolivia. The concentrations of 25 elements were measured for the six impaction stages of each sample by means of energy-dispersive X-ray fluorescence and proton-induced X-ray emission analysis. The results indicated that most elements are predominantly associated with a unimodal coarse-particle soil-dust dispersion component. Also chlorine and the alkali and alkaline earth elements belong to this group. The anomalously enriched elements (S, Br and the heavy metals Cu, Zn, Ga, As, Se, Pb and Bi) showed a bimodal size distribution. Correlation coefficient calculations and principal component analysis indicated the presence in the submicrometer aerosol mode of an important component, containing S, K, Zn, As and Br, which may originate from biomass burning. For certain enriched elements (i.e. Zn and perhaps Cu) the coarse-particle enrichments observed may be the result of true crust-air fractionation during soil-dust dispersion.
Nasrabadi, M N; Mohammadi, A; Jalali, M
In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required. PMID:19328700
Bice, K.; Clement, S. C.
X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.
Rabbers, J.J.; Haken, ten, Bennie; Kate, ten, F.J.W.
A method to calculate the AC loss of superconducting power devices from the measured AC loss of a short sample is developed. In coils and cables the magnetic field varies spatially. The position dependent field vector is calculated assuming a homogeneous current distribution. From this field profile and the transport current, the local AC loss is calculated. Integration over the conductor length yields the AC loss of the device. The total AC loss of the device is split up in different compone...
A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and for smear counting as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Current expectations are that RCTs will collect an air sample filter or perform a smear for surface contamination, survey the filter for gross alpha and beta/gamma radioactivity, and then use either a hand-calculation method or a calculator to determine the activity on the filter from the gross counts. The electronic form will allow the RCT, with a few keystrokes, to document the individual's name, payroll number, gross counts, and instrument identifiers, and to produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while at the same time documenting the air sample
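The arithmetic such a form automates is straightforward; a minimal sketch of a generic air-sample activity calculation follows (the formula shape and all numbers are illustrative assumptions, not the HNF-13536 procedure itself):

```python
def air_concentration_dpm_per_m3(gross_cpm, background_cpm,
                                 counter_efficiency, flow_lpm, minutes):
    """Net filter activity (dpm) divided by the sampled air volume (m^3).
    Generic relationship only; the controlling procedure is HNF-13536."""
    net_dpm = (gross_cpm - background_cpm) / counter_efficiency
    volume_m3 = flow_lpm * minutes / 1000.0  # litres of air -> cubic metres
    return net_dpm / volume_m3

# 150 cpm gross, 50 cpm background, 25% counting efficiency,
# 60 L/min for 10 minutes: 400 dpm spread over 0.6 m^3 of air.
print(air_concentration_dpm_per_m3(150, 50, 0.25, 60, 10))  # ~666.7 dpm/m^3
```

Encoding this once in a spreadsheet cell formula is exactly where the claimed reduction in hand-calculation errors comes from.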
The validity of the transport approximation in critical-size and reactivity calculations. Elastically scattered neutrons are, in general, not distributed isotropically in the laboratory system, and a convenient way of taking this into account in neutron- transport calculations is to use the transport approximation. In this, the elastic cross-section is replaced by an elastic transport cross-section with an isotropic angular distribution. This leads to a considerable simplification in the neutron-transport calculation. In the present paper, the theoretical bases of the transport approximation in both one-group and many-group formalisms are given. The accuracy of the approximation is then studied in the multi-group case for a number of typical systems by means of the Sn method using the isotropic and anisotropic versions of the method, which exist as alternative options of the machine code SAINT written at Aldermaston for use on IBM-709/7090 machines. The dependence of the results of the anisotropic calculations on the number of moments used to represent the angular distributions is also examined. The results of the various calculations are discussed, and an indication is given of the types of system for which the transport approximation is adequate and of those for which it is inadequate. (author)
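The replacement described above can be written down directly: the elastic cross-section sigma_el is swapped for a transport cross-section sigma_tr = sigma_el * (1 - mu_bar), where mu_bar is the mean cosine of the laboratory scattering angle. A one-line sketch with illustrative numbers:

```python
def transport_cross_section(sigma_elastic, mu_bar):
    """Transport-corrected elastic cross-section sigma_el * (1 - mu_bar),
    with mu_bar the mean cosine of the lab-frame scattering angle."""
    return sigma_elastic * (1.0 - mu_bar)

# Forward-peaked scattering (mu_bar ~ 2/(3A); A = 1 shown) reduces the
# effective cross-section the most; isotropic scattering (mu_bar = 0)
# leaves it unchanged.
print(transport_cross_section(20.0, 2.0 / 3.0))  # barns
```

The simplification in the transport calculation comes from then treating the remaining scattering as isotropic.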
Grafström, Anton; Qualité, Lionel; Tillé, Yves; Matei, Alina
More than 50 methods have been developed to draw unequal probability samples with fixed sample size. All these methods require the sum of the inclusion probabilities to be an integer number. There are cases, however, where the sum of desired inclusion probabilities is not an integer. Then, classical algorithms for drawing samples cannot be directly applied. We present two methods to overcome the problem of sample selection with unequal inclusion probabilities when their sum is not an integer ...
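One way to see the problem: classical fixed-size schemes such as Madow's systematic procedure select exactly n = sum(pi_k) units, so that sum must be an integer. The sketch below shows the simplest workaround, appending a phantom unit that carries the fractional remainder; this illustrates the issue and is not necessarily either of the authors' two methods:

```python
import math
import random

def systematic_sample(pi):
    """Madow's systematic unequal-probability sampling; requires sum(pi)
    to be an integer n and returns exactly n distinct unit indices."""
    u = random.random()
    sample, cum = [], 0.0
    for k, p in enumerate(pi):
        prev, cum = cum, cum + p
        # unit k is selected when the running total crosses u, u+1, u+2, ...
        if math.floor(cum - u) > math.floor(prev - u):
            sample.append(k)
    return sample

def sample_noninteger(pi):
    """When sum(pi) is not an integer, append a phantom unit carrying the
    fractional remainder, sample, and discard it. Each real unit keeps its
    inclusion probability; the realised sample size is floor(sum(pi)) or
    floor(sum(pi)) + 1."""
    phantom = math.ceil(sum(pi)) - sum(pi)
    chosen = systematic_sample(list(pi) + [phantom])
    return [k for k in chosen if k < len(pi)]

random.seed(42)
print(sample_noninteger([0.5, 0.5, 0.4]))  # a sample of size 1 or 2
```

The price of the workaround is a sample size that takes one of two adjacent values rather than being fixed, which is precisely the tension the paper addresses.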
Moinester, Murray; Gottfried, Ruth
A common measure of association between two variables x and y is the bivariate Pearson correlation coefficient rho(x,y) that characterizes the strength and direction of any linear relationship between x and y. This article describes how to determine the optimal sample size for bivariate correlations, reviews available methods, and discusses their different ranges of applicability. A convenient equation is derived to help plan sample size for correlations by confidence interval analysis. In addition, a useful table for planning correlation studies is provided that gives the sample sizes needed to achieve 95% confidence intervals (CIs) for correlation values ranging from 0.05 to 0.95 and for CI widths ranging from 0.1 to 0.9. Sample size requirements are considered for planning correlation studies.
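The confidence-interval approach can be sketched with the standard Fisher-z machinery (the textbook construction, not necessarily the article's exact equation): transform r to z = atanh(r), attach a normal interval of half-width z_crit/sqrt(n-3), back-transform to the r scale, and search for the smallest n whose interval is no wider than the target.

```python
import math
from statistics import NormalDist

def n_for_correlation_ci(r, width, conf=0.95):
    """Smallest n for which the Fisher-z confidence interval around a
    sample correlation r, mapped back to the r scale, has total width
    at most `width`."""
    z = math.atanh(r)                            # Fisher z-transform
    zcrit = NormalDist().inv_cdf(0.5 + conf / 2)
    n = 4                                        # n - 3 must be positive
    while True:
        h = zcrit / math.sqrt(n - 3)
        if math.tanh(z + h) - math.tanh(z - h) <= width:
            return n
        n += 1

# e.g. a 95% CI of total width 0.2 around r = 0.5:
print(n_for_correlation_ci(0.5, 0.2))
```

Because tanh flattens near |r| = 1, the same CI width costs far fewer subjects for strong correlations than for weak ones, which is why a planning table spanning r = 0.05 to 0.95 is useful.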
Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.
This paper presents a reliability evaluation methodology to obtain the statistical reliability information of memory chips for space applications when the test sample size needs to be kept small because of the high cost of the radiation hardness memories.
Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by as...
Kelley, Ken; Rausch, Joseph R.
Longitudinal studies are necessary to examine individual change over time, with group status often being an important variable in explaining some individual differences in change. Although sample size planning for longitudinal studies has focused on statistical power, recent calls for effect sizes and their corresponding confidence intervals…
Schoeneberger, Jason A.
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.
Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.
Chertkov, V Y
The objective of this work is a physical prediction of such soil shrinkage anisotropy characteristics as the variation with drying of (i) different sample/layer sizes and (ii) the shrinkage geometry factor. In addition, a new presentation of the shrinkage anisotropy concept is suggested through the sample/layer size ratios. The work objective is reached in two steps. First, relations are derived between the indicated soil shrinkage anisotropy characteristics and three different shrinkage curves of a soil relating to: small samples (without cracking at shrinkage), sufficiently large samples (with internal cracking), and layers of similar thickness. Then, the results of a recent work on the physical prediction of the three shrinkage curves are used. These results connect the shrinkage curves with the initial sample size/layer thickness as well as characteristics of soil texture and structure (both inter- and intra-aggregate) as physical parameters. The parameters determining the reference shrinkage c...
Johnson, Kenneth L.; White, K. Preston, Jr.
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
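For orientation, the variables technique being evaluated reduces each lot decision to a mean-and-standard-deviation test. A minimal single-sided k-method sketch follows (generic textbook form, not the NESC calculators; the measurements and k value are made up):

```python
import statistics

def accept_by_variables(measurements, usl, k):
    """One-sided acceptance sampling by variables (k-method): accept the
    lot when (USL - sample mean) / sample s is at least the plan's
    acceptability constant k."""
    xbar = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return (usl - xbar) / s >= k

lot = [9.8, 10.1, 9.9, 10.0, 10.2]   # hypothetical measurements
print(accept_by_variables(lot, usl=11.0, k=2.0))  # → True
```

Because each measurement contributes its full numeric value rather than a pass/fail bit, variables plans typically reach a given discrimination with far smaller samples than attributes plans, which is the resource argument made above.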
Geweke, John F.
Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither independent nor identically distributed complicates the assessment of convergence and numerical accuracy of the approximations to the expected value of functions of interest under the posterior. In this paper methods for spectral analysis are used to evaluate numerical accuracy formally and construc...
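The point about non-independent draws is easy to demonstrate: the naive s/sqrt(n) standard error understates the uncertainty of an MCMC average. Below is a batch-means estimate of numerical standard error (a simpler stand-in for the spectral-density estimator the paper develops) applied to a toy autocorrelated chain:

```python
import math
import random

def batch_means_nse(chain, n_batches=20):
    """Numerical standard error of the chain's sample mean via batch means:
    split the chain into consecutive batches and use the spread of the
    batch means, which absorbs the within-batch autocorrelation."""
    b = len(chain) // n_batches
    means = [sum(chain[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    var_bm = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(var_bm / n_batches)

# AR(1) chain as a stand-in for correlated Gibbs output.
random.seed(1)
x, chain = 0.0, []
for _ in range(20000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    chain.append(x)
print(batch_means_nse(chain))  # noticeably larger than the naive s/sqrt(n)
```

For this chain the naive i.i.d. formula is optimistic by roughly the square root of the integrated autocorrelation time, which is exactly the gap a formal spectral assessment is meant to quantify.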
González-Vacarezza, N; Abad-Santos, F; Carcas-Sansuan, A; Dorado, P; Peñas-Lledó, E; Estévez-Carrizo, F; Llerena, A
In bioequivalence studies, intra-individual variability (CV(w)) is critical in determining sample size. In particular, highly variable drugs may require enrollment of a greater number of subjects. We hypothesize that a strategy to reduce pharmacokinetic CV(w), and hence sample size and costs, would be to include subjects with decreased metabolic enzyme capacity for the drug under study. Therefore, two mirtazapine studies with a two-way, two-period crossover design (n=68) were re-analysed to calculate the total CV(w) and the CV(w)s of three different CYP2D6 genotype groups (0, 1 and ≥ 2 active genes). The results showed that a 29.2 or 15.3% sample size reduction would have been possible if recruitment had been restricted to individuals carrying 0, or 0 plus 1, CYP2D6 active genes, respectively, owing to their lower CV(w). This suggests that there may be a role for pharmacogenetics in the design of bioequivalence studies to reduce sample size and costs, thus introducing a new paradigm for the biopharmaceutical evaluation of drug products. PMID:22733239
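The CV(w)-to-sample-size link exploited here follows the usual normal-approximation formula for a 2x2 crossover average-bioequivalence study; a hedged sketch (textbook approximation with the true test/reference ratio assumed to be 1, not the authors' exact calculation):

```python
import math
from statistics import NormalDist

def n_crossover_be(cv_w, alpha=0.05, power=0.80, theta=math.log(1.25)):
    """Approximate total sample size for a 2x2 crossover ABE study (TOST,
    true ratio assumed 1):
    n >= 2 * (z_{1-alpha} + z_{1-beta/2})^2 * CV_w^2 / theta^2,
    rounded up to an even total for balanced sequences."""
    z = NormalDist().inv_cdf
    beta = 1.0 - power
    n = 2 * (z(1 - alpha) + z(1 - beta / 2)) ** 2 * cv_w ** 2 / theta ** 2
    return math.ceil(n / 2) * 2

# Lower within-subject CV directly shrinks the study:
print(n_crossover_be(0.30))  # → 32
print(n_crossover_be(0.20))  # → 14
```

Since n scales with CV(w) squared, recruiting a genotype group with lower within-subject variability cuts the required number of subjects quadratically, which is the economic argument the abstract makes.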
Abdalla Hashim, Ala’a Hayder; Eldin, AL-Hadi Mohi; Hashim, Hayder Abdalla
Background: The study of the mesiodistal size and morphology of the teeth and dental arch may play an important role in clinical dentistry, as well as in other sciences such as forensic dentistry and anthropology. Aims: The aims of the present study were to establish the tooth-size ratio in a Sudanese sample with Class I normal occlusion and to compare the tooth-size ratio between the present study and Bolton's study and between genders. Materials and Methods: The sample consisted of dental casts of 60 subj...
Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique in an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size are compared between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements, and the TEM image analyses of the two sampling methods agree well with the SMPS measurements. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. The particle size distribution characteristics and morphology are found to be governed mostly by the coagulation and sintering processes in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism that reduces the effect of the coagulation process. If the flame temperature is not high enough to sinter the aggregated particles, however, coagulation is the dominant particle growth mechanism. Under certain flame conditions, secondary particle formation is observed, resulting in a bimodal particle size distribution
Finch, W. Holmes; Finch, Maria E. Hernandez
Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…
Ossiander, Frank J.; Wedemeyer, Gary
A computer program is described for generating the sample size tables required in fish hatchery disease inspection and certification. The program was designed to aid in detection of infectious pancreatic necrosis (IPN) in salmonids, but it is applicable to any fish disease inspection when the sampling plan follows the hypergeometric distribution.
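The hypergeometric plan reduces to finding the smallest sample whose probability of containing no carriers falls below 1 - confidence. A sketch of the underlying calculation (assuming a perfect diagnostic test; the lot size and prevalence are illustrative, not the program's tables):

```python
from math import comb

def detection_sample_size(lot, carriers, confidence=0.95):
    """Smallest simple-random-sample size n such that at least one of
    `carriers` infected fish in a lot of size `lot` appears in the
    sample with the given probability (hypergeometric)."""
    for n in range(1, lot + 1):
        # P(sample of n contains zero carriers)
        p_miss = comb(lot - carriers, n) / comb(lot, n)
        if p_miss <= 1.0 - confidence:
            return n
    return lot

# Lot of 1000 fish, assumed 2% prevalence (20 carriers), 95% confidence:
print(detection_sample_size(1000, 20))
```

Sampling without replacement from a finite lot is what makes the hypergeometric (rather than the binomial) the right model here, and it yields slightly smaller required samples for small lots.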
The coefficient alpha is the most widely used measure of internal consistency for composite scores in the educational and psychological studies. However, due to the difficulties of data gathering in psychometric studies, the minimum sample size for the sample coefficient alpha has been frequently debated. There are various suggested minimum sample…
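For reference, the statistic in question is computed from item and total-score variances; a minimal sketch with made-up scores:

```python
def cronbach_alpha(items):
    """Coefficient alpha for a list of item columns (each column holds the
    same respondents' scores on one item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Identical items are perfectly internally consistent:
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

Because the sample alpha is a ratio of variance estimates, its own sampling error shrinks slowly with n, which is why the minimum-sample-size question above resists a one-number answer.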
Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…
Klebanov, Lev; Yakovlev, Andrei
Our answer to the question posed in the title is negative. This intentionally provocative note discusses the issue of sample size in microarray studies from several angles. We suggest that the current view of microarrays as no more than a screening tool be changed and small sample studies no longer be considered appropriate.
Fienen, Michael N.; Selbig, William R.
A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.
This paper presents the expected long-run cost per unit time for a system monitored by an adaptive control chart with variable sample sizes: if the control chart signals that the system is out of control, the sampling which follows will be conducted with a larger sample size. The system is supposed to have three states: in-control, out-of-control, and failed. Two levels of repair are applied to maintain the system. A minor repair will be conducted if an assignable cause is c...