#### Sample records for sample size calculation

1. How to calculate sample size and why.

Science.gov (United States)

Kim, Jeehyoung; Seo, Bong Soo

2013-09-01

Calculating the sample size is essential to reduce the cost of a study and to test the hypothesis effectively. Referring to pilot studies and previous research, we can choose a proper hypothesis and simplify the study in its early stages by using a website or Microsoft Excel sheet that contains formulas for calculating sample size. There are numerous formulas for calculating the sample size for complicated statistics and studies, but most studies can use basic methods of sample size calculation.
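As a concrete illustration of the kind of basic calculation the abstract describes, here is a minimal sketch of the standard two-proportion formula (assuming a two-sided z-test; the function name and parameters are ours, not the paper's):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two proportions
    with a two-sided z-test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_group_two_proportions(0.5, 0.3))  # 91 per group
```

This is exactly the kind of formula that can live in a spreadsheet cell, which is the point the authors make about simplifying the planning stage.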

2. Sample size calculations for skewed distributions.

Science.gov (United States)

Cundill, Bonnie; Alexander, Neal D E

2015-04-02

Sample size calculations should correspond to the intended method of analysis. Nevertheless, for non-normal distributions, they are often done on the basis of normal approximations, even when the data are to be analysed using generalized linear models (GLMs). For the case of comparison of two means, we use GLM theory to derive sample size formulae, with particular cases being the negative binomial, Poisson, binomial, and gamma families. By simulation we estimate the performance of normal approximations, which, via the identity link, are special cases of our approach, and for common link functions such as the log. The negative binomial and gamma scenarios are motivated by examples in hookworm vaccine trials and insecticide-treated materials, respectively. Calculations on the link function (log) scale work well for the negative binomial and gamma scenarios examined and are often superior to the normal approximations. However, they have little advantage for the Poisson and binomial distributions. The proposed method is suitable for sample size calculations for comparisons of means of highly skewed outcome variables.
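A hedged sketch of a log-link sample size formula of the type the authors derive, for the negative binomial case. This is our reading of the general GLM form, taking the per-observation variance of the estimated log mean as roughly 1/mu + 1/k; names and numbers are illustrative:

```python
from math import ceil, log
from statistics import NormalDist

def n_per_group_negbin_log_link(mu0, mu1, k, alpha=0.05, power=0.80):
    """Sample size per group for comparing two negative binomial means
    on the log scale; k is the (assumed common) dispersion parameter."""
    z = NormalDist()
    z_sum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    # Per-observation variance of the estimated log mean in each group.
    v0 = 1 / mu0 + 1 / k
    v1 = 1 / mu1 + 1 / k
    return ceil(z_sum ** 2 * (v0 + v1) / log(mu1 / mu0) ** 2)

print(n_per_group_negbin_log_link(mu0=2.0, mu1=1.0, k=0.5))  # 90 per group
```

Setting k to infinity (so 1/k vanishes) recovers the Poisson special case, which matches the paper's observation that the log-scale calculation nests the simpler families.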

3. An expert system for the calculation of sample size.

Science.gov (United States)

Ebell, M H; Neale, A V; Hodgkins, B J

1994-06-01

Calculation of sample size is a useful technique for researchers who are designing a study, and for clinicians who wish to interpret research findings. The elements that must be specified to calculate the sample size include alpha, beta, Type I and Type II errors, 1- and 2-tail tests, confidence intervals, and confidence levels. A computer software program written by one of the authors (MHE), Sample Size Expert, facilitates sample size calculations. The program uses an expert system to help inexperienced users calculate sample sizes for analytic and descriptive studies. The software is available at no cost from the author or electronically via several on-line information services.

4. Preeminence and prerequisites of sample size calculations in clinical trials

Directory of Open Access Journals (Sweden)

Richa Singhal

2015-01-01

Full Text Available The key components when planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation differs across study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

5. Sample Size and Statistical Power Calculation in Genetic Association Studies

Directory of Open Access Journals (Sweden)

Eun Pyo Hong

2012-06-01

Full Text Available A sample size with sufficient statistical power is critical to the success of genetic association studies to detect causal genes of human complex diseases. Genome-wide association studies require much larger sample sizes to achieve an adequate statistical power. We estimated the statistical power with increasing numbers of markers analyzed and compared the sample sizes that were required in case-control studies and case-parent studies. We computed the effective sample size and statistical power using Genetic Power Calculator. An analysis using a larger number of markers requires a larger sample size. Testing a single-nucleotide polymorphism (SNP marker requires 248 cases, while testing 500,000 SNPs and 1 million markers requires 1,206 cases and 1,255 cases, respectively, under the assumption of an odds ratio of 2, 5% disease prevalence, 5% minor allele frequency, complete linkage disequilibrium (LD, 1:1 case/control ratio, and a 5% error rate in an allelic test. Under a dominant model, a smaller sample size is required to achieve 80% power than other genetic models. We found that a much lower sample size was required with a strong effect size, common SNP, and increased LD. In addition, studying a common disease in a case-control study of a 1:4 case-control ratio is one way to achieve higher statistical power. We also found that case-parent studies require more samples than case-control studies. Although we have not covered all plausible cases in study design, the estimates of sample size and statistical power computed under various assumptions in this study may be useful to determine the sample size in designing a population-based genetic association study.

6. Sample size and power calculation for molecular biology studies.

Science.gov (United States)

Jung, Sin-Ho

2010-01-01

Sample size calculation is a critical procedure when designing a new biological study. In this chapter, we consider molecular biology studies generating huge dimensional data. Microarray studies are typical examples, so that we state this chapter in terms of gene microarray data, but the discussed methods can be used for design and analysis of any molecular biology studies involving high-dimensional data. In this chapter, we discuss sample size calculation methods for molecular biology studies when the discovery of prognostic molecular markers is performed by accurately controlling false discovery rate (FDR) or family-wise error rate (FWER) in the final data analysis. We limit our discussion to the two-sample case.

7. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

Science.gov (United States)

Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. We aimed to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from both M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing this against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey.
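The core estimator is N = M / P. A minimal sketch of the point estimate with a delta-method confidence interval on the log scale, inflating the variance of P by an assumed RDS design effect (all numbers and names here are illustrative, not the authors'):

```python
from math import exp, sqrt
from statistics import NormalDist

def multiplier_estimate(m, p_hat, n_survey, design_effect=2.0, alpha=0.05):
    """Population size estimate N = M / P with an approximate CI.

    m: count of unique objects distributed (or service users), assumed known.
    p_hat: proportion in the RDS survey reporting receipt.
    n_survey: RDS survey sample size.
    """
    n_hat = m / p_hat
    # Delta method: var(log N) ~= var(P) / P^2, with var(P) inflated by DE.
    var_p = design_effect * p_hat * (1 - p_hat) / n_survey
    se_log_n = sqrt(var_p) / p_hat
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return n_hat, n_hat * exp(-z * se_log_n), n_hat * exp(z * se_log_n)

est, lo, hi = multiplier_estimate(m=5000, p_hat=0.25, n_survey=300)
print(round(est), round(lo), round(hi))  # point estimate 20000
```

Note how the standard error of log N scales with 1/P: a small P in the survey widens the interval sharply, which is the paper's argument for designs that push P higher.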

8. Variance estimation, design effects, and sample size calculations for respondent-driven sampling.

Science.gov (United States)

Salganik, Matthew J

2006-11-01

Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling.
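A sketch of the paper's closing advice: apply a design effect of 2 to an ordinary simple-random-sampling size for estimating a proportion to within a margin of error d (function name and numbers are ours):

```python
from math import ceil
from statistics import NormalDist

def rds_sample_size(p, d, design_effect=2.0, alpha=0.05):
    """SRS size for estimating a proportion p to within +/- d,
    inflated by the recommended RDS design effect."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_srs = z ** 2 * p * (1 - p) / d ** 2
    return ceil(design_effect * n_srs)

print(rds_sample_size(p=0.30, d=0.05))  # 646, i.e. twice the SRS size
```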

9. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

Science.gov (United States)

2010-10-01

[Fragmentary regulatory text from 45 CFR Part 1356, Appendix C (45 Public Welfare, Vol. 4, 2010-10-01), on calculating sample size for NYTD follow-up populations: the surviving fragments describe when the finite population correction (FPC) is or is not applied, depending on whether the sample is drawn from a population of one to 5,000 youth.]
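The finite population correction mentioned in the appendix has the standard textbook form n_adj = n0 / (1 + (n0 - 1)/N). A minimal sketch of that formula (our implementation, not the regulation's exact procedure):

```python
from math import ceil
from statistics import NormalDist

def fpc_adjusted_sample_size(p, d, population, alpha=0.05):
    """Sample size for estimating a proportion, with the finite
    population correction applied to the infinite-population size."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n0 = ceil(z ** 2 * p * (1 - p) / d ** 2)   # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / population))

print(fpc_adjusted_sample_size(p=0.5, d=0.05, population=2000))  # 323
```

The correction matters most for small populations: as `population` grows, the adjusted size approaches the uncorrected n0.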

10. New method to estimate the sample size for calculation of a proportion assuming binomial distribution.

Science.gov (United States)

Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio

2013-10-01

The standard formula for calculating the sample size needed to estimate a proportion (such as a prevalence) is based on the normal distribution; however, it could instead be based on the binomial distribution, whose confidence interval can be calculated using the Wilson score method. Comparing the two formulae (normal and binomial distributions), the widths of the confidence intervals differ appreciably both in the tails and at the centre of the curves. To determine the required sample size, we simulated an iterative sampling procedure, which showed that the normal-approximation formula underestimates the sample size for prevalence values close to 0 or 1 and overestimates it for values close to 0.5. Based on these results, we propose an algorithm based on the Wilson score method that provides sample sizes similar to those obtained empirically by simulation. Copyright © 2013 Elsevier Ltd. All rights reserved.
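A sketch of the iterative idea: find the smallest n whose Wilson score interval half-width is at most the desired precision d (function names are ours; the authors' published algorithm may differ in detail):

```python
from statistics import NormalDist

def wilson_half_width(p, n, z):
    """Half-width of the Wilson score interval for proportion p, size n."""
    return (z / (1 + z * z / n)) * (p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5

def n_wilson(p, d, alpha=0.05):
    """Smallest n whose Wilson interval half-width is <= d."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n = 1
    while wilson_half_width(p, n, z) > d:
        n += 1
    return n

# Near p = 0 the normal-approximation formula undershoots:
print(n_wilson(p=0.05, d=0.05))  # 83, vs. 73 from the normal formula
```

This reproduces the abstract's qualitative finding: for prevalence near 0 or 1 the Wilson-based size exceeds the normal-approximation size.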

11. Sample size calculations for pilot randomized trials: a confidence interval approach.

Science.gov (United States)

Cocks, Kim; Torgerson, David J

2013-02-01

To describe a method using confidence intervals (CIs) to estimate the sample size for a pilot randomized trial. Using one-sided CIs and the estimated effect size that would be sought in a large trial, we calculated the sample size needed for pilot trials. Using an 80% one-sided CI, we estimated that a pilot trial should have at least 9% of the sample size of the main planned trial. Using the estimated effect size difference for the main trial and using a one-sided CI, this allows us to calculate a sample size for a pilot trial, which will make its results more useful than at present. Copyright © 2013 Elsevier Inc. All rights reserved.
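A sketch of applying the authors' rule of thumb: size the main trial for a standardized effect, then take at least 9% of that total for the pilot. The two-sample-means formula below is a standard choice for the main trial, not necessarily the one used in the paper:

```python
from math import ceil
from statistics import NormalDist

def main_trial_total(delta_std, alpha=0.05, power=0.90):
    """Total size (two equal arms) to detect a standardized mean
    difference delta_std with a two-sided z-test."""
    z = NormalDist()
    z_sum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return 2 * ceil(2 * z_sum ** 2 / delta_std ** 2)

def pilot_size(delta_std, fraction=0.09, **kw):
    """Pilot trial size as a fraction of the main trial total."""
    return ceil(fraction * main_trial_total(delta_std, **kw))

print(main_trial_total(0.5), pilot_size(0.5))  # 170 total, pilot of 16
```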

12. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

Science.gov (United States)

Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

2008-01-01

Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
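The Jensen's-inequality mechanism can be seen in a toy two-stage Leslie matrix, where λ is a concave function of survival, so the average of λ over sampled survival estimates falls below λ at the true survival. This is entirely illustrative and uses made-up vital rates, not the authors' model or data:

```python
import random
from math import sqrt

def lam(f1, f2, s):
    """Dominant eigenvalue of the 2x2 Leslie matrix [[f1, f2], [s, 0]],
    from the characteristic equation x**2 - f1*x - f2*s = 0."""
    return (f1 + sqrt(f1 * f1 + 4 * f2 * s)) / 2

def mean_lambda_hat(f1, f2, s, n, reps=20000, seed=1):
    """Average estimated lambda when survival is estimated from n individuals."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        s_hat = sum(rng.random() < s for _ in range(n)) / n  # binomial / n
        total += lam(f1, f2, s_hat)
    return total / reps

true_l = lam(f1=0.8, f2=1.2, s=0.5)
small_n = mean_lambda_hat(0.8, 1.2, 0.5, n=10)
# E[s_hat] = s, but lambda is concave in s, so E[lambda_hat] < lambda.
print(true_l, small_n)
```

With only 10 sampled individuals the simulated mean of λ-hat sits slightly below the true λ, and the gap shrinks as n grows, mirroring the paper's finding that the bias becomes negligible at larger sample sizes.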

15. Sample size calculations in clinical research should also be based on ethical principles.

Science.gov (United States)

Cesana, Bruno Mario; Antonelli, Paolo

2016-03-18

Sample size calculations based on too narrow a width, or with lower and upper confidence limits bounded by fixed cut-off points, not only increase power-based sample sizes to ethically unacceptable levels (thus making research practically unfeasible) but also greatly increase the costs and burdens of clinical trials. We propose an alternative method of combining the power of a statistical test and the probability of obtaining adequate precision (the power of the confidence interval) with an acceptable increase in power-based sample sizes.

16. Sample size and power calculations based on generalized linear mixed models with correlated binary outcomes.

Science.gov (United States)

Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R

2008-08-01

The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.

17. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

Science.gov (United States)

Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

2013-08-01

Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
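That proportionality can be sketched for the simplified case of standardized exposure and outcome (unit variances, small effect, so the residual variance is roughly 1), where the required n scales as 1/(beta^2 * rho^2). This reduced form is our reading of the result, with illustrative numbers:

```python
from math import ceil
from statistics import NormalDist

def n_mendelian_randomization(beta, rho_sq, alpha=0.05, power=0.80):
    """Approximate n for a one-instrument MR study, assuming standardized
    exposure and outcome and a small causal effect.

    beta:   causal effect of exposure on outcome (SD per SD).
    rho_sq: proportion of exposure variance explained by the instrument.
    """
    z = NormalDist()
    z_sum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return ceil(z_sum ** 2 / (beta ** 2 * rho_sq))

print(n_mendelian_randomization(beta=0.1, rho_sq=0.02))  # 39245
```

The huge n for a weak instrument (rho_sq = 0.02) illustrates why instrument strength dominates MR study design.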

18. Sample size calculation for differential expression analysis of RNA-seq data under Poisson distribution.

Science.gov (United States)

Li, Chung-I; Su, Pei-Fang; Guo, Yan; Shyr, Yu

2013-01-01

Sample size determination is an important issue in the experimental design of biomedical research. Because of the complexity of RNA-seq experiments, however, the field currently lacks a sample size method widely applicable to differential expression studies utilising RNA-seq technology. In this report, we propose several methods for sample size calculation for single-gene differential expression analysis of RNA-seq data under Poisson distribution. These methods are then extended to multiple genes, with consideration for addressing the multiple testing problem by controlling false discovery rate. Moreover, most of the proposed methods allow for closed-form sample size formulas with specification of the desired minimum fold change and minimum average read count, and thus are not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size formulas are presented; the results indicate that our methods work well, with achievement of desired power. Finally, our sample size calculation methods are applied to three real RNA-seq data sets.
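For a single gene, one closed form of the type described compares two Poisson mean read counts on the log scale; multiplicity can then be handled by shrinking alpha (Bonferroni is used below as a crude stand-in for the paper's FDR control). A hedged sketch with illustrative parameters:

```python
from math import ceil, log
from statistics import NormalDist

def n_per_group_poisson(mu0, fold_change, alpha=0.05, power=0.80, n_tests=1):
    """Samples per group to detect a fold change between Poisson means
    mu0 and mu0 * fold_change, testing on the log scale.  Multiple testing
    is handled crudely here with a Bonferroni-adjusted alpha."""
    z = NormalDist()
    z_sum = z.inv_cdf(1 - alpha / (2 * n_tests)) + z.inv_cdf(power)
    mu1 = mu0 * fold_change
    v = 1 / mu0 + 1 / mu1          # variances of the estimated log means
    return ceil(z_sum ** 2 * v / log(fold_change) ** 2)

single = n_per_group_poisson(mu0=5, fold_change=2)        # 5 per group
genomewide = n_per_group_poisson(mu0=5, fold_change=2, n_tests=10000)
```

The dependence on minimum average read count (mu0) and minimum fold change mirrors the inputs the abstract says the closed-form formulas require.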

19. Reference calculation of light propagation between parallel planes of different sizes and sampling rates.

Science.gov (United States)

Lobaz, Petr

2011-01-03

The article deals with a method of calculation of off-axis light propagation between parallel planes using discretization of the Rayleigh-Sommerfeld integral and its implementation by fast convolution. It analyses zero-padding in case of different plane sizes. In case of memory restrictions, it suggests splitting the calculation into tiles and shows that splitting leads to a faster calculation when plane sizes are a lot different. Next, it suggests how to calculate propagation in case of different sampling rates by splitting planes into interleaved tiles and shows this to be faster than zero-padding and direct calculation. Neither the speedup nor memory-saving method decreases accuracy; the aim of the proposed method is to provide reference data that can be compared to the results of faster and less precise methods.

20. [On the impact of sample size calculation and power in clinical research].

Science.gov (United States)

Held, Ulrike

2014-10-01

The aim of a clinical trial is to judge the efficacy of a new therapy or drug. In the planning phase of the study, the calculation of the necessary sample size is crucial in order to obtain a meaningful result. The study design, the expected treatment effect in outcome and its variability, power and level of significance are factors which determine the sample size. It is often difficult to fix these parameters prior to the start of the study, but related papers from the literature can be helpful sources for the unknown quantities. For scientific as well as ethical reasons it is necessary to calculate the sample size in advance in order to be able to answer the study question.

1. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

Science.gov (United States)

Krishnamoorthy, K.; Xia, Yanping

2008-01-01

The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

2. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

Science.gov (United States)

Li, Zhushan

2014-01-01

Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

3. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

Science.gov (United States)

Shieh, Gwowen

2007-01-01

The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

4. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

Science.gov (United States)

Fu, Yingkun; Xie, Yanming

2011-10-01

In recent years, as the Chinese government and public have paid more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role: it not only ensures the accuracy and reliability of the post-marketing evaluation, but also ensures that the intended trials will have the desired power to correctly detect a clinically meaningful difference between the medicines under study if such a difference truly exists. To date, there is no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be of benefit to medical researchers and pharmaceutical scientists engaged in Chinese medicine research.

5. n4Studies: Sample Size Calculation for an Epidemiological Study on a Smart Device

Directory of Open Access Journals (Sweden)

Chetta Ngamjarus

2016-05-01

Full Text Available Objective: The aim of this study was to develop a sample size application (called "n4Studies" for free use on iPhone and Android devices and to compare its sample size functions with those of other applications and software. Methods: The Objective-C programming language was used to create the application for the iPhone OS (operating system, while JavaScript, jQuery Mobile, PhoneGap and jStat were used to develop it for Android phones. Other sample size applications were searched from the Apple App Store and Google Play store. The applications' characteristics and sample size functions were collected. Spearman's rank correlation was used to investigate the relationship between the number of sample size functions and price. Results: "n4Studies" provides several functions for sample size and power calculations for various epidemiological study designs. It can be downloaded from the Apple App Store and Google Play store. Comparing n4Studies with other applications, it covers several more types of epidemiological study designs, and gives similar results for estimation of infinite/finite population means and infinite/finite proportions to GRANMO, for comparing two independent means to BioStats, and for comparing two independent proportions to the EpiCal application. When using the same parameters, n4Studies gives results similar to STATA, the epicalc package in R, PS, G*Power, and OpenEpi. Conclusion: "n4Studies" can be an alternative tool for calculating sample size. It may be useful to students, lecturers and researchers in conducting their research projects.

6. Sample size calculations for clinical trials targeting tauopathies: A new potential disease target

Science.gov (United States)

Whitwell, Jennifer L.; Duffy, Joseph R.; Strand, Edythe A.; Machulda, Mary M.; Tosakulwong, Nirubol; Weigand, Stephen D.; Senjem, Matthew L.; Spychalla, Anthony J.; Gunter, Jeffrey L.; Petersen, Ronald C.; Jack, Clifford R.; Josephs, Keith A.

2015-01-01

Disease-modifying therapies are being developed to target tau pathology, and should, therefore, be tested in primary tauopathies. We propose that progressive apraxia of speech should be considered one such target group. In this study, we investigate potential neuroimaging and clinical outcome measures for progressive apraxia of speech and determine sample size estimates for clinical trials. We prospectively recruited 24 patients with progressive apraxia of speech who underwent two serial MRI with an interval of approximately two years. Detailed speech and language assessments included the Apraxia of Speech Rating Scale (ASRS) and Motor Speech Disorders (MSD) severity scale. Rates of ventricular expansion and rates of whole brain, striatal and midbrain atrophy were calculated. Atrophy rates across 38 cortical regions were also calculated and the regions that best differentiated patients from controls were selected. Sample size estimates required to power placebo-controlled treatment trials were calculated. The smallest sample size estimates were obtained with rates of atrophy of the precentral gyrus and supplementary motor area, with both measures requiring less than 50 subjects per arm to detect a 25% treatment effect with 80% power. These measures outperformed the other regional and global MRI measures and the clinical scales. Regional rates of cortical atrophy therefore provide the best outcome measures in progressive apraxia of speech. The small sample size estimates demonstrate feasibility for including progressive apraxia of speech in future clinical treatment trials targeting tau. PMID:26076744
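The powering logic here is the standard two-arm comparison of mean rates of change: to detect a 25% slowing of an atrophy rate mu with between-subject standard deviation sigma, n per arm is roughly 2 * sigma^2 * (z_alpha + z_beta)^2 / (0.25 * mu)^2. A sketch with made-up numbers (the study's actual regional rates and SDs are not reproduced here):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm_rate_slowing(rate, sd, slowing=0.25, alpha=0.05, power=0.80):
    """Subjects per arm to detect a fractional slowing of a mean
    annualized atrophy rate, given its standard deviation sd."""
    z = NormalDist()
    z_sum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    effect = slowing * rate
    return ceil(2 * sd ** 2 * z_sum ** 2 / effect ** 2)

# Hypothetical: 1.5 %/yr atrophy, SD 0.5 %/yr, 25% treatment effect.
print(n_per_arm_rate_slowing(rate=1.5, sd=0.5))  # 28 per arm
```

The formula shows why the precentral gyrus and supplementary motor area win in the paper: outcome measures with a high rate-to-SD ratio drive the per-arm n below 50.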

7. Sample size calculation for microarray experiments with blocked one-way design

Directory of Open Access Journals (Sweden)

Jung Sin-Ho

2009-05-01

Full Text Available Abstract Background One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess the treatment effects in microarray experiments. Results In this paper, we consider discovery of the genes that are differentially expressed among K (> 2) treatments when each set of K arrays constitutes a block. In this case, the array data among K treatments tend to be correlated because of the block effect. We propose to use the blocked one-way ANOVA F-statistic to test whether each gene is differentially expressed among the K treatments. The marginal p-values are calculated using a permutation method accounting for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design. With the FDR level and effect sizes of genes specified, our formula provides a sample size for a given number of true discoveries. Conclusion The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.

8. Sample size calculations for evaluating treatment policies in multi-stage designs.

Science.gov (United States)

Dawson, Ree; Lavori, Philip W

2010-12-01

Sequential multiple assignment randomized (SMAR) designs are used to evaluate treatment policies, also known as adaptive treatment strategies (ATS). The determination of SMAR sample sizes is challenging because of the sequential and adaptive nature of ATS, and the multi-stage randomized assignment used to evaluate them. We derive sample size formulae appropriate for the nested structure of successive SMAR randomizations. This nesting gives rise to ATS that have overlapping data, and hence between-strategy covariance. We focus on the case when covariance is substantial enough to reduce sample size through improved inferential efficiency. Our design calculations draw upon two distinct methodologies for SMAR trials, using the equality of the optimal semi-parametric and Bayesian predictive estimators of standard error. This 'hybrid' approach produces a generalization of the t-test power calculation that is carried out in terms of effect size and regression quantities familiar to the trialist. Simulation studies support the reasonableness of underlying assumptions as well as the adequacy of the approximation to between-strategy covariance when it is substantial. Investigation of the sensitivity of formulae to misspecification shows that the greatest influence is due to changes in effect size, which is an a priori clinical judgment on the part of the trialist. We have restricted simulation investigation to SMAR studies of two and three stages, although the methods are fully general in that they apply to 'K-stage' trials. Practical guidance is needed to allow the trialist to size a SMAR design using the derived methods. To this end, we define ATS to be 'distinct' when they differ by at least the (minimal) size of effect deemed to be clinically relevant. Simulation results suggest that the number of subjects needed to distinguish distinct strategies will be significantly reduced by adjustment for covariance only when small effects are of interest.

9. A simulation-based sample size calculation method for pre-clinical tumor xenograft experiments.

Science.gov (United States)

Wu, Jianrong; Yang, Shengping

2017-04-07

Pre-clinical tumor xenograft experiments usually require a small sample size that is rarely greater than 20, and data generated from such experiments very often do not have censored observations. Many statistical tests can be used for analyzing such data, but most of them were developed based on large sample approximation. We demonstrate that the type-I error rates of these tests can substantially deviate from the designated rate, especially when the data to be analyzed has a skewed distribution. Consequently, the sample size calculated based on these tests can be erroneous. We propose a modified signed log-likelihood ratio test (MSLRT) to meet the type-I error rate requirement for analyzing pre-clinical tumor xenograft data. The MSLRT has a consistent and symmetric type-I error rate that is very close to the designated rate for a wide range of sample sizes. By simulation, we generated a series of sample size tables based on scenarios commonly expected in tumor xenograft experiments, and we expect that these tables can be used as guidelines for making decisions on the numbers of mice used in tumor xenograft experiments.

10. Sample size calculations for randomised trials including both independent and paired data.

Science.gov (United States)

Yelland, Lisa N; Sullivan, Thomas R; Price, David J; Lee, Katherine J

2017-04-15

Randomised trials including a mixture of independent and paired data arise in many areas of health research, yet methods for determining the sample size for such trials are lacking. We derive design effects algebraically assuming clustering because of paired data will be taken into account in the analysis using generalised estimating equations with either an independence or exchangeable working correlation structure. Continuous and binary outcomes are considered, along with three different methods of randomisation: cluster randomisation, individual randomisation and randomisation to opposite treatment groups. The design effect is shown to depend on the intracluster correlation coefficient, proportion of observations belonging to a pair, working correlation structure, type of outcome and method of randomisation. The derived design effects are validated through simulation and example calculations are presented to illustrate their use in sample size planning. These design effects will enable appropriate sample size calculations to be performed for future randomised trials including both independent and paired data. Copyright © 2017 John Wiley & Sons, Ltd.

11. Power and sample size calculation for paired recurrent events data based on robust nonparametric tests.

Science.gov (United States)

Su, Pei-Fang; Chung, Chia-Hua; Wang, Yu-Wen; Chi, Yunchan; Chang, Ying-Ju

2017-05-20

The purpose of this paper is to develop a formula for calculating the required sample size for paired recurrent events data. The developed formula is based on robust non-parametric tests for comparing the marginal mean function of events between paired samples. This calculation can accommodate the associations among a sequence of paired recurrent event times with a specification of correlated gamma frailty variables for a proportional intensity model. We evaluate the performance of the proposed method with comprehensive simulations including the impacts of paired correlations, homogeneous or nonhomogeneous processes, marginal hazard rates, censoring rate, accrual and follow-up times, as well as the sensitivity analysis for the assumption of the frailty distribution. The use of the formula is also demonstrated using a premature infant study from the neonatal intensive care unit of a tertiary center in southern Taiwan. Copyright © 2017 John Wiley & Sons, Ltd.

12. Sample-size calculations for multi-group comparison in population pharmacokinetic experiments.

Science.gov (United States)

Ogungbenro, Kayode; Aarons, Leon

2010-01-01

This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparison detecting the difference in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models that involve only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one compartment intravenous bolus dose model that involved three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and nominal power. Copyright © 2009 John Wiley & Sons, Ltd.

13. Sample size calculations for micro-randomized trials in mHealth.

Science.gov (United States)

Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A

2016-05-30

The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.

14. Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient!

NARCIS (Netherlands)

van Breukelen, Gerard J.P.; Candel, Math J.J.M.

2012-01-01

Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
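The record above is truncated, but the underlying idea is the familiar design-effect adjustment for clustering. As an illustration only (the variance-inflation formula below is the classic textbook design effect, not necessarily the authors' optimal-allocation equation, and the numbers are hypothetical), a stdlib-only Python sketch:

```python
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Classic two-sample normal-approximation sample size per group."""
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2

def cluster_adjusted(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size by the design effect
    DE = 1 + (m - 1) * ICC for clusters of (assumed equal) size m."""
    return n_individual * (1 + (cluster_size - 1) * icc)

n_ind = n_per_group(delta=0.5, sd=1.0)                      # standardised effect 0.5
n_clu = cluster_adjusted(n_ind, cluster_size=20, icc=0.05)  # modest ICC
print(round(n_ind), round(n_clu))  # 63 122
```

Even a small ICC of 0.05 nearly doubles the required sample size here, which is why the optimal number of clusters and cluster size that the authors derive matter in practice.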

15. On tests of treatment-covariate interactions: An illustration of appropriate power and sample size calculations.

Science.gov (United States)

Shieh, Gwowen

2017-01-01

The appraisals of treatment-covariate interaction have theoretical and substantial implications in all scientific fields. Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA. A fundamental assumption of ANCOVA is that the regression slopes associating the response variable with the covariate variable are presumed constant across treatment groups. The validity of homogeneous regression slopes accordingly is the most essential concern in traditional ANCOVA and inevitably determines the practical usefulness of research findings. In view of the limited results in current literature, this article aims to present power and sample size procedures for tests of heterogeneity between two regression slopes with particular emphasis on the stochastic feature of covariate variables. Theoretical implications and numerical investigations are presented to explicate the utility and advantage for accommodating covariate properties. The exact approach has the distinct feature of accommodating the full distributional properties of normal covariates whereas the simplified approximate methods only utilize the partial information of covariate variances. According to the overall accuracy and robustness, the exact approach is recommended over the approximate methods as a reliable tool in practical applications. The suggested power and sample size calculations can be implemented with the supplemental SAS and R programs.
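The contrast drawn above between the exact approach and simplified approximations can be made concrete. The sketch below implements only the simple kind of approximation the article criticises, treating the covariates as fixed with known spread; all parameter values are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def slope_diff_power(delta, sigma, n1, n2, sx1, sx2, alpha=0.05):
    """Approximate power for testing equality of two regression slopes.
    delta: true slope difference; sigma: residual SD; sx1, sx2: SDs of the
    covariate in each group, treated as fixed constants here (the
    simplification that the exact approach improves upon).
    Var(b1 - b2) = sigma^2 * (1/Sxx1 + 1/Sxx2), with Sxx = (n-1)*sx^2."""
    nd = NormalDist()
    se = sigma * sqrt(1 / ((n1 - 1) * sx1 ** 2) + 1 / ((n2 - 1) * sx2 ** 2))
    return nd.cdf(abs(delta) / se - nd.inv_cdf(1 - alpha / 2))

print(round(slope_diff_power(delta=0.4, sigma=1.0, n1=60, n2=60,
                             sx1=1.0, sx2=1.0), 3))  # 0.584
```

Because this ignores the sampling variability of the covariate, its power estimates can drift from the exact values, which is the article's motivation for accommodating the full covariate distribution.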

16. On tests of treatment-covariate interactions: An illustration of appropriate power and sample size calculations.

Directory of Open Access Journals (Sweden)

Gwowen Shieh

Full Text Available The appraisals of treatment-covariate interaction have theoretical and substantial implications in all scientific fields. Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA. A fundamental assumption of ANCOVA is that the regression slopes associating the response variable with the covariate variable are presumed constant across treatment groups. The validity of homogeneous regression slopes accordingly is the most essential concern in traditional ANCOVA and inevitably determines the practical usefulness of research findings. In view of the limited results in current literature, this article aims to present power and sample size procedures for tests of heterogeneity between two regression slopes with particular emphasis on the stochastic feature of covariate variables. Theoretical implications and numerical investigations are presented to explicate the utility and advantage for accommodating covariate properties. The exact approach has the distinct feature of accommodating the full distributional properties of normal covariates whereas the simplified approximate methods only utilize the partial information of covariate variances. According to the overall accuracy and robustness, the exact approach is recommended over the approximate methods as a reliable tool in practical applications. The suggested power and sample size calculations can be implemented with the supplemental SAS and R programs.

17. Determination of reference limits: statistical concepts and tools for sample size calculation.

Science.gov (United States)

Wellek, Stefan; Lackner, Karl J; Jennen-Steinmetz, Christine; Reinhard, Iris; Hoffmann, Isabell; Blettner, Maria

2014-12-01

Reference limits are estimators for 'extreme' percentiles of the distribution of a quantitative diagnostic marker in the healthy population. In most cases, interest will be in the 90% or 95% reference intervals. The standard parametric method of determining reference limits consists of computing quantities of the form X̅±c·S. The proportion of covered values in the underlying population coincides with the specificity obtained when a measurement value falling outside the corresponding reference region is classified as diagnostically suspect. Nonparametrically, reference limits are estimated by means of so-called order statistics. In both approaches, the precision of the estimate depends on the sample size. We present computational procedures for calculating minimally required numbers of subjects to be enrolled in a reference study. The much more sophisticated concept of reference bands replacing statistical reference intervals in case of age-dependent diagnostic markers is also discussed.
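The two estimation routes described above, parametric limits of the form X̄ ± c·S and nonparametric limits from order statistics, can be sketched in a few lines. The data below are a hypothetical, idealised reference sample; the sketch shows the estimators themselves, not the article's sample size procedure:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical "healthy" reference sample: 200 marker values laid out on an
# idealised Normal(100, 10) quantile grid.
values = [NormalDist(100, 10).inv_cdf((i + 0.5) / 200) for i in range(200)]

# Parametric 95% reference interval: X-bar +/- c*S with c = z_0.975.
c = NormalDist().inv_cdf(0.975)
m, s = mean(values), stdev(values)
lower, upper = m - c * s, m + c * s

# Nonparametric limits via order statistics: approximately the 2.5th and
# 97.5th percentiles for n = 200.
xs = sorted(values)
np_lower, np_upper = xs[4], xs[194]

print(round(lower, 1), round(upper, 1))
```

With only 200 subjects, the order-statistic limits rest on a handful of extreme observations, which is exactly why minimally required sample sizes for reference studies deserve the formal treatment the article provides.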

18. Mixed modeling and sample size calculations for identifying housekeeping genes.

Science.gov (United States)

Dai, Hongying; Charnigo, Richard; Vyhlidal, Carrie A; Jones, Bridgette L; Bhandary, Madhusudan

2013-08-15

Normalization of gene expression data using internal control genes that have biologically stable expression levels is an important process for analyzing reverse transcription polymerase chain reaction data. We propose a three-way linear mixed-effects model to select optimal housekeeping genes. The mixed-effects model can accommodate multiple continuous and/or categorical variables with sample random effects, gene fixed effects, systematic effects, and gene by systematic effect interactions. We propose using the intraclass correlation coefficient among gene expression levels as the stability measure to select housekeeping genes that have low within-sample variation. Global hypothesis testing is proposed to ensure that selected housekeeping genes are free of systematic effects or gene by systematic effect interactions. A gene combination with the highest lower bound of 95% confidence interval for intraclass correlation coefficient and no significant systematic effects is selected for normalization. Sample size calculation based on the estimation accuracy of the stability measure is offered to help practitioners design experiments to identify housekeeping genes. We compare our methods with geNorm and NormFinder by using three case studies. A free software package written in SAS (Cary, NC, U.S.A.) is available at http://d.web.umkc.edu/daih under software tab. Copyright © 2013 John Wiley & Sons, Ltd.
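The stability measure above is an intraclass correlation coefficient. As a simplified illustration (a one-way ANOVA ICC with made-up expression values, not the article's three-way mixed-model estimator):

```python
def icc_oneway(groups):
    """One-way ANOVA intraclass correlation, (MSB - MSW)/(MSB + (k-1)*MSW),
    for n groups with k observations each. High values mean low
    within-sample variation relative to between-sample variation."""
    n, k = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (n * k)
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical expression levels: 4 samples, 2 replicate measurements each.
print(round(icc_oneway([[10, 11], [14, 13], [20, 21], [25, 24]]), 3))  # 0.988
```

A candidate housekeeping gene would show replicate measurements tracking their sample tightly, as in this toy data, giving an ICC near 1.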

19. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

Science.gov (United States)

Lee, Paul H; Tse, Andy C Y

2017-05-01

There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examine the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examine the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and calculated sample size was 0.0% (IQR -4.6%; 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

20. Sample size calculation based on exact test for assessing differential expression analysis in RNA-seq data.

Science.gov (United States)

Li, Chung-I; Su, Pei-Fang; Shyr, Yu

2013-12-06

Sample size calculation is an important issue in the experimental design of biomedical research. For RNA-seq experiments, the sample size calculation method based on the Poisson model has been proposed; however, when there are biological replicates, RNA-seq data could exhibit variation significantly greater than the mean (i.e. over-dispersion). The Poisson model cannot appropriately model the over-dispersion, and in such cases, the negative binomial model has been used as a natural extension of the Poisson model. Because the field currently lacks a sample size calculation method based on the negative binomial model for assessing differential expression analysis of RNA-seq data, we propose a method to calculate the sample size. We propose a sample size calculation method based on the exact test for assessing differential expression analysis of RNA-seq data. The proposed sample size calculation method is straightforward and not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size method are presented; the results indicate our method works well, with achievement of desired power.

1. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

Energy Technology Data Exchange (ETDEWEB)

Ferson, S. [Applied Biomathematics, Setauket, NY (United States)]

1996-12-31

A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs, however, unravels the closure properties of this approach, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations, implied by convolutive application of the logical operators for every possible pair of confidence intervals, reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
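The Fréchet inequalities referred to above give best-possible bounds on conjunctions and disjunctions when nothing is assumed about dependence. A minimal sketch of the interval operators (the full calculus in the abstract iterates these over nested confidence levels, which is not shown here):

```python
def interval_and(a, b):
    """Frechet bounds for P(A and B) with no dependence assumption,
    given interval probabilities a = (lo, hi), b = (lo, hi)."""
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def interval_or(a, b):
    """Frechet bounds for P(A or B) with no dependence assumption."""
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

def interval_not(a):
    """Complement simply flips and reflects the interval."""
    return (1.0 - a[1], 1.0 - a[0])

F = (0.6, 0.8)   # imprecisely known subevent probabilities (hypothetical)
G = (0.5, 0.7)
print(tuple(round(v, 3) for v in interval_and(F, G)))  # (0.1, 0.7)
print(tuple(round(v, 3) for v in interval_or(F, G)))   # (0.6, 1.0)
```

Note how wide the AND interval is: with small-sample inputs and unknown dependencies, rigorous bounds are honest but often not tight.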

2. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

Science.gov (United States)

Bi, Ran; Liu, Peng

2016-03-31

RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
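The multiple-testing error the design above controls is the false discovery rate; at analysis time this is commonly done with the Benjamini-Hochberg step-up procedure. A self-contained sketch of that standard procedure (illustrative p-values; this is not the voom/ssizeRNA pipeline itself):

```python
def benjamini_hochberg(pvals, fdr=0.05):
    """Benjamini-Hochberg step-up: find the largest rank k with
    p_(k) <= (k/m) * fdr and reject the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * fdr:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

# Hypothetical per-gene p-values from a differential expression analysis.
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(p, fdr=0.05))  # rejects only the two smallest p-values
```

Because power under FDR control depends on the whole distribution of effects across genes, the paper's notion of *average* power across differentially expressed genes is the natural design target.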

3. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

DEFF Research Database (Denmark)

Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

2008-01-01

OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source: Protocols and journal publications of published randomised parallel group trials initially approved... The method of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials)... In publications, sample size calculations and statistical methods were often explicitly discrepant with the protocol or not pre-specified. Such amendments were rarely acknowledged in the trial publication. The reliability of trial reports cannot be assessed without having access to the full protocols...

4. Sample size calculation based on generalized linear models for differential expression analysis in RNA-seq data.

Science.gov (United States)

Li, Chung-I; Shyr, Yu

2016-12-01

As RNA-seq rapidly develops and costs continually decrease, the quantity and frequency of samples being sequenced will grow exponentially. With proteomic investigations becoming more multivariate and quantitative, determining a study's optimal sample size is now a vital step in experimental design. Current methods for calculating a study's required sample size are mostly based on the hypothesis testing framework, which assumes each gene count can be modeled through Poisson or negative binomial distributions; however, these methods are limited when it comes to accommodating covariates. To address this limitation, we propose an estimating procedure based on the generalized linear model. This easy-to-use method constructs a representative exemplary dataset and estimates the conditional power, all without requiring complicated mathematical approximations or formulas. Even more attractive, the downstream analysis can be performed with current R/Bioconductor packages. To demonstrate the practicability and efficiency of this method, we apply it to three real-world studies, and introduce our on-line calculator developed to determine the optimal sample size for an RNA-seq study.

5. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

Directory of Open Access Journals (Sweden)

Finch Stephen J

2005-04-01

Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
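The step from non-centrality parameter to asymptotic power can be illustrated without special functions in the 1-df case (e.g. an allele-based 2×2 table; genotype-based tables have more degrees of freedom, which this sketch does not cover), since a non-central chi-square with 1 df is the square of a Normal(√ncp, 1) variable:

```python
from math import sqrt
from statistics import NormalDist

def chisq1_power(ncp, alpha=0.05):
    """Power of a 1-df chi-square test with non-centrality parameter ncp.
    Chi-square(1, ncp) is the square of Z ~ Normal(sqrt(ncp), 1), so
    P(Z^2 > c) needs only the normal CDF."""
    nd = NormalDist()
    c = nd.inv_cdf(1 - alpha / 2)  # sqrt of the 1-df chi-square critical value
    return (1 - nd.cdf(c - sqrt(ncp))) + nd.cdf(-c - sqrt(ncp))

# Sanity check: ncp = (z_{alpha/2} + z_{beta})^2 should recover power 1 - beta.
nd = NormalDist()
ncp = (nd.inv_cdf(0.975) + nd.inv_cdf(0.80)) ** 2
print(round(chisq1_power(ncp), 3))  # 0.8
```

In the paper's setting the ncp additionally absorbs the phenotype misclassification probabilities, so misclassification shrinks the ncp and hence the power at a fixed sample size.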

6. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

Science.gov (United States)

Dong, Nianbo; Maynard, Rebecca

2013-01-01

This paper and the accompanying tool are intended to complement existing supports for conducting power analysis by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…

7. Current practice in methodology and reporting of the sample size calculation in randomised trials of hip and knee osteoarthritis: a protocol for a systematic review.

Science.gov (United States)

Copsey, Bethan; Dutton, Susan; Fitzpatrick, Ray; Lamb, Sarah E; Cook, Jonathan A

2017-10-10

A key aspect of the design of randomised controlled trials (RCTs) is determining the sample size. It is important that the trial sample size is appropriately calculated. The required sample size will differ by clinical area, for instance, due to the prevalence of the condition and the choice of primary outcome. Additionally, it will depend upon the choice of target difference assumed in the calculation. Focussing upon the hip and knee osteoarthritis population, this study aims to systematically review how the trial size was determined for trials of osteoarthritis, on what basis, and how well these aspects are reported. Several electronic databases (Medline, Cochrane library, CINAHL, EMBASE, PsycINFO, PEDro and AMED) will be searched to identify articles on RCTs of hip and knee osteoarthritis published in 2016. Articles will be screened for eligibility and data extracted independently by two reviewers. Data will be extracted on study characteristics (design, population, intervention and control treatments), primary outcome, chosen sample size and justification, parameters used to calculate the sample size (including treatment effect in control arm, level of variability in primary outcome, loss to follow-up rates). Data will be summarised across the studies using appropriate summary statistics (e.g. n and %, median and interquartile range). The proportion of studies which report each key component of the sample size calculation will be presented. The reproducibility of the sample size calculation will be tested. The findings of this systematic review will summarise the current practice for sample size calculation in trials of hip and knee osteoarthritis. It will also provide evidence on the completeness of the reporting of the sample size calculation, reproducibility of the chosen sample size and the basis for the values used in the calculation. As this review was not eligible to be registered on PROSPERO, the summary information was uploaded to Figshare to make it

8. Sample size for beginners.

OpenAIRE

Florey, C D

1993-01-01

The common failure to include an estimation of sample size in grant proposals imposes a major handicap on applicants, particularly for those proposing work in any aspect of research in the health services. Members of research committees need evidence that a study is of adequate size for there to be a reasonable chance of a clear answer at the end. A simple illustrated explanation of the concepts in determining sample size should encourage the faint hearted to pay more attention to this increa...

9. Sample size methodology

CERN Document Server

Desu, M M

2012-01-01

One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

10. Determination of Sample Size

OpenAIRE

Naing, Nyi Nyi

2003-01-01

It is particularly important to determine the basic minimum required sample size 'n' needed to estimate a particular measurement of a particular population. This article highlights the determination of an appropriate sample size to estimate population parameters.

11. Sample size for beginners.

Science.gov (United States)

Florey, C D

1993-05-01

The common failure to include an estimation of sample size in grant proposals imposes a major handicap on applicants, particularly for those proposing work in any aspect of research in the health services. Members of research committees need evidence that a study is of adequate size for there to be a reasonable chance of a clear answer at the end. A simple illustrated explanation of the concepts in determining sample size should encourage the faint hearted to pay more attention to this increasingly important aspect of grantsmanship.

12. Ethics and sample size.

Science.gov (United States)

Bacchetti, Peter; Wolf, Leslie E; Segal, Mark R; McCulloch, Charles E

2005-01-15

The belief is widespread that studies are unethical if their sample size is not large enough to ensure adequate power. The authors examine how sample size influences the balance that determines the ethical acceptability of a study: the balance between the burdens that participants accept and the clinical or scientific value that a study can be expected to produce. The average projected burden per participant remains constant as the sample size increases, but the projected study value does not increase as rapidly as the sample size if it is assumed to be proportional to power or inversely proportional to confidence interval width. This implies that the value per participant declines as the sample size increases and that smaller studies therefore have more favorable ratios of projected value to participant burden. The ethical treatment of study participants therefore does not require consideration of whether study power is less than the conventional goal of 80% or 90%. Lower power does not make a study unethical. The analysis addresses only ethical acceptability, not optimality; large studies may be desirable for other than ethical reasons.
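The central quantitative claim above, that value per participant declines with sample size when value is proportional to power, is easy to check numerically. A sketch with hypothetical design parameters (standardised effect 0.5, two equal arms):

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(n_per_group, delta=0.5, sd=1.0, alpha=0.05):
    """Normal-approximation power for a two-sample comparison of means."""
    nd = NormalDist()
    se = sd * sqrt(2 / n_per_group)
    return nd.cdf(delta / se - nd.inv_cdf(1 - alpha / 2))

# If projected study value is proportional to power, then value per
# participant (power / total n) falls as the study grows:
for n in (20, 40, 80, 160):
    pw = power_two_sample(n)
    print(n, round(pw, 3), round(pw / (2 * n), 4))
```

Power rises with n, but sublinearly, so the ratio in the last column decreases monotonically; that is the arithmetic behind the authors' argument that low power alone does not make a study unethical.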

13. A practical simulation method to calculate sample size of group sequential trials for time-to-event data under exponential and Weibull distribution.

Directory of Open Access Journals (Sweden)

Zhiwei Jiang

Full Text Available Group sequential design has been widely applied in clinical trials in the past few decades. The sample size estimation is a vital concern of sponsors and investigators. Especially in the survival group sequential trials, it is a thorny question because of its ambiguous distributional form, censored data and different definition of information time. A practical and easy-to-use simulation-based method is proposed for multi-stage two-arm survival group sequential design in the article and its SAS program is available. Besides the exponential distribution, which is usually assumed for survival data, the Weibull distribution is considered here. The incorporation of the probability of discontinuation in the simulation leads to the more accurate estimate. The assessment indexes calculated in the simulation are helpful to the determination of number and timing of the interim analysis. The use of the method in the survival group sequential trials is illustrated and the effects of the varied shape parameter on the sample size under the Weibull distribution are explored by employing an example. According to the simulation results, a method to estimate the shape parameter of the Weibull distribution is proposed based on the median survival time of the test drug and the hazard ratio, which are prespecified by the investigators and other participants. 10+ simulations are recommended to achieve the robust estimate of the sample size. Furthermore, the method is still applicable in adaptive design if the strategy of sample size scheme determination is adopted when designing or the minor modifications on the program are made.
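
A minimal illustration of this kind of simulation can be sketched in Python (rather than the authors' SAS program): it estimates the power of a fixed-sample, single-stage two-arm trial with Weibull event times, proportional hazards, administrative censoring, and a log-rank test. All parameter values are hypothetical, and the group sequential boundaries and discontinuation probability of the full method are omitted for brevity.

```python
import numpy as np
from scipy.stats import norm

def logrank_z(time, event, group):
    """Standard two-sample log-rank Z statistic."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

def simulated_power(n_per_arm, shape, median_ctrl, hazard_ratio,
                    follow_up, n_sim=400, alpha=0.05, seed=1):
    """Estimate power of a fixed-sample two-arm survival trial.

    Event times are Weibull; the treatment hazard is hazard_ratio times
    the control hazard at all times (proportional hazards), which for a
    common shape corresponds to rescaling the Weibull scale parameter.
    Subjects still event-free at follow_up are censored.
    """
    rng = np.random.default_rng(seed)
    scale_c = median_ctrl / np.log(2) ** (1 / shape)
    scale_t = scale_c * hazard_ratio ** (-1 / shape)
    z_crit = norm.ppf(1 - alpha / 2)
    group = np.concatenate([np.zeros(n_per_arm), np.ones(n_per_arm)])
    rejections = 0
    for _ in range(n_sim):
        raw = np.concatenate([scale_c * rng.weibull(shape, n_per_arm),
                              scale_t * rng.weibull(shape, n_per_arm)])
        event = (raw <= follow_up).astype(int)
        time = np.minimum(raw, follow_up)
        if abs(logrank_z(time, event, group)) > z_crit:
            rejections += 1
    return rejections / n_sim
```

Raising `n_per_arm` until the simulated power reaches the target (e.g. 80% or 90%) gives the required sample size for a chosen shape parameter and hazard ratio.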

14. A Practical Simulation Method to Calculate Sample Size of Group Sequential Trials for Time-to-Event Data under Exponential and Weibull Distribution

Science.gov (United States)

Jiang, Zhiwei; Wang, Ling; Li, Chanjuan; Xia, Jielai; Jia, Hongxia

2012-01-01

Group sequential design has been widely applied in clinical trials in the past few decades. The sample size estimation is a vital concern of sponsors and investigators. Especially in the survival group sequential trials, it is a thorny question because of its ambiguous distributional form, censored data and different definition of information time. A practical and easy-to-use simulation-based method is proposed for multi-stage two-arm survival group sequential design in the article and its SAS program is available. Besides the exponential distribution, which is usually assumed for survival data, the Weibull distribution is considered here. The incorporation of the probability of discontinuation in the simulation leads to the more accurate estimate. The assessment indexes calculated in the simulation are helpful to the determination of number and timing of the interim analysis. The use of the method in the survival group sequential trials is illustrated and the effects of the varied shape parameter on the sample size under the Weibull distribution are explored by employing an example. According to the simulation results, a method to estimate the shape parameter of the Weibull distribution is proposed based on the median survival time of the test drug and the hazard ratio, which are prespecified by the investigators and other participants. 10+ simulations are recommended to achieve the robust estimate of the sample size. Furthermore, the method is still applicable in adaptive design if the strategy of sample size scheme determination is adopted when designing or the minor modifications on the program are made. PMID:22957040

15. Basic Statistical Concepts for Sample Size Estimation

Directory of Open Access Journals (Sweden)

Vithal K Dhulkhed

2008-01-01

Full Text Available For grant proposals the investigator has to include an estimation of sample size. The sample should be large enough that there are sufficient data to reliably answer the research question being addressed by the study. The investigator has to involve the statistician at the very planning stage of the study, and to have a meaningful dialogue with the statistician every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations to help readers assimilate the fundamentals.
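
For the simplest such calculation, comparing two means with a normal approximation, the per-group sample size is n = 2((z₁₋α/₂ + z₁₋β)σ/Δ)². A minimal sketch, not taken from the paper itself; the effect size Δ = 5 and SD σ = 10 used in the example are hypothetical:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided test
    comparing two means: n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_b = norm.ppf(power)           # power = 1 - beta
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)
```

For example, `n_per_group(5, 10)` reproduces the textbook result of 63 subjects per group for 80% power at a two-sided 5% level, rising to 85 per group at 90% power.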

16. Sample size determination and power

CERN Document Server

Ryan, Thomas P, Jr

2013-01-01

THOMAS P. RYAN, PhD, teaches online advanced statistics courses in sample size determination, design of experiments, engineering statistics, and regression analysis for Northwestern University and The Institute for Statistics Education.

17. Simple BASIC program for calculating the cervicovaginal FNP and for estimating the sample size of the number of cervicovaginal smears to be rescreened.

Science.gov (United States)

Lo, J W; Fung, C H

1999-01-01

To guide cytotechnologists and pathologists in calculating the false negative proportion, or rate, and the number of Papanicolaou smears to be reevaluated for a meaningful assessment of screening performance, a computer program written in BASIC was prepared, based on several recent publications in the field of cytopathology. A complete program listing and sample runs are provided to help users be cognizant of the inputs necessary to run the program. The output from the program gives the results of the various calculations. Since the tedious manual calculations are handled by the computer program, those involved in the interpretation of Papanicolaou smears are more likely to follow the approaches suggested by experts in these two areas.
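
The calculations such a program automates can be sketched as follows. This is an illustrative Python re-creation, not the authors' BASIC listing: the false negative proportion is taken as FN/(FN + TP), and the number of smears to rescreen is estimated with the standard binomial sample-size formula for a proportion. The input values in the example are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def false_negative_proportion(fn, tp):
    """FNP: false negatives over all truly abnormal smears (FN + TP)."""
    return fn / (fn + tp)

def n_smears_to_rescreen(p_expected, margin, conf=0.95):
    """Binomial sample size to estimate a proportion to within +/- margin."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)
```

For instance, estimating an expected FNP of 5% to within ±2 percentage points at 95% confidence requires rescreening 457 smears under this formula.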

18. Biostatistics Series Module 5: Determining Sample Size.

Science.gov (United States)

Hazra, Avijit; Gogtay, Nithya

2016-01-01

Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 - β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the

19. Improving your Hypothesis Testing: Determining Sample Sizes.

Science.gov (United States)

Luftig, Jeffrey T.; Norton, Willis P.

1982-01-01

This article builds on an earlier discussion of the importance of the Type II error (beta) and power to the hypothesis testing process (CE 511 484), and illustrates the methods by which sample size calculations should be employed so as to improve the research process. (Author/CT)

20. Sample size for morphological traits of pigeonpea

Directory of Open Access Journals (Sweden)

Giovani Facco

2015-12-01

Full Text Available The objectives of this study were to determine the sample size (i.e., the number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to estimate the average with an accuracy of 6%, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
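
The underlying calculation is the standard sample size for estimating a mean to within a chosen relative error E (the semiamplitude of the confidence interval as a percentage of the mean): n = (z·CV/E)², where CV is the coefficient of variation in percent. A sketch using the normal quantile in place of Student's t; the CV of 35% in the example is hypothetical, not a value from the paper:

```python
from math import ceil
from scipy.stats import norm

def n_for_relative_error(cv_percent, error_percent, conf=0.95):
    """Plants needed so the CI half-width equals error_percent of the mean."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil((z * cv_percent / error_percent) ** 2)
```

With a hypothetical CV of 35%, an estimation error of 6% at 95% confidence gives 131 plants; a t-based calculation with the traits' actual CVs would differ somewhat, in line with the paper's figure of at least 136 plants.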

1. Angular size-redshift: Experiment and calculation

Science.gov (United States)

Amirkhanyan, V. R.

2014-10-01

In this paper the next attempt is made to clarify the nature of the Euclidean behavior of the boundary in the angular size-redshift cosmological test. It is shown experimentally that this can be explained by the selection determined by anisotropic morphology and anisotropic radiation of extended radio sources. A catalogue of extended radio sources with minimal flux densities of about 0.01 Jy at 1.4 GHz was compiled for conducting the test. Without the assumption of their size evolution, the agreement between the experiment and calculation was obtained both in the ΛCDM model (Ω m = 0.27, Ω v = 0.73) and the Friedman model (Ω = 0.1).

2. How Sample Size Affects a Sampling Distribution

Science.gov (United States)

Mulekar, Madhuri S.; Siegel, Murray H.

2009-01-01

If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…

3. Planning Educational Research: Determining the Necessary Sample Size.

Science.gov (United States)

Olejnik, Stephen F.

1984-01-01

This paper discusses the sample size problem and four factors affecting its solution: significance level, statistical power, analysis procedure, and effect size. The interrelationship between these factors is discussed and demonstrated by calculating minimal sample size requirements for a variety of research conditions. (Author)

4. Determining sample size when assessing mean equivalence.

Science.gov (United States)

Asberg, Arne; Solem, Kristine B; Mikkelsen, Gustav

2014-11-01

When we want to assess whether two analytical methods are equivalent, we could test if the difference between the mean results is within the specification limits of 0 ± an acceptance criterion. Testing the null hypothesis of zero difference is less interesting, and so is the sample size estimation based on testing that hypothesis. Power function curves for equivalence testing experiments are not widely available. In this paper we present power function curves to help decide on the number of measurements when testing equivalence between the means of two analytical methods. Computer simulation was used to calculate the probability that the 90% confidence interval for the difference between the means of two analytical methods would exceed the specification limits of 0 ± 1, 0 ± 2 or 0 ± 3 analytical standard deviations (SDa), respectively. The probability of getting a nonequivalence alarm increases with increasing difference between the means when the difference is well within the specification limits. The probability increases with decreasing sample size and with smaller acceptance criteria. We may need at least 40-50 measurements with each analytical method when the specification limits are 0 ± 1 SDa, and 10-15 and 5-10 when the specification limits are 0 ± 2 and 0 ± 3 SDa, respectively. The power function curves provide information of the probability of false alarm, so that we can decide on the sample size under less uncertainty.
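
A simulation in the spirit of these power function curves can be sketched as follows: it estimates the probability that the 90% confidence interval for the difference between two method means crosses specification limits of 0 ± limit, in units of the analytical standard deviation. This sketch assumes known, equal analytical SDs of 1 and is an illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def prob_nonequivalence(n, true_diff, limit, conf=0.90,
                        n_sim=20000, seed=0):
    """Probability that the conf-level CI for the difference between two
    method means falls outside 0 +/- limit (all in analytical-SD units)."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(1 - (1 - conf) / 2)
    x = rng.normal(0.0, 1.0, (n_sim, n))        # method A measurements
    y = rng.normal(true_diff, 1.0, (n_sim, n))  # method B measurements
    diff = y.mean(axis=1) - x.mean(axis=1)
    half = z * np.sqrt(2.0 / n)  # known-SD half-width of the CI
    alarms = (diff - half < -limit) | (diff + half > limit)
    return alarms.mean()
```

With 10 measurements per method and limits of ±3 SDa, a true difference of zero rarely triggers a nonequivalence alarm, while a true difference at the limit almost always does; scanning `true_diff` traces out a power function curve for a given n.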

5. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

Science.gov (United States)

Heckmann, Tobias; Gegg, Katharina; Becht, Michael

2013-04-01

Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size

6. Sample size determination in clinical trials with multiple endpoints

CERN Document Server

Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

2015-01-01

This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

7. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

Science.gov (United States)

2016-01-01

Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulae for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation may not be easy for them. This review paper provides sample size tables with regard to sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446

8. Sample size determination for the fluctuation experiment.

Science.gov (United States)

Zheng, Qi

2017-01-01

The Luria-Delbrück fluctuation experiment protocol is increasingly employed to determine microbial mutation rates in the laboratory. An important question raised at the planning stage is "How many cultures are needed?" For over 70 years sample sizes have been determined either by intuition or by following published examples where sample sizes were chosen intuitively. This paper proposes a practical method for determining the sample size. The proposed method relies on existing algorithms for computing the expected Fisher information under two commonly used mutant distributions. The role of partial plating in reducing sample size is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

9. Methods for sample size determination in cluster randomized trials.

Science.gov (United States)

Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

2015-06-01

The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
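
The simplest approach mentioned above, inflating an individually randomized sample size by the design effect 1 + (m − 1)·ICC for clusters of size m, can be sketched as follows (the example values are hypothetical):

```python
from math import ceil

def crt_sample_size(n_per_arm_individual, cluster_size, icc):
    """Apply the design effect 1 + (m - 1) * ICC to a per-arm sample size
    computed under individual randomization; returns the inflated number
    of participants per arm and the clusters per arm needed to supply it."""
    deff = 1 + (cluster_size - 1) * icc
    n_adj = ceil(n_per_arm_individual * deff)
    return n_adj, ceil(n_adj / cluster_size)
```

For example, 63 participants per arm under individual randomization, with clusters of 20 and an ICC of 0.05, inflate to 123 participants (7 clusters) per arm. The paper's point is that this simple inflation breaks down under variable cluster sizes, attrition, non-compliance, or covariate adjustment.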

10. Additional Considerations in Determining Sample Size.

Science.gov (United States)

Levin, Joel R.; Subkoviak, Michael J.

Levin's (1975) sample-size determination procedure for completely randomized analysis of variance designs is extended to designs in which antecedent or blocking variables information is considered. In particular, a researcher's choice of designs is framed in terms of determining the respective sample sizes necessary to detect specified contrasts…

11. Determining Sample Size for Research Activities

Science.gov (United States)

Krejcie, Robert V.; Morgan, Daryle W.

1970-01-01

A formula for determining sample size, which originally appeared in 1960, has lacked a table for easy reference. This article supplies a graph of the function and a table of values which permits easy determination of the size of sample needed to be representative of a given population. (DG)

12. Sample size in qualitative interview studies

DEFF Research Database (Denmark)

Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

2016-01-01

Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning…

13. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

Science.gov (United States)

Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

2014-01-01

Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

14. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

Science.gov (United States)

Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

2014-01-01

The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.

15. Particle size distribution in ground biological samples.

Science.gov (United States)

Koglin, D; Backhaus, F; Schladot, J D

1997-05-01

Modern trace and retrospective analysis of Environmental Specimen Bank (ESB) samples require surplus material prepared and characterized as reference materials. Before the biological samples could be analyzed and stored for long periods at cryogenic temperatures, the materials have to be pre-crushed. As a second step, a milling and homogenization procedure has to follow. For this preparation, a grinding device is cooled with liquid nitrogen to a temperature of -190 degrees C. It is a significant condition for homogeneous samples that at least 90% of the particles should be smaller than 200 microns. In the German ESB the particle size distribution of the processed material is determined by means of a laser particle sizer. The decrease of particle sizes of deer liver and bream muscles after different grinding procedures as well as the consequences of ultrasonic treatment of the sample before particle size measurements have been investigated.

16. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

Directory of Open Access Journals (Sweden)

R. Eric Heidel

2016-01-01

Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

17. Sample size considerations for clinical research studies in nuclear cardiology.

Science.gov (United States)

Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

2015-12-01

Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.

18. Determining sample size for tree utilization surveys

Science.gov (United States)

Stanley J. Zarnoch; James W. Bentley; Tony G. Johnson

2004-01-01

The U.S. Department of Agriculture Forest Service has conducted many studies to determine what proportion of the timber harvested in the South is actually utilized. This paper describes the statistical methods used to determine required sample sizes for estimating utilization ratios for a required level of precision. The data used are those for 515 hardwood and 1,557...

19. Predicting sample size required for classification performance

Directory of Open Access Journals (Sweden)

Figueroa Rosa L

2012-02-01

Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
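
A minimal version of such a weighted inverse-power-law fit can be sketched with SciPy's `curve_fit`; the learning-curve points below are invented for illustration, and the authors' exact weighting scheme may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_power(x, a, b, c):
    # Learning curve: performance rises toward the asymptote c
    # as the annotated sample size x grows.
    return c - a * np.power(x, -b)

# Hypothetical classifier accuracies observed at small training sizes.
sizes = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
acc = np.array([0.70, 0.76, 0.80, 0.83, 0.85])

# Weight later points more heavily (smaller sigma = higher weight),
# mimicking a weighted nonlinear least-squares fit of the curve.
sigma = 1.0 / np.sqrt(sizes)
params, _ = curve_fit(inv_power, sizes, acc, p0=[1.0, 0.5, 0.9], sigma=sigma)

# Extrapolate to predict performance at a larger annotation budget.
pred_3200 = inv_power(3200.0, *params)
```

Here `params[2]` is the fitted performance asymptote, and `pred_3200` extrapolates the curve to a budget of 3,200 annotated samples; inverting the fitted curve gives the annotation sample size needed for a performance target.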

20. Disk calculator indicates legible lettering size for slide projection

Science.gov (United States)

Hultberg, R. R.

1965-01-01

Hand-operated disk calculator indicates the minimum size of letters and numbers in relation to the width and height of a working drawing. The lettering is legible when a slide of the drawing is projected.

1. Uncertainty of the sample size reduction step in pesticide residue analysis of large-sized crops.

Science.gov (United States)

Omeroglu, P Yolci; Ambrus, Á; Boyacioglu, D; Majzik, E Solymosne

2013-01-01

To estimate the uncertainty of the sample size reduction step, each unit in laboratory samples of papaya and cucumber was cut into four segments in longitudinal directions and two opposite segments were selected for further homogenisation while the other two were discarded. Jackfruit was cut into six segments in longitudinal directions, and all segments were kept for further analysis. To determine the pesticide residue concentrations in each segment, they were individually homogenised and analysed by chromatographic methods. One segment from each unit of the laboratory sample was drawn randomly to obtain 50 theoretical sub-samples with an MS Office Excel macro. The residue concentrations in a sub-sample were calculated from the weight of segments and the corresponding residue concentration. The coefficient of variation calculated from the residue concentrations of 50 sub-samples gave the relative uncertainty resulting from the sample size reduction step. The sample size reduction step, which is performed by selecting one longitudinal segment from each unit of the laboratory sample, resulted in relative uncertainties of 17% and 21% for field-treated jackfruits and cucumber, respectively, and 7% for post-harvest treated papaya. The results demonstrated that sample size reduction is an inevitable source of uncertainty in pesticide residue analysis of large-sized crops. The post-harvest treatment resulted in a lower variability because the dipping process leads to a more uniform residue concentration on the surface of the crops than does the foliar application of pesticides.

2. Mongoloid-Caucasoid Differences in Brain Size from Military Samples.

Science.gov (United States)

Rushton, J. Philippe; And Others

1991-01-01

Calculation of cranial capacities for the means from 4 Mongoloid and 20 Caucasoid samples (raw data from 57,378 individuals in 1978) found larger brain size for Mongoloids, a finding discussed in evolutionary terms. The conclusion is disputed by L. Willerman but supported by J. P. Rushton. (SLD)

3. Neuromuscular dose-response studies: determining sample size.

Science.gov (United States)

Kopman, A F; Lien, C A; Naguib, M

2011-02-01

Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
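A normal-approximation analogue of the abstract's t-based calculation can be sketched with the Python standard library. The COV and allowable-error values come from the abstract; the specific formula below is our choice of a standard form, not the authors' exact procedure:

```python
from math import ceil
from statistics import NormalDist

def n_for_relative_error(cov, allowable_error, alpha=0.05, power=0.80):
    """Normal-approximation sample size for estimating a mean (here an
    ED50) to within a given relative error, given the coefficient of
    variation (COV) of the estimate."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(((z_a + z_b) * cov / allowable_error) ** 2)

n = n_for_relative_error(cov=0.25, allowable_error=0.15)
```

The normal approximation gives n = 22 here; the slightly larger n = 24 reported in the abstract is consistent with using heavier-tailed t quantiles for small samples.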

4. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

Science.gov (United States)

Morgan, Timothy M; Case, L Douglas

2013-07-05

In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
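The quoted 44%, 56%, and 61% savings can be reproduced with one standard form of the compound-symmetry variance factor for a baseline-adjusted mean of k follow-up measures. The formula is our assumption, chosen because it matches the abstract's figures exactly:

```python
def cs_variance_factor(rho, k):
    """Variance of the baseline-adjusted mean of k follow-up measures,
    relative to a single unadjusted measurement, under a compound
    symmetry correlation rho (one standard form of the factor)."""
    return (1 + (k - 1) * rho) / k - rho ** 2

def conservative_saving(k):
    """Largest (worst-case) factor over rho in [0, 1], and the implied
    guaranteed percent saving versus the two-sample t-test size."""
    rho_star = (k - 1) / (2 * k)        # maximizes the factor (concave in rho)
    return round(100 * (1 - cs_variance_factor(rho_star, k)))

savings = [conservative_saving(k) for k in (2, 3, 4)]   # [44, 56, 61]
```

Even at the least favorable correlation, the repeated-measures design never needs more than 56% of the t-test sample size once two follow-up measures are taken.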

5. Sample size formulae for the Bayesian continual reassessment method.

Science.gov (United States)

Cheung, Ying Kuen

2013-01-01

In the planning of a dose finding study, a primary design objective is to maintain high accuracy in terms of the probability of selecting the maximum tolerated dose. While numerous dose finding methods have been proposed in the literature, concrete guidance on sample size determination is lacking. With a motivation to provide quick and easy calculations during trial planning, we present closed form formulae for sample size determination associated with the use of the Bayesian continual reassessment method (CRM). We examine the sampling distribution of a nonparametric optimal design and exploit it as a proxy to empirically derive an accuracy index of the CRM using linear regression. We apply the formulae to determine the sample size of a phase I trial of PTEN-long in pancreatic cancer patients and demonstrate that the formulae give results very similar to simulation. The formulae are implemented by an R function 'getn' in the package 'dfcrm'. The results are developed for the Bayesian CRM and should be validated by simulation when used for other dose finding methods. The analytical formulae we propose give quick and accurate approximation of the required sample size for the CRM. The approach used to derive the formulae can be applied to obtain sample size formulae for other dose finding methods.

6. Defining sample size and sampling strategy for dendrogeomorphic rockfall reconstructions

Science.gov (United States)

Morel, Pauline; Trappmann, Daniel; Corona, Christophe; Stoffel, Markus

2015-05-01

Optimized sampling strategies have been recently proposed for dendrogeomorphic reconstructions of mass movements with a large spatial footprint, such as landslides, snow avalanches, and debris flows. Such guidelines have, by contrast, been largely missing for rockfalls and cannot be transposed owing to the sporadic nature of this process and the occurrence of individual rocks and boulders. Based on a data set of 314 European larch (Larix decidua Mill.) trees (i.e., 64 trees/ha), growing on an active rockfall slope, this study bridges this gap and proposes an optimized sampling strategy for the spatial and temporal reconstruction of rockfall activity. Using random extractions of trees, iterative mapping, and a stratified sampling strategy based on an arbitrary selection of trees, we investigate subsets of the full tree-ring data set to define optimal sample size and sampling design for the development of frequency maps of rockfall activity. Spatially, our results demonstrate that the sampling of only 6 representative trees per ha can be sufficient to yield a reasonable mapping of the spatial distribution of rockfall frequencies on a slope, especially if the oldest and most heavily affected individuals are included in the analysis. At the same time, however, sampling such a low number of trees risks causing significant errors especially if nonrepresentative trees are chosen for analysis. An increased number of samples therefore improves the quality of the frequency maps in this case. Temporally, we demonstrate that at least 40 trees/ha are needed to obtain reliable rockfall chronologies. These results will facilitate the design of future studies, decrease the cost-benefit ratio of dendrogeomorphic studies and thus will permit production of reliable reconstructions with reasonable temporal efforts.

7. Sample size estimation and sampling techniques for selecting a representative sample

OpenAIRE

Aamir Omair

2014-01-01

Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect ...

8. Sample size estimation and sampling techniques for selecting a representative sample

Directory of Open Access Journals (Sweden)

Aamir Omair

2014-01-01

Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the required precision, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied for health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
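The factors listed (confidence level, expected proportion, precision, and population size) combine in the standard formula for estimating a proportion; a sketch with an optional finite-population correction (the parameter values below are illustrative, not from the article):

```python
from math import ceil
from statistics import NormalDist

def sample_size_proportion(p, margin, confidence=0.95, population=None):
    """n needed to estimate a proportion p to within +/- margin at the
    given confidence level, with an optional finite-population
    correction for small target populations."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)   # finite-population correction
    return ceil(n0)

n_large = sample_size_proportion(0.5, 0.05)                  # 385
n_small = sample_size_proportion(0.5, 0.05, population=1000)
```

Using p = 0.5 is the conservative choice when the expected proportion is unknown, since it maximizes p(1 - p).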

9. Sample size of the reference sample in a case-augmented study.

Science.gov (United States)

Ghosh, Palash; Dewanji, Anup

2017-05-01

The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

10. IBAR: Interacting boson model calculations for large system sizes

Science.gov (United States)

Casperson, R. J.

2012-04-01

Scaling the system size of the interacting boson model-1 (IBM-1) into the realm of hundreds of bosons has many interesting applications in the field of nuclear structure, most notably quantum phase transitions in nuclei. We introduce IBAR, a new software package for calculating the eigenvalues and eigenvectors of the IBM-1 Hamiltonian for large numbers of bosons. Energies and wavefunctions of the nuclear states, as well as transition strengths between them, are calculated using these values. Numerical errors in the recursive calculation of reduced matrix elements of the d-boson creation operator are reduced by using an arbitrary precision mathematical library. This software has been tested for up to 1000 bosons using comparisons to analytic expressions. Comparisons have also been made to the code PHINT for smaller system sizes.
Catalogue identifier: AELI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 28 734
No. of bytes in distributed program, including test data, etc.: 4 104 467
Distribution format: tar.gz
Programming language: C++
Computer: Any computer system with a C++ compiler
Operating system: Tested under Linux
RAM: 150 MB for 1000 boson calculations with angular momenta of up to L=4
Classification: 17.18, 17.20
External routines: ARPACK (http://www.caam.rice.edu/software/ARPACK/)
Nature of problem: Construction and diagonalization of large Hamiltonian matrices, using reduced matrix elements of the d-boson creation operator.
Solution method: Reduced matrix elements of the d-boson creation operator have been stored in data files at machine precision, after being recursively calculated with higher than machine precision. The Hamiltonian matrix is calculated and diagonalized, and the requested transition strengths are calculated.

11. Estimation of individual reference intervals in small sample sizes

DEFF Research Database (Denmark)

Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

2007-01-01

In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological variation...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...

12. Calculation of size and location of autologous ipsilateral rotating keratoplasty.

Science.gov (United States)

Jonas, J B; Panda-Jonas, S

1994-09-01

Autologous ipsilateral rotating keratoplasty is a special form of keratoplasty in which a nonprogressive opacification of the centre of the cornea is rotated towards the limbus and the clear peripheral cornea is rotated into the optical axis of the eye. This study was performed to find an equation for calculating the best size and location of the trephine for this special kind of keratoplasty. Geometrical calculations were used to derive the formula. We arrived at the following equation for the best diameter of the trephine: diameter(trephine) = 3/4 diameter(cornea)-1/2e(e = preoperative distance between corneal centre and nearest edge of opacity covering corneal centre). The postoperative diameter of the optical zone is: 2 x diameter(trephine)-diameter(cornea). A postoperative clear optical zone of half the corneal diameter is achieved if the opacity just touches but does not extend beyond the corneal centre. For a postoperative optical zone of at least 30% (40%) of the corneal diameter, the opacity is preoperatively not allowed to extend beyond the corneal centre for more than 20% (10%) of the corneal diameter. This equation can be used for calculating the optimum size and location of the trephine for autologous ipsilateral rotational keratoplasty. For that purpose it is advisable to take photographs preoperatively.
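The abstract's equations are simple enough to check numerically. The sign convention for e below (the distance by which the opacity extends past the corneal centre, 0 if it just touches it) is our reading, chosen so the abstract's 50%, 30%, and 20% statements come out right:

```python
def trephine_plan(cornea_d, e):
    """Trephine diameter and resulting clear optical zone (all in mm)
    for autologous ipsilateral rotating keratoplasty, from the
    equations in the abstract:
        trephine     = 3/4 * cornea_d - 1/2 * e
        optical_zone = 2 * trephine - cornea_d
    Here e is the distance the opacity extends past the corneal centre.
    """
    trephine = 0.75 * cornea_d - 0.5 * e
    optical_zone = 2 * trephine - cornea_d
    return trephine, optical_zone

# Opacity just touching the centre of an 11.5 mm cornea:
t0, zone0 = trephine_plan(11.5, 0.0)   # zone0 is half the corneal diameter
```

With e equal to 20% of the corneal diameter, the optical zone drops to 30% of the corneal diameter, matching the abstract's limit.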

13. Randomized controlled trials 5: Determining the sample size and power for clinical trials and cohort studies.

Science.gov (United States)

Greene, Tom

2015-01-01

Performing well-powered randomized controlled trials is of fundamental importance in clinical research. The goal of sample size calculations is to assure that statistical power is acceptable while maintaining a small probability of a type I error. This chapter overviews the fundamentals of sample size calculation for standard types of outcomes for two-group studies. It considers (1) the problems of determining the size of the treatment effect that the studies will be designed to detect, (2) the modifications to sample size calculations to account for loss to follow-up and nonadherence, (3) the options when initial calculations indicate that the feasible sample size is insufficient to provide adequate power, and (4) the implication of using multiple primary endpoints. Sample size estimates for longitudinal cohort studies must take account of confounding by baseline factors.
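Points (1) and (2) above can be illustrated with the textbook two-group formula for comparing means plus a dropout inflation; the effect size and dropout rate below are made-up illustrations, not values from the chapter:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80, dropout=0.0):
    """Per-group sample size for a two-sample comparison of means,
    inflated for an anticipated dropout fraction."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n = 2 * ((z_a + z_b) * sd / delta) ** 2
    return ceil(n / (1 - dropout))

n_complete = n_per_group(delta=5, sd=10)                  # 63 per group
n_with_dropout = n_per_group(delta=5, sd=10, dropout=0.10)
```

Anticipating 10% loss to follow-up raises the recruitment target from 63 to 70 per group, the kind of adjustment described in point (2).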

14. How many is enough? Determining optimal sample sizes for normative studies in pediatric neuropsychology.

Science.gov (United States)

Bridges, Ana J; Holler, Karen A

2007-11-01

The purpose of this investigation was to determine how confidence intervals (CIs) for pediatric neuropsychological norms vary as a function of sample size, and to determine optimal sample sizes for normative studies. First, the authors calculated 95% CIs for a set of published pediatric norms for four commonly used neuropsychological instruments. Second, 95% CIs were calculated for varying sample size (from n = 5 to n = 500). Results suggest that some pediatric norms have unacceptably wide CIs, and normative studies ought optimally to use 50 to 75 participants per cell. Smaller sample sizes may lead to overpathologizing results, while the cost of obtaining larger samples may not be justifiable.
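How the CI around a normative mean narrows with sample size can be sketched directly; the sd of 15 (a standard-score metric) is our assumption for illustration, not taken from the study:

```python
from statistics import NormalDist

def ci_half_width(sd, n, confidence=0.95):
    """Half-width of the normal-theory confidence interval for a
    normative mean based on n participants."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z * sd / n ** 0.5

# Standard-score metric (sd = 15, assumed): precision per cell size
widths = {n: round(ci_half_width(15, n), 1) for n in (5, 25, 50, 75, 500)}
```

The half-width shrinks roughly fourfold between n = 5 and n = 75 per cell, after which further gains are modest, consistent with the 50-75 recommendation.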

15. Dose Rate Calculations for Rotary Mode Core Sampling Exhauster

CERN Document Server

Foust, D J

2000-01-01

This document provides the calculated estimated dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.

16. Sample Size Growth with an Increasing Number of Comparisons

Directory of Open Access Journals (Sweden)

Chi-Hong Tseng

2012-01-01

Full Text Available An appropriate sample size is crucial for the success of many studies that involve a large number of comparisons. Sample size formulas for testing multiple hypotheses are provided in this paper. They can be used to determine the sample sizes required to provide adequate power while controlling familywise error rate or false discovery rate, to derive the growth rate of sample size with respect to an increasing number of comparisons or decrease in effect size, and to assess reliability of study designs. It is demonstrated that practical sample sizes can often be achieved even when adjustments for a large number of comparisons are made as in many genomewide studies.
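The growth of sample size with the number of comparisons can be illustrated with a Bonferroni familywise adjustment (one of the error-rate controls the paper considers; the standardized effect size of 0.5 is an arbitrary illustration):

```python
from math import ceil
from statistics import NormalDist

def n_bonferroni(m, effect=0.5, alpha=0.05, power=0.80):
    """Per-group n for each of m two-sample mean comparisons, holding
    the familywise error rate at alpha via a Bonferroni adjustment,
    for a standardized effect size `effect`."""
    z_a = NormalDist().inv_cdf(1 - alpha / (2 * m))   # adjusted quantile
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / effect) ** 2)

growth = [n_bonferroni(m) for m in (1, 10, 100)]
```

Going from 1 to 100 comparisons raises the per-group n from 63 to 150, far less than a hundredfold, which is the slow growth the paper formalizes.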

17. A power analysis for fidelity measurement sample size determination.

Science.gov (United States)

Stokes, Lynne; Allor, Jill H

2016-03-01

The importance of assessing fidelity has been emphasized recently with increasingly sophisticated definitions, assessment procedures, and integration of fidelity data into analyses of outcomes. Fidelity is often measured through observation and coding of instructional sessions, either live or by video. However, little guidance has been provided about how to determine the number of observations needed to precisely measure fidelity. We propose a practical method for determining a reasonable sample size for fidelity data collection when fidelity assessment requires observation. The proposed methodology is based on consideration of the power of tests of the treatment effect on the outcome itself, as well as of the relationship between fidelity and outcome. It makes use of the methodology of probability sampling from a finite population, because the fidelity parameters of interest are estimated over a specific, limited time frame using a sample. For example, consider a fidelity measure defined as the number of minutes of exposure to a treatment curriculum during the 36 weeks of the study. In this case, the finite population is the 36 sessions, the parameter (number of minutes over the entire 36 sessions) is a total, and the sample is the observed sessions. Software for the sample size calculation is provided. (c) 2016 APA, all rights reserved.

18. SNS Sample Activation Calculator Flux Recommendations and Validation

Energy Technology Data Exchange (ETDEWEB)

McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)

2015-02-01

The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.

19. Optimal flexible sample size design with robust power.

Science.gov (United States)

Zhang, Lanju; Cui, Lu; Yang, Bo

2016-08-30

It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

20. Sample size determination in medical and surgical research.

Science.gov (United States)

Flikkema, Robert M; Toledo-Pereyra, Luis H

2012-02-01

One of the most critical yet frequently misunderstood principles of research is sample size determination. Obtaining an inadequate sample is a serious problem that can invalidate an entire study. Without an extensive background in statistics, the seemingly simple question of selecting a sample size can become quite a daunting task. This article aims to give a researcher with no background in statistics the basic tools needed for sample size determination. After reading this article, the researcher will be aware of all the factors involved in a power analysis and will be able to work more effectively with the statistician when determining sample size. This work also reviews the power of a statistical hypothesis test, as well as how to estimate the effect size of a research study. These are the two key components of sample size determination. Several examples will be considered throughout the text.

1. A review of software for sample size determination.

Science.gov (United States)

Dattalo, Patrick

2009-09-01

The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities.

2. Determination of the optimal sample size for a clinical trial accounting for the population size

Science.gov (United States)

Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

2016-01-01

The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. PMID:27184938

3. Sample size calculations for 3-level cluster randomized trials

NARCIS (Netherlands)

Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

2008-01-01

BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health

5. Estimating population size with correlated sampling unit estimates

Science.gov (United States)

David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

2003-01-01

Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark-recapture or distance sampling methods occur...

6. Sample size computation for association studies using case–parents ...

Sample size for case–control association studies is discussed. Materials and methods: parameter settings. We consider a candidate locus with two alleles A and a, where A is putatively associated with the disease status (increasing ... Keywords: sample size; association tests; genotype relative risk; power; autism. Journal of ...

7. Understanding Power and Rules of Thumb for Determining Sample Sizes

OpenAIRE

Betsy L. Morgan; Carmen R. Wilson Van Voorhis

2007-01-01

This article addresses the definition of power and its relationship to Type I and Type II errors. We discuss the relationship of sample size and power. Finally, we offer statistical rules of thumb guiding the selection of sample sizes large enough for sufficient power to detect differences, associations, chi-square, and factor analyses.

8. Understanding Power and Rules of Thumb for Determining Sample Sizes

Directory of Open Access Journals (Sweden)

Betsy L. Morgan

2007-09-01

Full Text Available This article addresses the definition of power and its relationship to Type I and Type II errors. We discuss the relationship of sample size and power. Finally, we offer statistical rules of thumb guiding the selection of sample sizes large enough for sufficient power to detect differences, associations, chi-square, and factor analyses.

9. Considerations in determining sample size for pilot studies.

Science.gov (United States)

Hertzog, Melody A

2008-04-01

There is little published guidance concerning how large a pilot study should be. General guidelines, for example using 10% of the sample required for a full study, may be inadequate for aims such as assessment of the adequacy of instrumentation or providing statistical estimates for a larger study. This article illustrates how confidence intervals constructed around a desired or anticipated value can help determine the sample size needed. Samples ranging in size from 10 to 40 per group are evaluated for their adequacy in providing estimates precise enough to meet a variety of possible aims. General sample size guidelines by type of aim are offered.

10. Determining the sample size required for a community radon survey.

Science.gov (United States)

Chen, Jing; Tracy, Bliss L; Zielinski, Jan M; Moir, Deborah

2008-04-01

Radon measurements in homes and other buildings have been included in various community health surveys, often dealing with only a few hundred randomly sampled households. It would be interesting to know whether such a small sample size can adequately represent the radon distribution in a large community. An analysis of radon measurement data obtained from the Winnipeg case-control study, using randomly sampled subsets of different sizes, has shown that a sample size of one to several hundred can serve the survey purpose well.

11. Sampling strategies for estimating brook trout effective population size

Science.gov (United States)

Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

2012-01-01

The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

12. Determination of the optimal sample size for a clinical trial accounting for the population size.

Science.gov (United States)

Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

2017-07-01

The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
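
The square-root scaling can be reproduced with a much simpler model than the paper's utility framework. In this hypothetical sketch (assumed Bernoulli response rates p1 and p2, normal approximation for the selection probability), a two-arm trial of n patients per arm selects the apparently better arm, the remaining N - 2n patients receive the selected arm, and the trial size maximizing expected successes grows roughly like the square root of N:

```python
from scipy import stats

def expected_gain(n, N, p1=0.6, p2=0.5):
    """Expected total successes: n per arm in the trial, then the
    remaining N - 2n patients treated with the selected arm."""
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5
    p_correct = stats.norm.cdf((p1 - p2) / se)   # prob. of picking the better arm
    return n * (p1 + p2) + (N - 2 * n) * (p_correct * p1 + (1 - p_correct) * p2)

def optimal_n(N):
    """Trial size per arm maximizing the expected gain for population N."""
    return max(range(1, N // 2), key=lambda n: expected_gain(n, N))

for N in (1000, 4000, 16000):
    print(N, optimal_n(N))
```

Quadrupling N should roughly double the optimal trial size, consistent with the O(N^1/2) result, though this toy model is not the paper's exponential-family derivation.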

13. Clinical trials with nested subgroups: Analysis, sample size determination and internal pilot studies.

Science.gov (United States)

Placzek, Marius; Friede, Tim

2017-01-01

The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for the analysis and sample size determination. First, we consider the joint distribution of standardized test statistics that correspond to each (sub)population. We derive multivariate exact distributions where possible, providing approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainties about nuisance parameters which are needed for sample size calculations make the study prone to misspecifications. We discuss how a sample size review can be performed in order to make the study more robust. To this end, we implement an internal pilot study design where the variances and prevalences of the subgroups are reestimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not inflate the type I error significantly and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and derive a lower bound for the size of the internal pilot study.

14. Determining the effective sample size of a parametric prior.

Science.gov (United States)

Morita, Satoshi; Thall, Peter F; Müller, Peter

2008-06-01

We present a definition for the effective sample size of a parametric prior distribution in a Bayesian model, and propose methods for computing the effective sample size in a variety of settings. Our approach first constructs a prior chosen to be vague in a suitable sense, and updates this prior to obtain a sequence of posteriors corresponding to each of a range of sample sizes. We then compute a distance between each posterior and the parametric prior, defined in terms of the curvature of the logarithm of each distribution, and the posterior minimizing the distance defines the effective sample size of the prior. For cases where the distance cannot be computed analytically, we provide a numerical approximation based on Monte Carlo simulation. We provide general guidelines for application, illustrate the method in several standard cases where the answer seems obvious, and then apply it to some nonstandard settings.
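
One of the "standard cases where the answer seems obvious" is the conjugate Beta-binomial model, where a Beta(a, b) prior is conventionally worth a + b observations. The sketch below is a deliberately simplified illustration of the curvature-matching idea (an assumption of this sketch: distance is measured only by the second derivative of the log density at the prior mean, not the paper's full definition): a vague Beta(eps, eps) prior is updated with m pseudo-observations, and the m whose posterior curvature best matches the target prior defines the effective sample size.

```python
import math

def log_curvature(a, b, theta):
    # second derivative of the log Beta(a, b) density at theta
    return -(a - 1) / theta ** 2 - (b - 1) / (1 - theta) ** 2

def beta_ess(a, b, eps=0.01, max_m=200):
    """Effective sample size of a Beta(a, b) prior: the number of
    pseudo-observations m that moves a vague Beta(eps, eps) prior
    closest (in log-curvature at the prior mean) to Beta(a, b)."""
    theta = a / (a + b)                    # prior mean
    target = log_curvature(a, b, theta)
    best_m, best_d = None, math.inf
    for m in range(1, max_m):
        post_a = eps + m * theta           # expected successes among m trials
        post_b = eps + m * (1 - theta)
        d = abs(log_curvature(post_a, post_b, theta) - target)
        if d < best_d:
            best_m, best_d = m, d
    return best_m

print(beta_ess(3, 7))
```

For Beta(3, 7) the search recovers the conventional answer a + b = 10; the value of the general method is that it extends to settings where no such closed-form intuition exists.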

15. Calculation of computer-generated hologram (CGH) from 3D object of arbitrary size and viewing angle

Science.gov (United States)

Xu, Liyao; Chang, Chenliang; Feng, Shaotong; Yuan, Caojin; Nie, Shouping

2017-11-01

We propose a method to calculate a computer-generated hologram (CGH) from a 3D object of arbitrary size and viewing angle. The spectrum relation between a 3D voxel object and the CGH wavefront is established. The CGH is generated via diffraction calculation from the 3D voxel object, based on a scaled Fresnel diffraction algorithm developed here. This calculation method overcomes the sampling limitations imposed by conventional Fourier-transform-based algorithms, enabling the calculation and reconstruction of 3D objects with arbitrary size and viewing angle. Both simulation and optical experiments validate the proposed method.

16. Effects of Mesh Size on Sieved Samples of Corophium volutator

Science.gov (United States)

Crewe, Tara L.; Hamilton, Diana J.; Diamond, Antony W.

2001-08-01

Corophium volutator (Pallas), gammaridean amphipods found on intertidal mudflats, are frequently collected in mud samples sieved on mesh screens. However, mesh sizes used vary greatly among studies, raising the possibility that sampling methods bias results. The effect of using different mesh sizes on the resulting size-frequency distributions of Corophium was tested by collecting Corophium from mud samples with 0.5 and 0.25 mm sieves. More than 90% of Corophium less than 2 mm long passed through the larger sieve. A significantly smaller, but still substantial, proportion of 2-2.9 mm Corophium (30%) was also lost. Larger size classes were unaffected by mesh size. Mesh size significantly changed the observed size-frequency distribution of Corophium, and effects varied with sampling date. It is concluded that a 0.5 mm sieve is suitable for studies concentrating on adults, but to accurately estimate Corophium density and size-frequency distributions, a 0.25 mm sieve must be used.

17. Effects of sample size on the second magnetization peak in ...

8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

18. Planning Longitudinal Field Studies: Considerations in Determining Sample Size.

Science.gov (United States)

St.Pierre, Robert G.

1980-01-01

Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)

19. Investigating the impact of sample size on cognate detection

OpenAIRE

List, Johann-Mattis

2013-01-01

International audience; In historical linguistics, the problem of cognate detection is traditionally approached within the framework of the comparative method. Since the method is usually carried out manually, it is very flexible regarding its input parameters. However, while the number of languages and the selection of comparanda are not important for the successful application of the method, the sample size of the comparanda is. In order to shed light on the impact of sample size on cognat...

20. Sample size requirements for training high-dimensional risk predictors.

Science.gov (United States)

Dobbin, Kevin K; Song, Xiao

2013-09-01

A common objective of biomarker studies is to develop a predictor of patient survival outcome. Determining the number of samples required to train a predictor from survival data is important for designing such studies. Existing sample size methods for training studies use parametric models for the high-dimensional data and cannot handle a right-censored dependent variable. We present a new training sample size method that is non-parametric with respect to the high-dimensional vectors, and is developed for a right-censored response. The method can be applied to any prediction algorithm that satisfies a set of conditions. The sample size is chosen so that the expected performance of the predictor is within a user-defined tolerance of optimal. The central method is based on a pilot dataset. To quantify uncertainty, a method to construct a confidence interval for the tolerance is developed. Adequacy of the size of the pilot dataset is discussed. An alternative model-based version of our method for estimating the tolerance when no adequate pilot dataset is available is presented. The model-based method requires a covariance matrix be specified, but we show that the identity covariance matrix provides adequate sample size when the user specifies three key quantities. Application of the sample size method to two microarray datasets is discussed.

DEFF Research Database (Denmark)

Bourdakis, Eleftherios; Kazanci, Ongun Berk; Olesen, Bjarne W.

2015-01-01

The aim of this study was, by using building simulation software, to prove that a radiant cooling system should not be sized based on the maximum cooling load but at a lower value. For that reason, six radiant cooling models were simulated with two control principles using 100%, 70% and 50% of the maximum cooling load. It was concluded that all tested systems were able to provide an acceptable thermal environment even when 50% of the maximum cooling load was used. Of all the simulated systems, the one that performed best under both control principles was the ESCS ceiling system. Finally, it was shown that ventilation systems should be sized based on the maximum cooling load.

2. Sample size matters: investigating the effect of sample size on a logistic regression debris flow susceptibility model

Science.gov (United States)

Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

2013-06-01

Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial datasets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In view of these results, we

3. Sample size matters: investigating the effect of sample size on a logistic regression susceptibility model for debris flows

Science.gov (United States)

Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

2014-02-01

Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable and reproducible results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and they approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial data sets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In

4. Sample sizes to control error estimates in determining soil bulk density in California forest soils

Science.gov (United States)

Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

2016-01-01

Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...

5. CT dose survey in adults: what sample size for what precision?

Energy Technology Data Exchange (ETDEWEB)

Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

2017-01-15

To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95% confidence interval, expressed as a percentage of the median (CI95/med), was calculated for increasing sample sizes, and we deduced the sample size that keeps CI95/med at or below 10%. The sample size ensuring CI95/med ≤ 10% ranged from 15 to 900 depending on the body region and the dose descriptor considered. For the sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
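
The dependence of CI95/med on sample size can be reproduced on synthetic data. In this sketch, the "population" of dose values is an assumed lognormal distribution (a stand-in for the authors' survey data, chosen because dose descriptors are typically right-skewed), and the 95% interval of sample means is expressed as a percentage of the population median:

```python
import random
import statistics

random.seed(7)
# synthetic DLP-like values; the lognormal shape is an assumption of this sketch
population = [random.lognormvariate(0, 0.5) for _ in range(20000)]
median = statistics.median(population)

def ci95_over_median(n, reps=2000):
    """Width of the 95% interval of sample means (n acquisitions per
    sample), expressed as a percentage of the population median."""
    means = sorted(statistics.fmean(random.sample(population, n))
                   for _ in range(reps))
    lo, hi = means[int(0.025 * reps)], means[int(0.975 * reps)]
    return 100 * (hi - lo) / median

for n in (10, 100, 900):
    print(n, round(ci95_over_median(n), 1))
```

The interval shrinks roughly with the square root of n, which is why moving from the regulatory 10-20 patients to several hundred reduces the sampling error so markedly.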

6. Size variation in samples of fossil and recent murid teeth

NARCIS (Netherlands)

Freudenthal, M.; Martín Suárez, E.

1990-01-01

The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed and compared with those obtained for the Cricetidae.

7. Sample Size Requirements for Traditional and Regression-Based Norms.

Science.gov (United States)

Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

2016-04-01

Test norms make it possible to determine the position of an individual test taker in the group. The most frequently used approach to obtain test norms is traditional norming. Regression-based norming may be more efficient than traditional norming and is rapidly growing in popularity, but little is known about its technical properties. A simulation study was conducted to compare the sample size requirements for traditional and regression-based norming by examining the 95% interpercentile ranges for percentile estimates as a function of sample size, norming method, size of covariate effects on the test score, test length, and number of answer categories in an item. Provided the assumptions of the linear regression model hold in the data, for a subdivision of the total group into eight equal-size subgroups, we found that regression-based norming requires samples 2.5 to 5.5 times smaller than traditional norming. Sample size requirements are presented for each norming method, test length, and number of answer categories. We emphasize that additional research is needed to establish sample size requirements when the assumptions of the linear regression model are violated. © The Author(s) 2015.

OpenAIRE

Bourdakis, Eleftherios; Kazanci, Ongun B.; Olesen, Bjarne W.

2015-01-01

The aim of this study was, by using a building simulation software, to prove that a radiant cooling system should not be sized based on the maximum cooling load but at a lower value. For that reason six radiant cooling models were simulated with two control principles using 100%, 70% and 50% of the maximum cooling load. It was concluded that all tested systems were able to provide an acceptable thermal environment even when the 50% of the maximum cooling load was used. From all the simulated ...

9. An Update on Using the Range to Estimate σ When Determining Sample Sizes.

Science.gov (United States)

Rhiel, George Steven; Markowski, Edward

2017-04-01

In this research, we develop a strategy for using a range estimator of σ when determining a sample size for estimating a mean. Previous research by Rhiel is extended to provide d_n values for use in calculating a range estimate of σ when working with sampling frames up to size 1,000,000. This allows the use of the range estimator of σ with "big data." A strategy is presented for using the range estimator of σ to determine sample sizes, based on the d_n values developed in this study.
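
The workflow can be sketched in two steps: estimate σ from the observed range as R / d_n, then plug that estimate into the usual sample-size formula for a mean. The d_n table below holds a few standard control-chart constants for small frames (illustrative values only; the article's contribution is extending such tables to frames up to 1,000,000):

```python
import math

# d_n: expected range of a standard normal sample of size n
# (standard control-chart d2 constants; illustrative subset)
D_N = {5: 2.326, 10: 3.078, 25: 3.931, 100: 5.015}

def sigma_from_range(sample_range, frame_size):
    """Range estimate of sigma: sigma ~ R / d_n."""
    return sample_range / D_N[frame_size]

def n_for_mean(sigma, margin, conf_z=1.96):
    """Sample size so the mean estimate is within `margin`
    of the true mean with roughly 95% confidence."""
    return math.ceil((conf_z * sigma / margin) ** 2)

sigma = sigma_from_range(40.0, 25)     # observed range of 40 in a frame of 25
print(round(sigma, 2), n_for_mean(sigma, margin=2.0))
```

The appeal of the range estimator is that it needs only the minimum and maximum of the frame, which is cheap to obtain even for very large datasets.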

10. Mini-batch stochastic gradient descent with dynamic sample sizes

OpenAIRE

Metel, Michael R.

2017-01-01

We focus on solving constrained convex optimization problems using mini-batch stochastic gradient descent. Dynamic sample size rules are presented which ensure a descent direction with high probability. Empirical results from two applications show superior convergence compared to fixed sample implementations.
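
A common dynamic sample size rule of this kind is a variance ("norm") test: grow the mini-batch whenever the sampled gradient's variance dominates its squared norm, so that the averaged step remains a descent direction with high probability. The least-squares sketch below illustrates the idea (an illustrative rule and problem, not the paper's exact formulation, and unconstrained for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=5000)

w = np.zeros(3)
batch, lr = 8, 0.05
for step in range(300):
    idx = rng.choice(len(X), size=batch, replace=False)
    resid = X[idx] @ w - y[idx]
    grads = resid[:, None] * X[idx]        # per-sample gradients of 0.5 * resid^2
    g = grads.mean(axis=0)
    # norm test: if gradient noise dominates the signal, double the batch
    # so the averaged step stays a descent direction with high probability
    if grads.var(axis=0).sum() / batch > g @ g:
        batch = min(2 * batch, len(X))
    w -= lr * g

print(batch, np.round(w, 2))
```

Small batches are cheap early on, when any direction is roughly downhill; as the iterate approaches the optimum and the signal-to-noise ratio of the gradient drops, the rule grows the batch automatically.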

11. Element enrichment factor calculation using grain-size distribution and functional data regression.

Science.gov (United States)

Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

2015-01-01

In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.

12. Determining sample size and a passing criterion for respirator fit-test panels.

Science.gov (United States)

Landsittel, D; Zhuang, Z; Newcomb, W; Berry Ann, R

2014-01-01

Few studies have proposed methods for sample size determination and specification of a passing criterion (e.g., number needed to pass from a given size panel) for respirator fit-tests. One approach is to account for between- and within-subject variability, and thus take full advantage of the multiple donning measurements within subject, using a random effects model. The corresponding sample size calculation, however, may be difficult to implement in practice, as it depends on the model-specific and test panel-specific variance estimates, and thus does not yield a single sample size or specific cutoff for number needed to pass. A simple binomial approach is therefore proposed to simultaneously determine both the required sample size and the optimal cutoff for the number of subjects needed to achieve a passing result. The method essentially conducts a global search of the type I and type II errors under different null and alternative hypotheses, across the range of possible sample sizes, to find the lowest sample size which yields at least one cutoff satisfying, or approximately satisfying, all pre-determined limits for the different error rates. Benchmark testing of 98 respirators (conducted by the National Institute for Occupational Safety and Health) is used to illustrate the binomial approach and show how sample size estimates from the random effects model can vary substantially depending on estimated variance components. For the binomial approach, probability calculations show that a sample size of 35 to 40 yields acceptable error rates under different null and alternative hypotheses. For the random effects model, the required sample sizes are generally smaller, but can vary substantially based on the estimated variance components. Overall, despite some limitations, the binomial approach represents a highly practical approach with reasonable statistical properties.
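
The global search can be sketched directly from binomial tail probabilities. The pass probabilities p0 (respirator inadequate) and p1 (respirator adequate) and the error limits below are hypothetical values chosen for illustration, not the NIOSH benchmark settings:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def find_panel(p0=0.70, p1=0.90, alpha=0.05, beta=0.10, max_n=60):
    """Smallest panel size n with a cutoff c (subjects passing) such that
    an inadequate respirator (pass prob p0) is accepted with prob <= alpha
    and an adequate one (pass prob p1) is rejected with prob <= beta."""
    for n in range(5, max_n + 1):
        for c in range(1, n + 1):
            type1 = binom_sf(c, n, p0)          # accept a bad respirator
            type2 = 1 - binom_sf(c, n, p1)      # reject a good respirator
            if type1 <= alpha and type2 <= beta:
                return n, c
    return None

print(find_panel())
```

Under these illustrative settings the search lands in the mid-30s to around 40 subjects, in line with the 35-40 range the abstract reports for the binomial approach.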

13. Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty.

Science.gov (United States)

Anderson, Samantha F; Kelley, Ken; Maxwell, Scott E

2017-11-01

The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
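
The underpowering effect is easy to reproduce with a normal-approximation power calculation. This sketch shows only the phenomenon, not the BUCSS adjustment itself; the effect sizes d = 0.5 (published, inflated) and 0.35 (true) are hypothetical:

```python
from scipy import stats

def n_per_group(es, alpha=0.05, power=0.80):
    """Two-sample comparison (normal approx.): n per group to detect es."""
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    return ((za + zb) / es) ** 2 * 2

def power_at(es, n, alpha=0.05):
    """Achieved power for effect es with n subjects per group."""
    za = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(es * (n / 2) ** 0.5 - za)

n = n_per_group(0.5)                 # planned from the published d = 0.5
print(round(n), round(power_at(0.35, n), 2))
```

Planning for d = 0.5 gives about 63 per group, but if the true effect is only 0.35 that study runs at roughly 50% power rather than the nominal 80%, which is exactly the failure mode the adjustment methods target.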

14. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

Science.gov (United States)

Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

2017-10-03

To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0 : ES = 0 versus alternative hypotheses H1 : ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study's ES^. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.

15. Análise do emprego do cálculo amostral e do erro do método em pesquisas científicas publicadas na literatura ortodôntica nacional e internacional Analysis of the use of sample size calculation and error of method in researches published in Brazilian and international orthodontic journals

Directory of Open Access Journals (Sweden)

David Normando

2011-12-01

16. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

Directory of Open Access Journals (Sweden)

Bruno Giacomini Sari

2017-09-01

Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants, and the others were obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
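
The bootstrap procedure can be sketched to show why weak correlations demand larger samples. The data below are synthetic bivariate-normal draws (a stand-in for the tomato measurements, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_amplitude(r_true, n, boots=3000):
    """95% bootstrap CI amplitude for Pearson r from one sample of n plants."""
    cov = [[1.0, r_true], [r_true, 1.0]]
    x, y = rng.multivariate_normal([0, 0], cov, size=n).T
    rs = []
    for _ in range(boots):
        idx = rng.integers(0, n, n)          # re-sample plants with replacement
        rs.append(np.corrcoef(x[idx], y[idx])[0, 1])
    lo, hi = np.percentile(rs, [2.5, 97.5])
    return hi - lo

for r in (0.2, 0.8):
    print(r, round(ci_amplitude(r, 100), 2))
```

At the same sample size, the interval for a weak correlation (r = 0.2) is several times wider than for a strong one (r = 0.8), mirroring the abstract's conclusion that weakly related variable pairs drive the required sample size.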

17. Sample size for collecting germplasms–a polyploid model with ...

Numerous expressions/results developed for germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to ...

18. Sample size for collecting germplasms – a polyploid model with ...

Unknown

germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate.

19. Research Note Pilot survey to assess sample size for herbaceous ...

African Journals Online (AJOL)

A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

20. Determining sample size for assessing species composition in ...

African Journals Online (AJOL)

Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

1. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance

OpenAIRE

Timothy M Morgan; Case, L. Douglas

2013-01-01

In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time.

2. Sample Size Determinations for the Two Rater Kappa Statistic.

Science.gov (United States)

Flack, Virginia F.; And Others

1988-01-01

A method is presented for determining sample size that will achieve a pre-specified bound on confidence interval width for the interrater agreement measure "kappa." The same results can be used when a pre-specified power is desired for testing hypotheses about the value of kappa. (Author/SLD)

3. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

Science.gov (United States)

Duncanson, L.; Rourke, O.; Dubayah, R.

2015-11-01

Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 and more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation.

4. Sample size determination for logistic regression on a logit-normal distribution.

Science.gov (United States)

Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

2017-06-01

Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

5. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

Directory of Open Access Journals (Sweden)

Wang Jelai

2006-02-01

Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
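
The single-hypothesis calculation mentioned in the background can be sketched with the standard normal-approximation sample size formula for comparing two group means; this is the textbook calculation, not the PowerAtlas method itself, and the inputs are illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means: n = 2 * (z_{1-a/2} + z_{1-b})^2 * (sigma/delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

# Detect a half-SD difference (delta = 1, sigma = 2) with 80% power.
print(n_per_group(delta=1.0, sigma=2.0))  # 63
```

Per-gene variance estimates from pilot data would feed `sigma`; the multiple-testing adjustment that motivates the PowerAtlas is what this single-hypothesis sketch leaves out.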

6. GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS

Energy Technology Data Exchange (ETDEWEB)

Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.

2013-11-12

This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1 g cm⁻³). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than based on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.
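
The relation between physical particle size, density, and shape factor that underlies the unit-density simplification can be sketched with the standard aerodynamic diameter formula from Hinds (1999). This is a minimal illustration, not the report's R script; the sand-like density of 2650 kg/m³ is an assumed example value:

```python
from math import sqrt

RHO_0 = 1000.0  # unit density (water), kg/m^3

def aerodynamic_diameter(d_e, rho_p, chi=1.0):
    """Aerodynamic diameter d_a = d_e * sqrt(rho_p / (chi * rho_0)),
    with volume-equivalent diameter d_e (m), particle density rho_p
    (kg/m^3), and dynamic shape factor chi (Hinds 1999)."""
    return d_e * sqrt(rho_p / (chi * RHO_0))

# A unit-density sphere is its own aerodynamic diameter.
print(aerodynamic_diameter(5e-6, 1000.0))  # 5e-06

# 10 um sand-like particle (assumed 2650 kg/m^3, chi = 1.57 for sand).
print(aerodynamic_diameter(10e-6, 2650.0, chi=1.57))  # roughly 1.3e-05 m
```

Treating all particles as unit-density spheres (chi = 1, rho_p = rho_0) collapses this to d_a = d_e, which is the simplification the AMAD convention buys.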

7. Aerosol Sampling Bias from Differential Electrostatic Charge and Particle Size

Science.gov (United States)

Jayjock, Michael Anthony

Lack of reliable epidemiological data on long term health effects of aerosols is due in part to inadequacy of sampling procedures and the attendant doubt regarding the validity of the concentrations measured. Differential particle size has been widely accepted and studied as a major potential biasing effect in the sampling of such aerosols. However, relatively little has been done to study the effect of electrostatic particle charge on aerosol sampling. The objective of this research was to investigate the possible biasing effects of differential electrostatic charge, particle size and their interaction on the sampling accuracy of standard aerosol measuring methodologies. Field studies were first conducted to determine the levels and variability of aerosol particle size and charge at two manufacturing facilities making acrylic powder. The field work showed that the particle mass median aerodynamic diameter (MMAD) varied by almost an order of magnitude (4-34 microns) while the aerosol surface charge was relatively stable (0.6-0.9 μC/m²). The second part of this work was a series of laboratory experiments in which aerosol charge and MMAD were manipulated in a 2^n factorial design with the percentage of sampling bias for various standard methodologies as the dependent variable. The experiments used the same friable acrylic powder studied in the field work plus two size populations of ground quartz as a nonfriable control. Despite some ill conditioning of the independent variables due to experimental difficulties, statistical analysis has shown aerosol charge (at levels comparable to those measured in workroom air) is capable of having a significant biasing effect. Physical models consistent with the sampling data indicate that the level and bipolarity of the aerosol charge are determining factors in the extent and direction of the bias.

8. Effects of sample size on KERNEL home range estimates

Science.gov (United States)

Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

1999-01-01

Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.

9. 40 CFR 91.419 - Raw emission sampling calculations.

Science.gov (United States)

2010-07-01

... following equations are used to determine the weighted emission values for the test engine: ER04oc96.015...) Pi = Average power measured during mode i, , calculated according to the formula given in § 91.423(b....410(a) Pi = Average power measured during mode i, , calculated according to the formula given in § 91...

10. Optimization of finite-size errors in finite-temperature calculations of unordered phases.

Science.gov (United States)

Iyer, Deepak; Srednicki, Mark; Rigol, Marcos

2015-06-01

It is common knowledge that the microcanonical, canonical, and grand-canonical ensembles are equivalent in thermodynamically large systems. Here, we study finite-size effects in the latter two ensembles. We show that contrary to naive expectations, finite-size errors are exponentially small in grand canonical ensemble calculations of translationally invariant systems in unordered phases at finite temperature. Open boundary conditions and canonical ensemble calculations suffer from finite-size errors that are only polynomially small in the system size. We further show that finite-size effects are generally smallest in numerical linked cluster expansions. Our conclusions are supported by analytical and numerical analyses of classical and quantum systems.

11. A Simulated Experiment for Sampling Soil Microarthropods to Reduce Sample Size

OpenAIRE

Tamura, Hiroshi

1987-01-01

An experiment was conducted to examine a possibility of reducing the necessary sample size in a quantitative survey on soil microarthropods, using soybeans instead of animals. An artificially provided, intensely aggregated distribution pattern of soybeans was easily transformed to the random pattern by stirring the substrate, which is soil in a large cardboard box. This enabled the necessary sample size to be greatly reduced without sacrificing the statistical reliability. A new practical met...

12. Sample size determination for longitudinal designs with binary response.

Science.gov (United States)

Kapur, Kush; Bhaumik, Runa; Tang, X Charlene; Hur, Kwan; Reda, Domenic J; Bhaumik, Dulal K

2014-09-28

In this article, we develop appropriate statistical methods for determining the required sample size while comparing the efficacy of an intervention to a control with repeated binary response outcomes. Our proposed methodology incorporates the complexity of the hierarchical nature of underlying designs and provides solutions when varying attrition rates are present over time. We explore how the between-subject variability and attrition rates jointly influence the computation of sample size formula. Our procedure also shows how efficient estimation methods play a crucial role in power analysis. A practical guideline is provided when information regarding individual variance component is unavailable. The validity of our methods is established by extensive simulation studies. Results are illustrated with the help of two randomized clinical trials in the areas of contraception and insomnia. Copyright © 2014 John Wiley & Sons, Ltd.

13. Effects of sample size on the second magnetization peak in ...

*E-mail: yeshurun@mail.biu.ac.il. Abstract. Effects of sample size on the second magnetization peak (SMP) in Bi2Sr2CaCu2O8+δ crystals are ... a termination of the measured transition line at Tl, typically 17–20 K (see figure 1). The obscuring and eventual disappearance of the SMP with decreasing temperatures has been ...

14. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

OpenAIRE

Duncanson, L.; Rourke, O.; Dubayah, R.

2015-01-01

Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height a...

15. Simple and multiple linear regression: sample size considerations.

Science.gov (United States)

Hanley, James A

2016-11-01

The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

16. New shooting algorithms for transition path sampling: centering moves and varied-perturbation sizes for improved sampling.

Science.gov (United States)

Rowley, Christopher N; Woo, Tom K

2009-12-21

Transition path sampling has been established as a powerful tool for studying the dynamics of rare events. The trajectory generation moves of this Monte Carlo procedure, shooting moves and shifting modes, were developed primarily for rate constant calculations, although this method has been more extensively used to study the dynamics of reactive processes. We have devised and implemented three alternative trajectory generation moves for use with transition path sampling. The centering-shooting move incorporates a shifting move into a shooting move, which centers the transition period in the middle of the trajectory, eliminating the need for shifting moves and generating an ensemble where the transition event consistently occurs near the middle of the trajectory. We have also developed varied-perturbation size shooting moves, wherein smaller perturbations are made if the shooting point is far from the transition event. The trajectories generated using these moves decorrelate significantly faster than with conventional, constant sized perturbations. This results in an increase in the statistical efficiency by a factor of 2.5-5 when compared to the conventional shooting algorithm. On the other hand, the new algorithm breaks detailed balance and introduces a small bias in the transition time distribution. We have developed a modification of this varied-perturbation size shooting algorithm that preserves detailed balance, albeit at the cost of decreased sampling efficiency. Both varied-perturbation size shooting algorithms are found to have improved sampling efficiency when compared to the original constant perturbation size shooting algorithm.

17. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

Directory of Open Access Journals (Sweden)

Malhotra Rajeev

2010-01-01

Full Text Available Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily either at their convenience, or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and 95% confidence level using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram could be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with required precision and 95% confidence level. Sample size at 90% and 99% confidence level, respectively, can also be obtained by just multiplying 0.70 and 1.75 with the number obtained for the 95% confidence level. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. This can also be applied for reverse calculations. This nomogram is not applicable for testing of the hypothesis set-up and is applicable only when both diagnostic test and gold standard results have a dichotomous category.
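
The calculation the nomogram encodes is the standard normal-approximation sample size for estimating a proportion, scaled by disease prevalence (Buderer's approach). A minimal sketch, with illustrative inputs; the 90%/99% shortcut multipliers quoted above follow from the ratio of squared z-values:

```python
from math import ceil
from statistics import NormalDist

def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
    """Total subjects needed so that the anticipated sensitivity is
    estimated within +/- `precision` at the given confidence level:
    n_diseased = z^2 * Se * (1 - Se) / precision^2, then scale by
    prevalence to get the total sample (Buderer-type formula)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
    return ceil(n_diseased / prevalence)

# Anticipated sensitivity 0.80, absolute precision 0.05, prevalence 0.10.
print(n_for_sensitivity(0.80, 0.05, 0.10))  # 2459
```

For specificity the same formula applies with `1 - prevalence` in place of `prevalence`, since specificity is estimated among the non-diseased.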

18. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

Science.gov (United States)

Luh, Wei-Ming; Guo, Jiin-Huarng

2016-01-01

This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

19. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

Directory of Open Access Journals (Sweden)

Stefanović Milena

2013-01-01

Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of the new representative sample on the basis of the variability of chemical content of the initial sample, using a whitebark pine population as an example. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, No. OI-173011, No. TR-37002 and No. III-43007]

20. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

Energy Technology Data Exchange (ETDEWEB)

Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

2013-04-27

This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account

1. A behavioural Bayes approach to the determination of sample size for clinical trials considering efficacy and safety: imbalanced sample size in treatment groups.

Science.gov (United States)

Kikuchi, Takashi; Gittins, John

2011-08-01

The behavioural Bayes approach to sample size determination for clinical trials assumes that the number of subsequent patients switching to a new drug from the current drug depends on the strength of the evidence for efficacy and safety that was observed in the clinical trials. The optimal sample size is the one which maximises the expected net benefit of the trial. The approach has been developed in a series of papers by Pezeshk and the present authors (Gittins JC, Pezeshk H. A behavioral Bayes method for determining the size of a clinical trial. Drug Information Journal 2000; 34: 355-63; Gittins JC, Pezeshk H. How Large should a clinical trial be? The Statistician 2000; 49(2): 177-87; Gittins JC, Pezeshk H. A decision theoretic approach to sample size determination in clinical trials. Journal of Biopharmaceutical Statistics 2002; 12(4): 535-51; Gittins JC, Pezeshk H. A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 2002; 36: 143-50; Kikuchi T, Pezeshk H, Gittins J. A Bayesian cost-benefit approach to the determination of sample size in clinical trials. Statistics in Medicine 2008; 27(1): 68-82; Kikuchi T, Gittins J. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety. Statistics in Medicine 2009; 28(18): 2293-306; Kikuchi T, Gittins J. A Bayesian procedure for cost-benefit evaluation of a new drug in multi-national clinical trials. Statistics in Medicine 2009 (Submitted)). The purpose of this article is to provide a rationale for experimental designs which allocate more patients to the new treatment than to the control group. The model uses a logistic weight function, including an interaction term linking efficacy and safety, which determines the number of patients choosing the new drug, and hence the resulting benefit. A Monte Carlo simulation is employed for the calculation. Having a larger group of patients on the new drug in general

2. MetSizeR: selecting the optimal sample size for metabolomic studies using an analysis based approach

OpenAIRE

Nyamundanda, Gift; Gormley, Isobel Claire; Fan, Yue; Gallagher, William M.; Brennan, Lorraine

2013-01-01

Background: Determining sample sizes for metabolomic experiments is important but due to the complexity of these experiments, there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, currently existing sample size estimation approaches which rely on pilot data can not be applied. Results: In this article, an analysis based approach called MetSizeR is developed to estimate sample size for metabolomic experime...

3. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

Science.gov (United States)

2017-12-01

During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
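
The success run sample sizes quoted above follow from the Bayes success run theorem, n = ln(1 − C)/ln(R) for confidence C and reliability R. A minimal sketch, assuming C = 95% for all three risk levels, which reproduces the quoted numbers:

```python
from math import ceil, log

def success_run_n(reliability, confidence=0.95):
    """Bayes success run theorem (uniform prior): number of consecutive
    conforming units needed to claim `reliability` with `confidence`."""
    return ceil(log(1 - confidence) / log(reliability))

# High- (99%), medium- (95%), and low-risk (90%) reliability levels.
for r in (0.99, 0.95, 0.90):
    print(success_run_n(r))  # 299, 59, 29
```

The formula makes the abstract's point explicit: as the required reliability rises (defect rate falls), the log denominator shrinks toward zero, so the sample size needed for the same confidence grows sharply.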

4. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

Science.gov (United States)

Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

2016-09-01

In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous

5. It's in the Sample: The Effects of Sample Size and Sample Diversity on the Breadth of Inductive Generalization

Science.gov (United States)

Lawson, Chris A.; Fisher, Anna V.

2011-01-01

Developmental studies have provided mixed evidence with regard to the question of whether children consider sample size and sample diversity in their inductive generalizations. Results from four experiments with 105 undergraduates, 105 school-age children (M = 7.2 years), and 105 preschoolers (M = 4.9 years) showed that preschoolers made a higher…

6. Calculated Grain Size-Dependent Vacancy Supersaturation and its Effect on Void Formation

DEFF Research Database (Denmark)

Singh, Bachu Narain; Foreman, A. J. E.

1974-01-01

In order to study the effect of grain size on void formation during high-energy electron irradiations, the steady-state point defect concentration and vacancy supersaturation profiles have been calculated for three-dimensional spherical grains up to three microns in size. In the calculations of vacancy supersaturation as a function of grain size, the effects of internal sink density and the dislocation preference for interstitial attraction have been included. The computations show that the level of vacancy supersaturation achieved in a grain decreases with decreasing grain size. The grain size dependence of the maximum vacancy supersaturation in the centre of the grains is found to be very similar to the grain size dependence of the maximum void number density and void volume swelling measured in the central regions of austenitic stainless steel grains. This agreement reinforces the interpretation...

7. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

Science.gov (United States)

Mütze, Tobias; Friede, Tim

2017-10-15

In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
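The one-sample variance estimator mentioned above can be sketched for the simpler two-arm case; this is an illustrative simplification, not the article's three-arm gold-standard procedure or the Xing-Ganju estimator.

```python
import numpy as np
from statistics import NormalDist

def blinded_reestimated_n(pooled_blinded, delta, alpha=0.025, beta=0.2):
    """Internal-pilot sample size re-estimation, two-arm sketch.

    Re-solves the standard per-group formula using the blinded
    one-sample variance (all arms pooled, treatment labels unknown).
    Lumping arms inflates the variance by the between-group spread,
    which is one reason this estimator tends to overpower.
    """
    s2 = np.var(pooled_blinded, ddof=1)   # blinded variance estimate
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha), z.inv_cdf(1 - beta)
    # per-group n for a one-sided test at level alpha, power 1 - beta
    return int(np.ceil(2 * (za + zb) ** 2 * s2 / delta ** 2))
```

An inflation factor in the article's sense would simply multiply the returned n by a constant computable before the trial.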

8. Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.

Science.gov (United States)

Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald

2017-04-01

The Mercury Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury in thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (exposure to space weathering processes), and proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and five size fractions: 60%) causes a 'flattening' of the spectrum, with reduced reflectance in the Reststrahlen Band region (RB) by as much as 30% in comparison to samples that are dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift in the Christiansen Feature (CF - the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than that of the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra most closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each

9. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

Science.gov (United States)

2010-01-01

..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

10. 40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations

Science.gov (United States)

2010-07-01

... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Calculations II... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. II Appendix II to Part 600—Sample Fuel Economy Calculations (a) This sample fuel economy calculation is applicable to...

11. Sample size requirement in analytical studies for similarity assessment.

Science.gov (United States)

Chow, Shein-Chung; Song, Fuyu; Bai, He

2017-01-01

For the assessment of biosimilar products, the FDA recommends a stepwise approach for obtaining the totality-of-the-evidence for assessing biosimilarity between a proposed biosimilar product and its corresponding innovative biologic product. The stepwise approach starts with analytical studies for assessing similarity in critical quality attributes (CQAs), which are relevant to clinical outcomes at various stages of the manufacturing process. For CQAs that are the most relevant to clinical outcomes, the FDA requires an equivalence test be performed for similarity assessment based on an equivalence acceptance criterion (EAC) that is obtained using a single test value of some selected reference lots. In practice, we often have extremely imbalanced numbers of reference and test lots available for the establishment of EAC. In this case, to assist the sponsors, the FDA proposed an idea for determining the number of reference lots and the number of test lots required in order not to have imbalanced sample sizes when establishing EAC for the equivalence test based on extensive simulation studies. Along this line, this article not only provides statistical justification of Dong, Tsong, and Weng's proposal, but also proposes an alternative method for sample size requirement for the Tier 1 equivalence test.

12. Sample Size for Assessing Agreement between Two Methods of Measurement by Bland-Altman Method.

Science.gov (United States)

Lu, Meng-Jie; Zhong, Wei-Hua; Liu, Yu-Xiu; Miao, Hua-Zhang; Li, Yong-Chang; Ji, Mu-Huo

2016-11-01

The Bland-Altman method has been widely used for assessing agreement between two methods of measurement. However, sample size estimation for it has remained an unresolved problem. We propose a new method of sample size estimation for Bland-Altman agreement assessment. According to the Bland-Altman method, the conclusion on agreement is made based on the width of the confidence interval for the LOAs (limits of agreement) in comparison to a predefined clinical agreement limit. Under the theory of statistical inference, formulae for sample size estimation are derived, which depend on the pre-determined levels of α and β, the mean and standard deviation of the differences between the two measurements, and the predefined limits. With this new method, sample sizes are calculated under different parameter settings which occur frequently in method comparison studies, and Monte-Carlo simulation is used to obtain the corresponding powers. The Monte-Carlo simulations showed that the achieved powers coincide with the pre-determined levels, validating the correctness of the method. This method of sample size estimation can be applied in the Bland-Altman method to assess agreement between two methods of measurement.
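A precision-style criterion along these lines can be sketched as follows. This is a simplified illustration using Bland and Altman's approximate variance for a limit of agreement, not the article's exact formulae (which bring in β through a power requirement); all numerical inputs are hypothetical.

```python
import math

def loa_halfwidth(sd, n, z=1.96):
    """Approximate 95% CI half-width of a limit of agreement.

    Bland & Altman: Var(LOA) ~ (1/n + z^2 / (2(n-1))) * sd^2,
    where sd is the SD of the paired differences.
    """
    se = sd * math.sqrt(1.0 / n + z ** 2 / (2.0 * (n - 1)))
    return z * se

def min_n_for_agreement(mean_diff, sd, clinical_limit, n_max=10000):
    """Smallest n at which the outer bound of the upper LOA's CI
    stays inside the predefined clinical agreement limit."""
    upper_loa = mean_diff + 1.96 * sd
    for n in range(3, n_max):
        if upper_loa + loa_halfwidth(sd, n) <= clinical_limit:
            return n
    return None
```

For symmetric problems the lower LOA gives the same n; otherwise take the larger of the two.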

13. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

OpenAIRE

Bruno Giacomini Sari; Alessandro Dal’Col Lúcio; Cinthya Souza Santana; Dionatan Ketzer Krysczun; André Luís Tischler; Lucas Drebes

2017-01-01

ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix b...

14. Sample size for regression analyses of theory of planned behaviour studies: case of prescribing in general practice.

Science.gov (United States)

Rashidian, Arash; Miles, Jeremy; Russell, Daphne; Russell, Ian

2006-11-01

Interest has been growing in the use of the theory of planned behaviour (TPB) in health services research. Sample sizes in published TPB studies range from fewer than 50 to more than 750, without sample size calculations. We estimate the sample size for a multi-stage random survey of prescribing intention and actual prescribing for asthma in British general practice. To our knowledge, this is the first systematic attempt to determine sample size for a TPB survey. We use two different approaches: reported values of regression models' goodness-of-fit (the lambda method) and zero-order correlations (the variance inflation factor or VIF method). The intra-cluster correlation coefficient (ICC) is estimated and a socioeconomic variable is used for stratification. We perform sensitivity analysis to estimate the effects of our decisions on the final sample size. The VIF method is more sensitive to the requirements of a TPB study. Given a correlation of .25 between intention and behaviour, and of .4 between intention and perceived behavioural control, the proposed sample size is 148. We estimate the ICC for asthma prescribing to be around 0.07. If 10 general practitioners were sampled per cluster, the sample size would be 242. It is feasible to perform sophisticated sample size calculations for a TPB study. The VIF is the appropriate method. Our approach can be used, with adjustments, in other settings and for other regression models.

15. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

Science.gov (United States)

van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

2017-12-04

Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this method is important for health workforce planners to know if they want to apply it to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs grew from one to 50. Beyond that point, precision continued to increase, but by less for each additional GP. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
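The two sources of fluctuation described above can be combined in a simple variance-components sketch. The inputs are hypothetical; the study's own CIs came from equations and simulations on the empirical data.

```python
import math

def ci_halfwidth(sd_between, sd_within, n_gps, m_measures, z=1.96):
    """CI half-width for mean weekly working hours when both the
    sampling of GPs (between-person variance) and the per-GP
    measurement fluctuation (within-person variance) contribute."""
    var_mean = (sd_between ** 2 / n_gps
                + sd_within ** 2 / (n_gps * m_measures))
    return z * math.sqrt(var_mean)
```

More measurements per GP shrink only the second term, which is why extra measurements can substitute for extra participants, but only up to the floor set by between-person variation.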

16. Optimal sample size determinations from an industry perspective based on the expected value of information.

Science.gov (United States)

Willan, Andrew R

2008-01-01

Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as type I and II errors. As an alternative, taking a societal perspective, and using the expected value of information based on Bayesian decision theory, a number of authors have recently shown how to determine the sample size that maximizes the expected net gain, i.e., the difference between the cost of the trial and the value of the information gained from the results. Other authors have proposed Bayesian methods to determine sample sizes from an industry perspective. The purpose of this article is to propose a Bayesian approach to sample size calculations from an industry perspective that attempts to determine the sample size that maximizes expected profit. A model is proposed for expected total profit that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, discount rate, and the relationship between the results and the probability of regulatory approval. The expected value of information provided by trial data is related to the increase in expected profit from increasing the probability of regulatory approval. The methods are applied to an example, including an examination of robustness. The model is extended to consider market share as a function of observed treatment effect. The use of methods based on the expected value of information can provide, from an industry perspective, robust sample size solutions that maximize the difference between the expected cost of the trial and the expected value of information gained from the results. The method is only as good as the model for expected total profit. Although the model probably has all the right elements, it assumes that market share, per-patient profit, and incidence are insensitive to trial results. The method relies on the central limit theorem which assumes that the sample sizes involved ensure that the relevant test statistics
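A stripped-down Monte-Carlo version of the expected-net-gain idea can be sketched as follows. All parameters are hypothetical, and the model omits the article's industry-specific elements (market share, discount rate, probability of regulatory approval); it simply values a trial by how much the adopt/reject decision improves when made on trial data rather than on the prior.

```python
import numpy as np

def expected_net_gain(n, prior_mean, prior_sd, sigma,
                      value_per_unit, fixed_cost, cost_per_patient,
                      n_sim=20000, seed=0):
    """Monte-Carlo expected net gain of a two-arm trial, n per arm.

    Decision rule: adopt the new treatment iff the trial estimate is
    positive. The value of information is the expected incremental
    benefit of deciding with data instead of with the prior alone.
    """
    rng = np.random.default_rng(seed)
    theta = rng.normal(prior_mean, prior_sd, n_sim)     # true effects
    est = rng.normal(theta, sigma * np.sqrt(2.0 / n))   # trial estimates
    adopt_prior = prior_mean > 0                        # decision without a trial
    gain_with = np.where(est > 0, theta, 0.0)
    gain_without = theta if adopt_prior else np.zeros_like(theta)
    evsi = value_per_unit * (gain_with - gain_without).mean()
    return evsi - fixed_cost - cost_per_patient * 2 * n
```

Scanning n and taking the argmax reproduces the qualitative shape the article relies on: the expected net gain first rises with n and then falls once accrual costs dominate the information value.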

17. Sample size determination for a t test given a t value from a previous study: A FORTRAN 77 program.

Science.gov (United States)

Gillett, R

2001-11-01

When uncertain about the magnitude of an effect, researchers commonly substitute in the standard sample-size-determination formula an estimate of effect size derived from a previous experiment. A problem with this approach is that the traditional sample-size-determination formula was not designed to deal with the uncertainty inherent in an effect-size estimate. Consequently, estimate-substitution in the traditional sample-size-determination formula can lead to a substantial loss of power. A method of sample-size determination designed to handle uncertainty in effect-size estimates is described. The procedure uses the t value and sample size from a previous study, which might be a pilot study or a related study in the same area, to establish a distribution of probable effect sizes. The sample size to be employed in the new study is that which supplies an expected power of the desired amount over the distribution of probable effect sizes. A FORTRAN 77 program is presented that permits swift calculation of sample size for a variety of t tests, including independent t tests, related t tests, t tests of correlation coefficients, and t tests of multiple regression b coefficients.
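The core idea, choosing the n whose power averaged over plausible effect sizes meets the target, can be sketched in Python rather than FORTRAN. The crude normal approximation to the effect-size distribution below is an assumption for illustration; Gillett's program derives that distribution more carefully.

```python
import numpy as np
from scipy import stats

def expected_power(n, effect_sizes, alpha=0.05):
    """Mean power of a two-sided independent t test (n per group),
    averaged over an array of plausible standardized effect sizes."""
    df = 2 * n - 2
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    ncp = effect_sizes * np.sqrt(n / 2.0)     # noncentrality per draw
    power = (1 - stats.nct.cdf(tcrit, df, ncp)
             + stats.nct.cdf(-tcrit, df, ncp))
    return power.mean()

def sample_size_from_pilot(t_pilot, n_pilot, target=0.8,
                           n_draws=2000, seed=0):
    """Propagate pilot uncertainty: draw effect sizes consistent with
    the pilot t value (n_pilot per group), then find the smallest n
    whose expected power reaches the target."""
    rng = np.random.default_rng(seed)
    d_hat = t_pilot * np.sqrt(2.0 / n_pilot)  # effect-size estimate
    se = np.sqrt(2.0 / n_pilot)               # rough large-sample SE
    draws = rng.normal(d_hat, se, n_draws)
    for n in range(3, 10000):
        if expected_power(n, draws) >= target:
            return n
    return None
```

Because power is averaged over the whole effect-size distribution, the resulting n exceeds what naive estimate-substitution of d_hat alone would give, which is exactly the power loss the abstract describes.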

18. 10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations

Science.gov (United States)

2010-01-01

... 10 Energy 3 2010-01-01 2010-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle is...

19. 46 CFR 280.11 - Example of calculation and sample report.

Science.gov (United States)

2010-10-01

... 46 Shipping 8 2010-10-01 2010-10-01 false Example of calculation and sample report. 280.11 Section... VESSELS AND OPERATORS LIMITATIONS ON THE AWARD AND PAYMENT OF OPERATING-DIFFERENTIAL SUBSIDY FOR LINER OPERATORS § 280.11 Example of calculation and sample report. (a) Example of calculation. The provisions of...

20. Determining optimal sample sizes for multi-stage randomized clinical trials using value of information methods.

Science.gov (United States)

Willan, Andrew; Kowgier, Matthew

2008-01-01

Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the cost of doing the trial and the value of the information gained from the results. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.

1. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

Science.gov (United States)

Allen, Carlton C.

2011-01-01

The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by

2. Cavern/Vault Disposal Concepts and Thermal Calculations for Direct Disposal of 37-PWR Size DPCs

Energy Technology Data Exchange (ETDEWEB)

Hardin, Ernest [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Clayton, Daniel James [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

2015-03-01

This report provides two sets of calculations not presented in previous reports on the technical feasibility of spent nuclear fuel (SNF) disposal directly in dual-purpose canisters (DPCs): 1) thermal calculations for reference disposal concepts using larger 37-PWR size DPC-based waste packages, and 2) analysis and thermal calculations for underground vault-type storage and eventual disposal of DPCs. The reader is referred to the earlier reports (Hardin et al. 2011, 2012, 2013; Hardin and Voegele 2013) for contextual information on DPC direct disposal alternatives.

3. Implications of sampling design and sample size for national carbon accounting systems.

Science.gov (United States)

Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

2011-11-08

Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, as well as population variability, the percent standard error over total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and in the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties associated with the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.

4. A comparison of different estimation methods for simulation-based sample size determination in longitudinal studies

Science.gov (United States)

Bahçecitapar, Melike Kaya

2017-07-01

Determining the sample size necessary for correct results is a crucial step in the design of longitudinal studies. Simulation-based statistical power calculation is a flexible approach to determining the number of subjects and repeated measures of longitudinal studies, especially in complex designs. Several papers have provided sample size/statistical power calculations for longitudinal studies incorporating data analysis by linear mixed effects models (LMMs). In this study, different estimation methods (based on maximum likelihood (ML) and restricted ML) with different iterative algorithms (quasi-Newton and ridge-stabilized Newton-Raphson) are compared in fitting LMMs to generated longitudinal data for simulation-based power calculation. This study examines the statistical power of the F-test statistic for the parameter representing the difference in responses over time between two treatment groups in an LMM with a longitudinal covariate. The most common procedures in SAS, such as PROC GLIMMIX using the quasi-Newton algorithm and PROC MIXED using the ridge-stabilized algorithm, are used for analyzing the generated longitudinal data in the simulation. Both procedures are seen to produce similar results. Moreover, the magnitude of the parameter of interest in the simulation model is found to affect the statistical power calculations in both procedures substantially.

5. Threshold-dependent sample sizes for selenium assessment with stream fish tissue.

Science.gov (United States)

Hitt, Nathaniel P; Smith, David R

2015-01-01

Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α=0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites
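The parametric-bootstrap power calculation described above can be sketched as follows. The gamma parameterization via a fixed coefficient of variation is a placeholder assumption (the study fitted shape and scale to an empirical mean-to-variance relationship), and the normal critical value stands in for the exact test used.

```python
import numpy as np
from statistics import NormalDist

def detection_power(n_fish, threshold, true_mean, cv=0.5,
                    alpha=0.05, n_boot=2000, seed=1):
    """Probability that a one-sided test on n_fish gamma-distributed
    whole-body Se concentrations declares the site mean above the
    management threshold (parametric bootstrap)."""
    rng = np.random.default_rng(seed)
    shape = 1.0 / cv ** 2          # gamma via mean and CV
    scale = true_mean / shape
    zcrit = NormalDist().inv_cdf(1 - alpha)
    hits = 0
    for _ in range(n_boot):
        x = rng.gamma(shape, scale, n_fish)
        se = x.std(ddof=1) / np.sqrt(n_fish)
        if (x.mean() - threshold) / se > zcrit:
            hits += 1
    return hits / n_boot
```

Scanning n_fish for the smallest value with power at or above 0.8 reproduces the threshold-dependent sample size requirement: higher thresholds need more fish because the gamma variance grows with the mean.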

7. Sample Size of One: Operational Qualitative Analysis in the Classroom

Directory of Open Access Journals (Sweden)

John Hoven

2015-10-01

Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one). These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.

8. Sample size determinations for Welch's test in one-way heteroscedastic ANOVA.

Science.gov (United States)

Jan, Show-Li; Shieh, Gwowen

2014-02-01

For one-way fixed effects ANOVA, it is well known that the conventional F test of the equality of means is not robust to unequal variances, and numerous methods have been proposed for dealing with heteroscedasticity. On the basis of extensive empirical evidence of Type I error control and power performance, Welch's procedure is frequently recommended as the major alternative to the ANOVA F test under variance heterogeneity. To enhance its practical usefulness, this paper considers an important aspect of Welch's method in determining the sample size necessary to achieve a given power. Simulation studies are conducted to compare two approximate power functions of Welch's test for their accuracy in sample size calculations over a wide variety of model configurations with heteroscedastic structures. The numerical investigations show that Levy's (1978a) approach is clearly more accurate than the formula of Luh and Guo (2011) for the range of model specifications considered here. Accordingly, computer programs are provided to implement the technique recommended by Levy for power calculation and sample size determination within the context of the one-way heteroscedastic ANOVA model. © 2013 The British Psychological Society.
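For the two-group special case, the simulation route to a sample size can be sketched as follows. The paper's focus is the k-group Welch ANOVA with the Levy and Luh-Guo approximate power functions; this sketch instead brute-forces power for Welch's t test, which scipy exposes via equal_var=False.

```python
import numpy as np
from scipy import stats

def welch_power(n_per_group, means, sds, alpha=0.05,
                n_sim=2000, seed=0):
    """Simulated power of Welch's t test under unequal variances."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(means[0], sds[0], n_per_group)
        b = rng.normal(means[1], sds[1], n_per_group)
        _, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's test
        if p < alpha:
            hits += 1
    return hits / n_sim

def welch_sample_size(means, sds, target=0.8, n_max=500):
    """Smallest per-group n whose simulated power meets the target."""
    for n in range(2, n_max):
        if welch_power(n, means, sds) >= target:
            return n
    return None
```

Unbalanced allocation (more observations in the higher-variance group) generally lowers the required total n; the balanced design above keeps the sketch minimal.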

9. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

DEFF Research Database (Denmark)

Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

2009-01-01

/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study...... of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced...

10. Air and smear sample calculational tool for Fluor Hanford Radiological control

Energy Technology Data Exchange (ETDEWEB)

BAUMANN, B.L.

2003-09-24

A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and smear counting as outlined in HNF-13536, Section 5.2.7, Analyzing Air and smear Samples. This document reports on the design and testing of the calculation tool.

11. 40 CFR 761.243 - Standard wipe sample method and size.

Science.gov (United States)

2010-07-01

... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Standard wipe sample method and size... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas...

12. 40 CFR Appendix III to Part 600 - Sample Fuel Economy Label Calculation

Science.gov (United States)

2010-07-01

... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Label Calculation...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. III Appendix III to Part 600—Sample Fuel Economy Label Calculation Suppose that a manufacturer called Mizer...

13. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

Science.gov (United States)

Guo, Jiin-Huarng; Luh, Wei-Ming

2009-05-01

When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
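
Yuen's statistic itself is straightforward to compute. The stdlib-only sketch below follows the usual textbook definition of the 20%-trimmed-mean test (it does not implement the authors' cost-optimal allocation formulas):

```python
from math import floor, sqrt

def trimmed_mean(x, prop=0.2):
    x = sorted(x)
    g = floor(prop * len(x))          # observations trimmed from each tail
    core = x[g:len(x) - g]
    return sum(core) / len(core)

def winsorized_var(x, prop=0.2):
    x = sorted(x)
    n, g = len(x), floor(prop * len(x))
    w = [x[g]] * g + x[g:n - g] + [x[n - g - 1]] * g   # winsorize the tails
    m = sum(w) / n
    return sum((v - m) ** 2 for v in w) / (n - 1)

def yuen_statistic(x, y, prop=0.2):
    # Yuen's t = (xt - yt) / sqrt(d_x + d_y), where
    # d_j = (n_j - 1) * s_wj^2 / (h_j * (h_j - 1)) and h_j is the
    # number of observations remaining after trimming.
    hx = len(x) - 2 * floor(prop * len(x))
    hy = len(y) - 2 * floor(prop * len(y))
    dx = (len(x) - 1) * winsorized_var(x, prop) / (hx * (hx - 1))
    dy = (len(y) - 1) * winsorized_var(y, prop) / (hy * (hy - 1))
    return (trimmed_mean(x, prop) - trimmed_mean(y, prop)) / sqrt(dx + dy)

print(trimmed_mean(list(range(1, 11))))  # → 5.5
```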

14. Sample Size Considerations of Prediction-Validation Methods in High-Dimensional Data for Survival Outcomes

Science.gov (United States)

Pang, Herbert; Jung, Sin-Ho

2013-01-01

A variety of prediction methods are used to relate high-dimensional genome data with a clinical outcome using a prediction model. Once a prediction model is developed from a data set, it should be validated using a resampling method or an independent data set. Although the existing prediction methods have been intensively evaluated by many investigators, there has not been a comprehensive study investigating the performance of the validation methods, especially with a survival clinical outcome. Understanding the properties of the various validation methods can allow researchers to perform more powerful validations while controlling for type I error. In addition, a sample size calculation strategy based on these validation methods is lacking. We conduct extensive simulations to examine the statistical properties of these validation strategies. In both simulations and a real data example, we have found that 10-fold cross-validation with permutation gave the best power while controlling type I error close to the nominal level. Based on this, we have also developed a sample size calculation method that will be used to design a validation study with a user-chosen combination of prediction methods. Microarray and genome-wide association studies data are used as illustrations. The power calculation method in this presentation can be used for the design of any biomedical studies involving high-dimensional data and survival outcomes. PMID:23471879
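
The validation strategy the authors favor, cross-validation combined with label permutation, can be illustrated in miniature. The sketch below substitutes a toy one-dimensional nearest-centroid classifier for the high-dimensional survival models in the paper; all names and parameters are illustrative:

```python
import random
from statistics import mean

def cv_score(xs, ys, k=10):
    # k-fold cross-validated accuracy of a nearest-centroid classifier on
    # a single feature (a toy stand-in for a high-dimensional predictor).
    idx = list(range(len(xs)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        c0 = mean(xs[i] for i in train if ys[i] == 0)
        c1 = mean(xs[i] for i in train if ys[i] == 1)
        hits = [int((abs(xs[i] - c1) < abs(xs[i] - c0)) == (ys[i] == 1))
                for i in fold]
        accs.append(mean(hits))
    return mean(accs)

def permutation_p_value(xs, ys, n_perm=200):
    # Re-run the whole CV under shuffled labels to build a null
    # distribution for the observed CV accuracy.
    obs = cv_score(xs, ys)
    shuffled = ys[:]
    exceed = 0
    for _ in range(n_perm):
        random.shuffle(shuffled)
        if cv_score(xs, shuffled) >= obs:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)
```

The permutation step is what controls type I error: the CV accuracy is compared against what the same pipeline achieves when the outcome labels carry no signal.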

15. Sample size allocation for food item radiation monitoring and safety inspection.

Science.gov (United States)

Seto, Mayumi; Uriu, Koichiro

2015-03-01

The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.

16. The effects of focused transducer geometry and sample size on the measurement of ultrasonic transmission properties

Science.gov (United States)

Atkins, T. J.; Humphrey, V. F.; Duck, F. A.; Tooley, M. A.

2011-02-01

The response of two coaxially aligned weakly focused ultrasonic transducers, typical of those employed for measuring the attenuation of small samples using the immersion method, has been investigated. The effects of the sample size on transmission measurements have been analyzed by integrating the sound pressure distribution functions of the radiator and receiver over different limits to determine the size of the region that contributes to the system response. The results enable the errors introduced into measurements of attenuation to be estimated as a function of sample size. A theoretical expression has been used to examine how the transducer separation affects the receiver output. The calculations are compared with an experimental study of the axial response of three unpaired transducers in water. The separation of each transducer pair giving the maximum response was determined, and compared with the field characteristics of the individual transducers. The optimum transducer separation, for accurate estimation of sample properties, was found to fall between the sum of the focal distances and the sum of the geometric focal lengths as this reduced diffraction errors.

17. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

Energy Technology Data Exchange (ETDEWEB)

Coles, Henry C.; Qin, Yong; Price, Phillip N.

2014-11-01

This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a

18. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

Science.gov (United States)

Shieh, Gwowen

2013-01-01

The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

19. SAMPLE SIZE DETERMINATION IN CLINICAL TRIALS BASED ON APPROXIMATION OF VARIANCE ESTIMATED FROM LIMITED PRIMARY OR PILOT STUDIES

Directory of Open Access Journals (Sweden)

B SOLEYMANI

2001-06-01

In many cases, the estimate of variance used to determine sample size in clinical trials derives from limited primary or pilot studies in which the number of samples is small. Since in such cases the estimate may be far from the real variance, the resulting sample size is liable to be smaller or larger than what is really needed. In this article an attempt has been made to give a solution to this problem for the case of the normal distribution. Based on the distribution of (n−1)S²/σ², which is chi-square for normal variables, an appropriate estimate of the variance is determined and used to calculate the sample size. The total probability of ensuring the specified precision and power is also derived. With the method presented here, the probability of attaining the desired precision and power is higher than with the usual method, but the results of the two methods converge as the sample size of the primary study increases.
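
The chi-square adjustment described here can be sketched as follows: inflate the pilot variance to an upper confidence bound using the fact that (n−1)S²/σ² follows a chi-square distribution, then plug that bound into a standard two-group formula. The Wilson–Hilferty approximation below stands in for an exact chi-square quantile and is an implementation convenience, not part of the paper:

```python
from math import ceil, sqrt
from statistics import NormalDist

def chi2_quantile(p, df):
    # Wilson-Hilferty approximation to the chi-square quantile; swap in
    # scipy.stats.chi2.ppf for exact values if SciPy is available.
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * sqrt(2 / (9 * df))) ** 3

def adjusted_n_per_arm(s2_pilot, n_pilot, delta, alpha=0.05, power=0.80,
                       assurance=0.90):
    # Since (n-1)S^2/sigma^2 ~ chi-square(n-1), an upper confidence bound
    # for sigma^2 is (n-1)S^2 / chi2_{1-assurance, n-1}; use it in place
    # of the raw pilot estimate in the usual two-group formula.
    df = n_pilot - 1
    var_upper = df * s2_pilot / chi2_quantile(1 - assurance, df)
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * (za + zb) ** 2 * var_upper / delta ** 2)
```

With a pilot of n = 10 and S² = 1.0 and a target difference of 0.5, the naive formula gives 63 per arm, while the adjusted version gives roughly 136, reflecting how uncertain a 10-subject variance estimate is.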

20. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

Science.gov (United States)

Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

2017-12-01

Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
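
A simplified version of such a power calculation, comparing two voxel-accuracy proportions with an independent-samples normal approximation and ignoring the within-image voxel correlation that the paper's derivation accounts for, might look like:

```python
from math import ceil
from statistics import NormalDist

def n_per_algorithm(p1, p2, alpha=0.05, power=0.80):
    # Sample size per arm to distinguish two accuracy proportions p1, p2
    # with a two-sided z-test (normal approximation to the binomial).
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                / (p1 - p2) ** 2)

# Detecting 90% vs 85% voxel accuracy at 80% power:
print(n_per_algorithm(0.90, 0.85))  # → 683
```

Because voxels within a subject are highly correlated, the effective unit of analysis is the subject, and a realistic design (as in the paper) needs far fewer subjects than this voxel-level count suggests.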

1. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

Science.gov (United States)

Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

2009-10-21

Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.

2. Accelerating potential of mean force calculations for lipid membrane permeation: System size, reaction coordinate, solute-solute distance, and cutoffs

Science.gov (United States)

Nitschke, Naomi; Atkovska, Kalina; Hub, Jochen S.

2016-09-01

Molecular dynamics simulations are capable of predicting the permeability of lipid membranes for drug-like solutes, but the calculations have remained prohibitively expensive for high-throughput studies. Here, we analyze simple measures for accelerating potential of mean force (PMF) calculations of membrane permeation, namely, (i) using smaller simulation systems, (ii) simulating multiple solutes per system, and (iii) using shorter cutoffs for the Lennard-Jones interactions. We find that PMFs for membrane permeation are remarkably robust against alterations of such parameters, suggesting that accurate PMF calculations are possible at strongly reduced computational cost. In addition, we evaluated the influence of the definition of the membrane center of mass (COM), used to define the transmembrane reaction coordinate. Membrane-COM definitions based on all lipid atoms lead to artifacts due to undulations and, consequently, to PMFs dependent on membrane size. In contrast, COM definitions based on a cylinder around the solute lead to size-independent PMFs, down to systems of only 16 lipids per monolayer. In summary, compared to popular setups that simulate a single solute in a membrane of 128 lipids with a Lennard-Jones cutoff of 1.2 nm, the measures applied here yield a speedup in sampling by a factor of ~40, without reducing the accuracy of the calculated PMF.

3. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

Energy Technology Data Exchange (ETDEWEB)

Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

2017-07-15

The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to the effective diameter of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on the patient sizes. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
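
For reference, the geometry-based effective diameter mentioned above is commonly defined (e.g., in AAPM-style size metrics) as the diameter of the circle whose area matches the roughly elliptical patient cross-section; a minimal sketch:

```python
from math import sqrt, pi

def effective_diameter(ap_cm, lat_cm):
    # Diameter of the circle whose area equals that of an ellipse with
    # anteroposterior (AP) and lateral (LAT) axes: sqrt(AP * LAT).
    return sqrt(ap_cm * lat_cm)

def effective_diameter_from_area(area_cm2):
    # Equivalent form when the cross-sectional area is measured directly.
    return 2 * sqrt(area_cm2 / pi)

print(effective_diameter(20, 30))  # ≈ 24.49 cm
```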

4. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

Science.gov (United States)

Algina, James; Olejnik, Stephen

2000-01-01

Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

5. (Sample) size matters! An examination of sample size from the SPRINT trial study to prospectively evaluate reamed intramedullary nails in patients with tibial fractures

NARCIS (Netherlands)

2013-01-01

Inadequate sample size and power in randomized trials can result in misleading findings. This study demonstrates the effect of sample size in a large clinical trial by evaluating the results of the Study to Prospectively evaluate Reamed Intramedullary Nails in Patients with Tibial fractures (SPRINT)

6. Size-fractionated measurement of coarse black carbon particles in deposition samples

Science.gov (United States)

Schultz, E.

In a 1-year field study, particle deposition flux was measured by transparent collection plates. Particle concentration was simultaneously measured with a cascade impactor. Microscopic evaluation of deposition samples provided the discrimination of translucent (mineral or biological) and black carbon particles, i.e. soot agglomerates, fly-ash cenospheres and rubber fragments in the size range from 3 to 50 μm. The deposition samples were collected in two different sampling devices. A wind- and rain-shielded measurement was achieved in the Sigma-2 device. Dry deposition data from this device were used to calculate mass concentrations of the translucent and the black particle fraction separately, approximating particle deposition velocity by Stokes' settling velocity. In mass calculations an error up to 20% has to be considered due to assumed spherical shape and unit density for all particles. Within the limitations of these assumptions, deposition velocities of the distinguished coarse particles were calculated. The results for total particulate matter in this range are in good agreement with those from impactor measurement. The coarse black carbon fraction shows a reduced deposition velocity in comparison with translucent particles. The deviation depends on precipitation amount. Further measurements and structural investigations of black carbon particles are in preparation to verify these results.
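
The Stokes approximation used here for the deposition velocity is easy to reproduce. The sketch below assumes unit-density spheres settling in room-temperature air (the default fluid properties are nominal values, not from the study):

```python
def stokes_velocity(d_m, rho_p=1000.0, rho_f=1.2, mu=1.81e-5, g=9.81):
    # Stokes' settling velocity for a small sphere (valid for Re << 1):
    # v_s = g * d^2 * (rho_p - rho_f) / (18 * mu).
    return g * d_m ** 2 * (rho_p - rho_f) / (18 * mu)

def mass_concentration(deposition_flux, deposition_velocity):
    # Dry deposition flux F = C * v_d, so the airborne concentration
    # is recovered as C = F / v_d.
    return deposition_flux / deposition_velocity

# A 10 um unit-density sphere settles at roughly 3 mm/s in air:
print(stokes_velocity(10e-6))
```

Since v_s scales with d², the assumed spherical shape and unit density dominate the error budget, consistent with the ~20% uncertainty quoted in the abstract.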

7. Forward flux sampling calculation of homogeneous nucleation rates from aqueous NaCl solutions

Science.gov (United States)

Jiang, Hao; Haji-Akbari, Amir; Debenedetti, Pablo G.; Panagiotopoulos, Athanassios Z.

2018-01-01

We used molecular dynamics simulations and the path sampling technique known as forward flux sampling to study homogeneous nucleation of NaCl crystals from supersaturated aqueous solutions at 298 K and 1 bar. Nucleation rates were obtained for a range of salt concentrations for the Joung-Cheatham NaCl force field combined with the Extended Simple Point Charge (SPC/E) water model. The calculated nucleation rates are significantly lower than the available experimental measurements. The estimates for the nucleation rates in this work do not rely on classical nucleation theory, but the pathways observed in the simulations suggest that the nucleation process is better described by classical nucleation theory than an alternative interpretation based on Ostwald's step rule, in contrast to some prior simulations of related models. In addition to the size of the NaCl nucleus, we find that the crystallinity of a nascent cluster plays an important role in the nucleation process. Nuclei with high crystallinity were found to have higher growth probability and longer lifetimes, possibly because they are less exposed to hydration water.

8. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

Science.gov (United States)

Beaujean, A. Alexander

2014-01-01

A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

9. Limitations of mRNA amplification from small-size cell samples

Directory of Open Access Journals (Sweden)

Myklebost Ola

2005-10-01

Background: Global mRNA amplification has become a widely used approach to obtain gene expression profiles from limited material. An important concern is the reliable reflection of the starting material in the results obtained. This is especially important with extremely low quantities of input RNA, where stochastic effects due to template dilution may be present. This aspect remains under-documented in the literature, as quantitative measures of data reliability are most often lacking. To address this issue, we examined the sensitivity levels of each transcript in 3 different cell sample sizes. ANOVA analysis was used to estimate the overall effects of reduced input RNA in our experimental design. In order to estimate the validity of decreasing sample sizes, we examined the sensitivity levels of each transcript by applying a novel model-based method, TransCount. Results: From expression data, TransCount provided estimates of absolute transcript concentrations in each examined sample. The results from TransCount were used to calculate the Pearson correlation coefficient between transcript concentrations for different sample sizes. The correlations were clearly transcript copy number dependent. A critical level was observed where stochastic fluctuations became significant. The analysis allowed us to pinpoint the gene-specific number of transcript templates that defined the limit of reliability with respect to the number of cells from that particular source. In the sample amplified from 1000 cells, transcripts expressed with at least 121 transcripts/cell were statistically reliable, and for 250 cells the limit was 1806 transcripts/cell. Above these thresholds, correlation between our data sets was at acceptable values for reliable interpretation. Conclusion: These results imply that the reliability of any amplification experiment must be validated empirically to justify that any gene exists in sufficient quantity in the input material. This

10. Sample size and the probability of a successful trial.

Science.gov (United States)

Chuang-Stein, Christy

2006-01-01

This paper describes the distinction between the concept of statistical power and the probability of getting a successful trial. While one can choose a very high statistical power to detect a certain treatment effect, the high statistical power does not necessarily translate to a high success probability if the treatment effect to detect is based on the perceived ability of the drug candidate. The crucial factor hinges on our knowledge of the drug's ability to deliver the effect used to power the study. The paper discusses a framework to calculate the 'average success probability' and demonstrates how uncertainty about the treatment effect could affect the average success probability for a confirmatory trial. It complements an earlier work by O'Hagan et al. (Pharmaceutical Statistics 2005; 4:187-201) published in this journal. Computer codes to calculate the average success probability are included.
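
The "average success probability" can be illustrated by averaging a two-sample z-test power curve over a normal prior on the treatment effect (an O'Hagan-style assurance calculation; the grid-integration details below are ours, not the paper's):

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def power_at(delta, n, sigma=1.0, alpha=0.05):
    # Power of a two-sided, two-sample z-test with n subjects per arm
    # at a true standardized effect delta.
    return Z.cdf(delta * sqrt(n / 2) / sigma - Z.inv_cdf(1 - alpha / 2))

def average_success_probability(delta0, tau, n, sigma=1.0, alpha=0.05,
                                steps=2001):
    # Average the power curve over a N(delta0, tau^2) prior on the
    # effect, by simple grid integration over delta0 +/- 5*tau.
    prior = NormalDist(delta0, tau)
    lo = delta0 - 5 * tau
    h = 10 * tau / (steps - 1)
    return sum(power_at(lo + i * h, n, sigma, alpha)
               * prior.pdf(lo + i * h) * h for i in range(steps))
```

With n = 63 per arm (sized for roughly 80% power at delta = 0.5) and a N(0.5, 0.25²) prior, the average success probability drops to about 0.69, illustrating the paper's point that uncertainty about the effect erodes the nominal power.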

11. Analysis of AC loss in superconducting power devices calculated from short sample data

NARCIS (Netherlands)

Rabbers, J.J.; ten Haken, Bernard; ten Kate, Herman H.J.

2003-01-01

A method to calculate the AC loss of superconducting power devices from the measured AC loss of a short sample is developed. In coils and cables the magnetic field varies spatially. The position dependent field vector is calculated assuming a homogeneous current distribution. From this field profile

12. Air and smear sample calculational tool for Fluor Hanford Radiological control

Energy Technology Data Exchange (ETDEWEB)

BAUMANN, B.L.

2003-07-11

A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and smear counting as outlined in HNF-13536, Section 5.2.7, "Analyzing Air and Smear Samples". This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Currently, an RCT collects an air sample filter or takes a smear for surface contamination, surveys the filter for gross alpha and beta/gamma radioactivity, and then uses either a hand-calculation method or a calculator to determine the activity on the filter. The electronic form allows the RCT, with a few keystrokes, to document the individual's name, payroll number, gross counts, and instrument identifiers, and to produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while simultaneously documenting the air sample.
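
As a purely illustrative sketch of this kind of air-sample arithmetic (the function and parameter names are hypothetical; the actual HNF-13536 procedure may use different units and additional correction factors):

```python
def air_concentration_dpm_per_ml(gross_cpm, bkg_cpm, efficiency,
                                 flow_lpm, minutes):
    # Net count rate -> disintegrations per minute on the filter,
    # divided by the total air volume drawn through it.
    net_dpm = (gross_cpm - bkg_cpm) / efficiency
    volume_ml = flow_lpm * minutes * 1000.0  # litres -> millilitres
    return net_dpm / volume_ml

# 550 cpm gross, 50 cpm background, 25% efficiency, 60 L/min for 100 min:
print(air_concentration_dpm_per_ml(550, 50, 0.25, 60, 100))
```

Encoding this chain in a spreadsheet or script removes exactly the transcription and arithmetic steps where hand-calculation errors arise.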

13. SIMPLE METHOD OF SIZE-SPECIFIC DOSE ESTIMATES CALCULATION FROM PATIENT WEIGHT ON COMPUTED TOMOGRAPHY.

Science.gov (United States)

Iriuchijima, Akiko; Fukushima, Yasuhiro; Nakajima, Takahito; Tsushima, Yoshito; Ogura, Akio

2017-07-28

The purpose of this study is to develop a new and simple methodology for calculating mean size-specific dose estimates (SSDE) over the entire scan range (mSSDE) from weight and volume CT dose index (CTDIvol). We retrospectively analyzed data from a dose index registry. Scan areas were divided into two regions: chest and abdomen-pelvis. The original mSSDE was calculated by a commercially available software. The conversion formulas for mSSDE were estimated from weight and CTDIvol (SSDEweight) in each region. SSDEweight were compared with the original mSSDE using Bland-Altman analysis. Root mean square differences were 1.4 mGy for chest and 1.5 mGy for abdomen-pelvis. Our method using formulae can calculate SSDEweight using weight and CTDIvol without a dedicated software, and can be used to calculate DRL to optimize CT exposure doses. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

14. Sample size reduction in groundwater surveys via sparse data assimilation

KAUST Repository

Hussain, Z.

2013-04-01

In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even lesser measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.

15. Sample size and power determination when limited preliminary information is available

Directory of Open Access Journals (Sweden)

Christine E. McLaren

2017-04-01

Background: We describe a novel strategy for power and sample size determination developed for studies utilizing investigational technologies with limited available preliminary data, specifically imaging biomarkers. We evaluated diffuse optical spectroscopic imaging (DOSI), an experimental noninvasive imaging technique that may be capable of assessing changes in mammographic density. Because there is significant evidence that tamoxifen treatment is more effective at reducing breast cancer risk when accompanied by a reduction of breast density, we designed a study to assess the changes from baseline in DOSI imaging biomarkers that may reflect fluctuations in breast density in premenopausal women receiving tamoxifen. Method: While preliminary data demonstrate that DOSI is sensitive to mammographic density in women about to receive neoadjuvant chemotherapy for breast cancer, there is no information on DOSI in tamoxifen treatment. Since the relationship between magnetic resonance imaging (MRI) and DOSI has been established in previous studies, we developed a statistical simulation approach utilizing information from an investigation of MRI assessment of breast density in 16 women before and after treatment with tamoxifen to estimate the changes in DOSI biomarkers due to tamoxifen. Results: Three sets of 10,000 pairs of MRI breast density data with correlation coefficients of 0.5, 0.8 and 0.9 were simulated and used to generate a corresponding 5,000,000 pairs of DOSI values representing water, ctHHB, and lipid. Minimum sample sizes needed per group for specified clinically relevant effect sizes were obtained. Conclusion: The simulation techniques we describe can be applied in studies of other experimental technologies to obtain the important preliminary data to inform the power and sample size calculations.
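
Simulating measurement pairs with a prescribed correlation coefficient, as done here for the MRI density data, can be sketched with the standard conditional-normal construction (all names are illustrative):

```python
import random
from math import sqrt

def correlated_pairs(n, r, mu=0.0, sd=1.0, seed=0):
    # For standard normals x and e, y = r*x + sqrt(1 - r^2)*e has
    # corr(x, y) = r; shift and scale both to the target mean/SD.
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        e = rng.gauss(0, 1)
        y = r * x + sqrt(1 - r * r) * e
        pairs.append((mu + sd * x, mu + sd * y))
    return pairs

def sample_corr(pairs):
    # Pearson correlation of the simulated pairs, for verification.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / sqrt(sxx * syy)
```

Repeating this for r = 0.5, 0.8 and 0.9 reproduces the structure of the three simulated data sets described in the abstract.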

16. Practical Approaches For Determination Of Sample Size In Paired Case-Control Studies

OpenAIRE

Demirel, Neslihan; Ozlem EGE ORUC; Gurler, Selma

2016-01-01

Objective: Cross-over designs and paired case-control studies used in clinical research are experimental designs that require dependent samples. Sample size determination is generally a difficult step in planning the statistical design. The aim of this study is to provide researchers with a practical approach for determining the sample size in paired case-control studies. Material and Methods: In this study, determination of sample size is mentioned in detail i...

17. Understanding the Role of Text Length, Sample Size and Vocabulary Size in Determining Text Coverage

Science.gov (United States)

Chujo, Kiyomi; Utiyama, Masao

2005-01-01

Although the use of "text coverage" to measure the intelligibility of reading materials is increasing in the field of vocabulary teaching and learning, to date there have been few studies which address the methodological variables that can affect reliable text coverage calculations. The objective of this paper is to investigate how differing…

18. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

Science.gov (United States)

Johnson, Kenneth L.; White, K. Preston, Jr.

2012-01-01

The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.

19. Distance software: design and analysis of distance sampling surveys for estimating population size.

Science.gov (United States)

Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon Rb; Marques, Tiago A; Burnham, Kenneth P

2010-02-01

1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark-recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods

20. Calculation of total runoff and sediment yield from aliquot sampling in rainfall experiments

Science.gov (United States)

Fister, Wolfgang; Tresch, Simon; Marzen, Miriam; Iserloh, Thomas

2017-04-01

The quality of rainfall simulations depends on many different aspects, for example simulator quality, operator experience, water quality, and a lot more. One important aspect, which is often not very well described in literature, is the calculation of total runoff and sediment yield from aliquot sampling of discharged material. More specifically, neither the sampling interval nor the interpolation method is clearly specified in many papers on rainfall simulations. As a result, an independent quality control of the published data is often impossible. Obviously, it would be best to collect everything that comes off the plot in the shortest possible interval. However, high rainfall amounts often coincide with limited transport and analysis capacities. It is, therefore, in most cases necessary to find a good compromise between sampling frequency, interpolation method, and available analysis capacities. In this study we compared different methods to calculate total sediment yield based on aliquot sampling intervals. The methods tested were (1) simple extrapolation of one sample until next sample was collected; (2) averaging between two successive samples; (3) extrapolation of the sediment concentration; (4) extrapolation using a regression function. The results indicate that all methods deliver more or less acceptable results, but errors between 10-25% would have to be taken into account for interpretation of the gained data. The first measurement interval causes highest deviations in almost all tested samples and methods. It is, therefore, essential to capture the initial flush of sediment from the plot most accurately, to be able to calculate reliable total values.
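The two simplest of the four calculation methods compared above can be sketched directly. The functions below are generic illustrations (method 1, step extrapolation; method 2, averaging two successive samples); the variable names and units are assumptions, and runoff rate is taken as constant for brevity.

```python
def total_yield_step(times, conc, flow_rate):
    """Method (1): hold each sampled sediment concentration constant
    until the next sample is taken.
    times in minutes, conc in g/L, flow_rate in L/min (illustrative units)."""
    total = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        total += conc[i] * flow_rate * dt
    return total

def total_yield_average(times, conc, flow_rate):
    """Method (2): average two successive samples (trapezoidal rule)."""
    total = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        total += 0.5 * (conc[i] + conc[i + 1]) * flow_rate * dt
    return total
```

For a declining concentration series, e.g. samples at 0, 5, 10 and 15 min with concentrations 4, 2, 1 and 0.5 g/L at 2 L/min, method 1 yields 70 g while method 2 yields 52.5 g; a rapid initial flush therefore dominates the between-method discrepancy, consistent with the first measurement interval causing the largest deviations.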

1. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

OpenAIRE

Wolf, Erika J.; Harrington, Kelly M.; Shaunna L Clark; Miller, Mark W.

2013-01-01

Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb. This study used Monte Carlo data simulation techniques to evaluate sample size requirements for common applied SEMs. Across a series of simulations, we...

2. Bayesian sample size determination for a clinical trial with correlated continuous and binary outcomes.

Science.gov (United States)

Stamey, James D; Natanegara, Fanni; Seaman, John W

2013-01-01

In clinical trials, multiple outcomes are often collected in order to simultaneously assess effectiveness and safety. We develop a Bayesian procedure for determining the required sample size in a regression model where a continuous efficacy variable and a binary safety variable are observed. The sample size determination procedure is simulation based. The model accounts for correlation between the two variables. Through examples we demonstrate that savings in total sample size are possible when the correlation between these two variables is sufficiently high.

3. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

Energy Technology Data Exchange (ETDEWEB)

Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

2008-12-15

The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 µm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

4. Issues of sample size in sensitivity and specificity analysis with special reference to oncology

Directory of Open Access Journals (Sweden)

Atul Juneja

2015-01-01

Full Text Available Sample size is one of the basic issues that medical researchers, including oncologists, face in any research program. This communication discusses the computation of sample size when sensitivity and specificity are being evaluated. The article presents situations the researcher can readily visualize for the appropriate use of sample size techniques for sensitivity and specificity when a screening method for early detection of cancer is in question. Moreover, the researcher will be in a position to communicate efficiently with a statistician about sample size computation and, most importantly, about the applicability of the results under the conditions of the negotiated precision.

5. Issues of sample size in sensitivity and specificity analysis with special reference to oncology.

Science.gov (United States)

Juneja, Atul; Sharma, Shashi

2015-01-01

Sample size is one of the basic issues that medical researchers, including oncologists, face in any research program. This communication discusses the computation of sample size when sensitivity and specificity are being evaluated. The article presents situations the researcher can readily visualize for the appropriate use of sample size techniques for sensitivity and specificity when a screening method for early detection of cancer is in question. Moreover, the researcher will be in a position to communicate efficiently with a statistician about sample size computation and, most importantly, about the applicability of the results under the conditions of the negotiated precision.

6. Sample Size for Measuring Grammaticality in Preschool Children from Picture-Elicited Language Samples

Science.gov (United States)

Eisenberg, Sarita L.; Guo, Ling-Yu

2015-01-01

Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…

7. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

Science.gov (United States)

Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

2016-06-01

When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (ES), that is, the hypothesized difference in means (δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of δ and σ as the averaging weight, is used, and the value of ES is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of ES found using this method may be expressed as a function of the prior means of δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for
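The idea of averaging the power curve over parameter priors can be sketched numerically. The code below is a simplified illustration, not the authors' closed-form method: it uses a two-sample z-test power formula and independent normal priors on the mean difference δ and SD σ, with all numeric values hypothetical.

```python
import math
import random

def power_z(delta, sigma, n_per_arm, z_crit=1.96):
    """Classical power of a two-sided, two-sample z-test."""
    ncp = (delta / sigma) * math.sqrt(n_per_arm / 2.0)
    # Phi(ncp - z_crit), ignoring the negligible far tail of the two-sided test
    return 0.5 * (1.0 + math.erf((ncp - z_crit) / math.sqrt(2.0)))

def conditional_expected_power(n_per_arm, mu_delta, sd_delta, mu_sigma, sd_sigma,
                               n_draws=20000, seed=7):
    """Average the power curve over (independent, normal) priors on delta
    and sigma; the priors here are illustrative, not from the article."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_draws):
        d = rng.gauss(mu_delta, sd_delta)
        s = max(abs(rng.gauss(mu_sigma, sd_sigma)), 1e-9)  # keep sigma positive
        acc += power_z(d, s, n_per_arm)
    return acc / n_draws
```

Because the power curve is averaged over parameter uncertainty, the conditional expected power at the prior means typically falls below the classical power computed at those means, which is exactly why the naïve effect size (the ratio of prior means) must be down-weighted.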

8. Sampling surface particle size distributions and stability analysis of deep channel in the Pearl River Estuary

Science.gov (United States)

Feng, Hao-chuan; Zhang, Wei; Zhu, Yu-liang; Lei, Zhi-yi; Ji, Xiao-mei

2017-06-01

Particle size distributions (PSDs) of bottom sediments in a coastal zone are generally multimodal due to the complexity of the dynamic environment. In this paper, bottom sediments along the deep channel of the Pearl River Estuary (PRE) are used to understand the multimodal PSDs' characteristics and the corresponding depositional environment. The results of curve-fitting analysis indicate that the near-bottom sediments in the deep channel generally have a bimodal distribution with a fine component and a relatively coarse component. The particle size distribution of bimodal sediment samples can be expressed as the sum of two lognormal functions and the parameters for each component can be determined. At each station of the PRE, the fine component makes up less volume of the sediments and is relatively poorly sorted. The relatively coarse component, which is the major component of the sediments, is even more poorly sorted. The interrelations between the dynamics and particle size of the bottom sediment in the deep channel of the PRE have also been investigated by the field measurement and simulated data. The critical shear velocity and the shear velocity are calculated to study the stability of the deep channel. The results indicate that the critical shear velocity has a similar distribution over large part of the deep channel due to the similar particle size distribution of sediments. Based on a comparison between the critical shear velocities derived from sedimentary parameters and the shear velocities obtained by tidal currents, it is likely that the depositional area is mainly distributed in the northern part of the channel, while the southern part of the deep channel has to face higher erosion risk.

9. Estimating everyday portion size using a 'method of constant stimuli': in a student sample, portion size is predicted by gender, dietary behaviour, and hunger, but not BMI.

Science.gov (United States)

Brunstrom, Jeffrey M; Rogers, Peter J; Pothos, Emmanuel M; Calitri, Raff; Tapper, Katy

2008-09-01

This paper (i) explores the proposition that body weight is associated with large portion sizes and (ii) introduces a new technique for measuring everyday portion size. In our paradigm, the participant is shown a picture of a food portion and is asked to indicate whether it is larger or smaller than their usual portion. After responding to a range of different portions an estimate of everyday portion size is calculated using probit analysis. Importantly, this estimate is likely to be robust because it is based on many responses. First-year undergraduate students (N=151) completed our procedure for 12 commonly consumed foods. As expected, portion sizes were predicted by gender and by a measure of dieting and dietary restraint. Furthermore, consistent with reports of hungry supermarket shoppers, portion-size estimates tended to be higher in hungry individuals. However, we found no evidence for a relationship between BMI and portion size in any of the test foods. We consider reasons why this finding should be anticipated. In particular, we suggest that the difference in total energy expenditure of individuals with a higher and lower BMI is too small to be detected as a concomitant difference in portion size (at least in our sample).

10. The Quality of the Embedding Potential Is Decisive for Minimal Quantum Region Size in Embedding Calculations

DEFF Research Database (Denmark)

Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J

2017-01-01

The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally...

11. Post-stratified estimation: with-in strata and total sample size recommendations

Science.gov (United States)

James A. Westfall; Paul L. Patterson; John W. Coulston

2011-01-01

Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

12. Limitations of Significance Testing in Clinical Research: A Review of Multiple Comparison Corrections and Effect Size Calculations with Correlated Measures.

Science.gov (United States)

Vasilopoulos, Terrie; Morey, Timothy E; Dhatariya, Ketan; Rice, Mark J

2016-03-01

Modern clinical research commonly uses complex designs with multiple related outcomes, including repeated-measures designs. While multiple comparison corrections and effect size calculations are needed to more accurately assess an intervention's significance and impact, understanding the limitations of these methods in the case of dependency and correlation is important. In this review, we outline methods for multiple comparison corrections and effect size calculations and considerations in cases of correlation and summarize relevant simulation studies to illustrate these concepts.

13. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

Science.gov (United States)

2015-08-01

ARL-RP-0528 ● AUG 2015 ● US Army Research Laboratory. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride. Interestingly, the dislocation plasticity of single-crystal AlN strongly depends on specimen size. As shown in Fig. 5a and b, the large plastic...

14. Not too big, not too small: a goldilocks approach to sample size selection.

Science.gov (United States)

Broglio, Kristine R; Connor, Jason T; Berry, Scott M

2014-01-01

We present a Bayesian adaptive design for a confirmatory trial to select a trial's sample size based on accumulating data. During accrual, frequent sample size selection analyses are made and predictive probabilities are used to determine whether the current sample size is sufficient or whether continuing accrual would be futile. The algorithm explicitly accounts for complete follow-up of all patients before the primary analysis is conducted. We refer to this as a Goldilocks trial design, as it is constantly asking the question, "Is the sample size too big, too small, or just right?" We describe the adaptive sample size algorithm, describe how the design parameters should be chosen, and show examples for dichotomous and time-to-event endpoints.
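The predictive-probability machinery behind such an adaptive sample size design can be sketched for a single-arm dichotomous endpoint. This is a generic beta-binomial illustration, not the authors' algorithm; the flat prior, success threshold, posterior cutoff, and simulation sizes are all assumptions.

```python
import random

def predictive_probability_success(successes, n_current, n_max,
                                   p_threshold=0.5, post_prob_needed=0.975,
                                   n_sims=2000, seed=3):
    """Bayesian predictive probability that, after enrolling up to n_max
    patients, the posterior Pr(p > p_threshold) exceeds post_prob_needed.
    Beta(1, 1) prior; all thresholds are illustrative, not from the paper."""
    rng = random.Random(seed)
    remaining = n_max - n_current
    wins = 0
    for _ in range(n_sims):
        # draw p from the current posterior, then simulate the remaining patients
        p = rng.betavariate(1 + successes, 1 + n_current - successes)
        future = sum(1 for _ in range(remaining) if rng.random() < p)
        s = successes + future
        # posterior Pr(p > p_threshold) by Monte Carlo over the final posterior
        post = sum(1 for _ in range(200)
                   if rng.betavariate(1 + s, 1 + n_max - s) > p_threshold) / 200
        wins += post > post_prob_needed
    return wins / n_sims
```

At each interim look, a high predictive probability suggests the current sample size is already sufficient ("just right"), while a very low one suggests continuing accrual would be futile.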

15. Sample size determination in group-sequential clinical trials with two co-primary endpoints

Science.gov (United States)

Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

2014-01-01

We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

16. Guide for Calculating and Interpreting Effect Sizes and Confidence Intervals in Intellectual and Developmental Disability Research Studies

Science.gov (United States)

Dunst, Carl J.; Hamby, Deborah W.

2012-01-01

This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…

17. The impact of different sampling rates and calculation time intervals on ROTI values

Directory of Open Access Journals (Sweden)

Jacobsen Knut Stanley

2014-01-01

Full Text Available The ROTI (Rate of TEC Index) is a commonly used measure of the ionospheric irregularity level. The algorithm to calculate ROTI is easily implemented and is the same from paper to paper. However, the sample rate of the GNSS data used, and the time interval over which a value of ROTI is calculated, vary from paper to paper. When comparing ROTI values from different studies, this must be taken into account. This paper aims to show what these differences are, to increase awareness of this issue. We have investigated the effect of different parameters for the calculation of ROTI values, using one year of data from 8 receivers at latitudes ranging from 59° N to 79° N. We have found that the ROTI values calculated using different parameter choices are strongly positively correlated; however, the ROTI values themselves are quite different. The effect of a lower sample rate is to lower the ROTI value, due to the loss of high-frequency parts of the ROT spectrum, while the effect of a longer calculation time interval is to remove or reduce short-lived peaks due to the inherent smoothing effect. The ratio of ROTI values based on data of different sampling rates is examined in relation to the ROT power spectrum. Of relevance to statistical studies, we find that the median level of ROTI depends strongly on sample rate, strongly on latitude at auroral latitudes, and weakly on time interval. Thus, a baseline "quiet" or "noisy" level for one location or choice of parameters may not be valid for another location or choice of parameters.
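The core calculation and the sample-rate effect described above can be reproduced in a few lines. This sketch implements the standard ROTI definition (the standard deviation of the rate of TEC, ROT = dTEC/dt, over a window) on a synthetic TEC series; the units, window lengths, and noise level are illustrative.

```python
import math
import random

def roti(tec, dt_seconds, window_samples):
    """ROTI per non-overlapping window: std-dev of ROT (TECU/min)."""
    dt_min = dt_seconds / 60.0
    rot = [(tec[i + 1] - tec[i]) / dt_min for i in range(len(tec) - 1)]
    out = []
    for start in range(0, len(rot) - window_samples + 1, window_samples):
        w = rot[start:start + window_samples]
        m = sum(w) / len(w)
        out.append(math.sqrt(max(sum(x * x for x in w) / len(w) - m * m, 0.0)))
    return out

# Illustrative: the same noisy TEC series sampled at 1 s vs 30 s.
rng = random.Random(0)
tec_1s = [0.0]
for _ in range(600):
    tec_1s.append(tec_1s[-1] + rng.gauss(0.0, 0.05))  # synthetic TEC random walk
tec_30s = tec_1s[::30]                                # down-sampled copy
roti_hi = roti(tec_1s, 1.0, 60)    # 1-min windows at 1 Hz
roti_lo = roti(tec_30s, 30.0, 10)  # 5-min windows at 1/30 Hz
```

On this synthetic series the down-sampled ROTI is markedly lower, because differencing at 30 s discards the high-frequency part of the ROT spectrum, which is the effect the paper quantifies.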

18. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review

Science.gov (United States)

Miao, Yinglong; McCammon, J. Andrew

2016-01-01

Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631

19. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

Science.gov (United States)

Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

2017-06-01

We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.

20. Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.

Science.gov (United States)

Power, Stephanie M; Matic, Damir B

2013-03-01

Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represents average outcomes. Secondary objectives are to determine whether outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed to within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
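The sample-size reasoning in this study follows the standard confidence-interval half-width formula, which can be sketched directly. The rating SD used in the example call is an assumption for illustration, not the study's estimate.

```python
import math

def half_width(sd, n, z=1.96):
    """95% CI half-width around a mean rating from n consecutive cases."""
    return z * sd / math.sqrt(n)

def cases_needed(sd, target_half_width, z=1.96):
    """Smallest n whose CI half-width does not exceed the target."""
    return math.ceil((z * sd / target_half_width) ** 2)
```

For instance, under an assumed between-case rating SD of about 1.3 points, `cases_needed(1.3, 0.5)` gives roughly 26 cases for a ±0.5 half-width on a 10-point scale, the same order as the 27 consecutive cases the study arrives at.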

1. Empirically determining the sample size for large-scale gene network inference algorithms.

Science.gov (United States)

Altay, G

2012-04-01

The performance of genome-wide gene regulatory network inference algorithms depends on the sample size. It is generally considered that the larger the sample size, the better the gene network inference performance. Nevertheless, there is not adequate information on determining the sample size for optimal performance. In this study, the author systematically demonstrates the effect of sample size on information-theory-based gene network inference algorithms with an ensemble approach. The empirical results showed that the inference performances of the considered algorithms tend to converge beyond a particular sample size region. As a specific example, a sample size around ≃64 is sufficient to obtain most of the inference performance with respect to precision using the representative algorithm C3NET on synthetic steady-state data sets of Escherichia coli and a time-series data set of a Homo sapiens subnetwork. The author verified the convergence result on a large, real data set of E. coli as well. The results give biologists evidence for better designing experiments to infer gene networks. Further, the effect of cutoff on inference performance over various sample sizes is considered. [Includes supplementary material].

2. Reduced Sampling Size with Nanopipette for Tapping-Mode Scanning Probe Electrospray Ionization Mass Spectrometry Imaging.

Science.gov (United States)

Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi

2016-01-01

Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing sampling size is an important goal for high-resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. On the other hand, ionization efficiency increased with decreasing solvent flow rate. Our results indicate the compatibility between a reduced sampling area and ionization efficiency using a nanopipette. MSI of micropatterns of ink on glass and polymer substrates was also demonstrated.

3. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

Directory of Open Access Journals (Sweden)

Thomaz C. e C. da Costa

2004-12-01

Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty, which is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly approximated by a binomial function without the use of a pilot sample; in that case the accuracy is not estimated but fixed a priori, and if the estimated accuracy diverges from the a priori value, the sampling error will deviate from the expected error. The theoretically correct procedure, sizing the reference sample from a pilot sample, is justified when no accuracy estimate exists for the study area or for the remote sensing product in use.
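The a priori binomial sizing described above can be sketched as follows; the function name and the 85% / ±5-percentage-point figures are illustrative assumptions, not values from the study.

```python
from math import ceil

def reference_sample_size(p_hat, half_width, z=1.96):
    """Binomial approximation for the number of reference samples needed
    to estimate map accuracy p_hat to within +/- half_width (95% CI)."""
    return ceil(z**2 * p_hat * (1 - p_hat) / half_width**2)

# A priori accuracy of 85%, estimated to within +/- 5 percentage points:
n = reference_sample_size(0.85, 0.05)
```

Note that the required n is largest when the assumed accuracy is 50%, which is why fixing accuracy a priori instead of estimating it from a pilot sample can over- or under-size the reference sample.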

4. Efficient calculation of SAMPL4 hydration free energies using OMEGA, SZYBKI, QUACPAC, and Zap TK.

Science.gov (United States)

Ellingson, Benjamin A; Geballe, Matthew T; Wlodek, Stanislaw; Bayly, Christopher I; Skillman, A Geoffrey; Nicholls, Anthony

2014-03-01

Several submissions for the SAMPL4 hydration free energy set were calculated using OpenEye tools, including many that were among the top performing submissions. All of our best submissions used AM1BCC charges and Poisson-Boltzmann solvation. Three submissions used a single conformer for calculating the hydration free energy and all performed very well with mean unsigned errors ranging from 0.94 to 1.08 kcal/mol. These calculations were very fast, only requiring 0.5-2.0 s per molecule. We observed that our two single-conformer methodologies have different types of failure cases and that these differences could be exploited for determining when the methods are likely to have substantial errors.

5. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

Directory of Open Access Journals (Sweden)

Wei Lin Teoh

Full Text Available Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shift, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. We therefore show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on, because the MRL provides an intuitive and fair representation of the central tendency, especially for the right-skewed run-length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on average while maintaining the same detection speed as the two former charts. An example involving the amount of potassium sorbate added in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.
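The ARL/MRL distinction drawn above can be illustrated in the simplest case: for a Shewhart-type chart the run length is geometric, so its median sits well below its mean. This is a hedged sketch; the signal probability used is the textbook in-control value for 3-sigma limits, not a figure from the paper.

```python
import math

def shewhart_arl_mrl(p):
    """For a Shewhart-type chart the run length is geometric with signal
    probability p per sample: ARL = 1/p, while the MRL is the smallest m
    with P(RL <= m) >= 0.5, i.e. ceil(log 0.5 / log(1 - p))."""
    arl = 1.0 / p
    mrl = math.ceil(math.log(0.5) / math.log(1.0 - p))
    return arl, mrl

# In-control, 3-sigma limits: p ~ 0.0027
arl, mrl = shewhart_arl_mrl(0.0027)
```

The median run length is roughly 30% below the ARL here, which is the skewness argument for preferring the MRL as a central-tendency summary.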

6. MUDMASTER: A Program for Calculating Crystalline Size Distributions and Strain from the Shapes of X-Ray Diffraction Peaks

Science.gov (United States)

Eberl, D.D.; Drits, V.A.; Środoń, Jan; Nüesch, R.

1996-01-01

Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size can not be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming. It is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) solves these shortcomings. An X-ray scan of a sample occurs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.

7. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

Science.gov (United States)

Lawson, Chris A

2014-07-01

8. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

Science.gov (United States)

Usami, Satoshi

2017-03-01

Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that sample sizes estimated from the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of assumptions regarding the known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

9. Publishing nutrition research: a review of sampling, sample size, statistical analysis, and other key elements of manuscript preparation, Part 2.

Science.gov (United States)

Boushey, Carol J; Harris, Jeffrey; Bruemmer, Barbara; Archer, Sujata L

2008-04-01

Members of the Board of Editors recognize the importance of providing a resource for researchers to insure quality and accuracy of reporting in the Journal. This second monograph of a periodic series focuses on study sample selection, sample size, and common statistical procedures using parametric methods, and the presentation of statistical methods and results. Attention to sample selection and sample size is critical to avoid study bias. When outcome variables adhere to a normal distribution, then parametric procedures can be used for statistical inference. Documentation that clearly outlines the steps used in the research process will advance the science of evidence-based practice in nutrition and dietetics. Real examples from problem sets and published literature are provided, as well as reference to books and online resources.

10. Including gaussian uncertainty on the background estimate for upper limit calculations using Poissonian sampling

CERN Document Server

Lista, L

2004-01-01

A procedure to include the uncertainty on the background estimate for upper limit calculations using Poissonian sampling is presented for the case where a Gaussian assumption on the uncertainty can be made. Under that hypothesis an analytic expression of the likelihood is derived which can be written in terms of polynomials defined by recursion. This expression may lead to a significant speed up of computing applications that extract the upper limits using Toy Monte Carlo.
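The marginalised likelihood described above can be sketched as follows; this is a hedged reconstruction from the abstract, with \hat{b} and \sigma_b denoting the background estimate and its Gaussian uncertainty, and the normalisation and truncation at b = 0 being assumptions rather than details from the paper.

```latex
\mathcal{L}(s) \;=\; \int_0^{\infty}
    \frac{(s+b)^{n}\, e^{-(s+b)}}{n!}\;
    \frac{1}{\sqrt{2\pi}\,\sigma_b}\,
    \exp\!\left(-\frac{(b-\hat{b})^{2}}{2\sigma_b^{2}}\right) \mathrm{d}b
```

Expanding (s+b)^n with the binomial theorem turns the integral into a sum of Gaussian moments, which satisfy a simple recursion; this is presumably the source of the "polynomials defined by recursion" mentioned in the abstract, and it is what allows the analytic expression to replace a toy Monte Carlo integration.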

11. Measurements of Plutonium and Americium in Soil Samples from Project 57 using the Suspended Soil Particle Sizing System (SSPSS)

Energy Technology Data Exchange (ETDEWEB)

John L. Bowen; Rowena Gonzalez; David S. Shafer

2001-05-01

As part of the preliminary site characterization conducted for Project 57, soil samples were collected for separation into several size-fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it would represent anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculation and corrective action-level determinations for future land-use scenarios at the site.

12. A multi-cyclone sampling array for the collection of size-segregated occupational aerosols.

Science.gov (United States)

Mischler, Steven E; Cauda, Emanuele G; Di Giuseppe, Michelangelo; Ortiz, Luis A

2013-01-01

In this study a serial multi-cyclone sampling array capable of simultaneously sampling particles of multiple size fractions, from an occupational environment, for use in in vivo and in vitro toxicity studies and physical/chemical characterization, was developed and tested. This method is an improvement over current methods used to size-segregate occupational aerosols for characterization, due to its simplicity and its ability to collect sufficient masses of nano- and ultrafine sized particles for analysis. This method was evaluated in a chamber providing a uniform atmosphere of dust concentrations using crystalline silica particles. The multi-cyclone sampling array was used to segregate crystalline silica particles into four size fractions, from a chamber concentration of 10 mg/m(3). The size distributions of the particles collected at each stage were confirmed, in the air, before and after each cyclone stage. Once collected, the particle size distribution of each size fraction was measured using light scattering techniques to further confirm the size distributions. As a final confirmation, scanning electron microscopy was used to collect images of each size fraction. The results presented here, using multiple measurement techniques, show that this multi-cyclone system was able to successfully collect distinct size-segregated particles at sufficient masses to perform toxicological evaluations and physical/chemical characterization.

13. Mineralogical, optical, geochemical, and particle size properties of four sediment samples for optical physics research

Science.gov (United States)

Bice, K.; Clement, S. C.

1981-01-01

X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.

14. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology.

Science.gov (United States)

Brown, Caleb Marshall; Vavrek, Matthew J

2015-01-01

Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes.

15. Applicability of submerged jet model to describe the liquid sample load into measuring chamber of micron and submillimeter sizes

Science.gov (United States)

Bulyanitsa, A. L.; Belousov, K. I.; Evstrapov, A. A.

2017-11-01

The load of a liquid sample into a measuring chamber is one of the stages of substance analysis in modern devices. Fluid flow is effectively calculated by numerical simulation using application packages such as COMSOL MULTIPHYSICS. At the same time, it is often desirable to have an approximate analytical solution. The applicability of a submerged jet model for simulating the liquid sample load is considered for chambers with sizes from hundreds of micrometers to several millimeters. The paper examines the extent to which the introduction of corrections for jet cutting, and the jet's replacement with an energy-equivalent jet, provides acceptable accuracy for evaluating the dynamics of the loading process.

16. Sample size estimates for determining treatment effects in high-risk patients with early relapsing-remitting multiple sclerosis.

Science.gov (United States)

Scott, Thomas F; Schramke, Carol J; Cutter, Gary

2003-06-01

Risk factors for short-term progression in early relapsing remitting MS have been identified recently. Previously we determined potential risk factors for rapid progression of early relapsing remitting MS and identified three groups of high-risk patients. These non-mutually exclusive groups of patients were drawn from a consecutively studied sample of 98 patients with newly diagnosed MS. High-risk patients had a history of either poor recovery from initial attacks, more than two attacks in the first two years of disease, or a combination of at least four other risk factors. To determine differences in sample sizes required to show a meaningful treatment effect when using a high-risk sample versus a random sample of patients. Power analyses were used to calculate the different sample sizes needed for hypothetical treatment trials. We found that substantially smaller numbers of patients should be needed to show a significant treatment effect by employing these high-risk groups of patients as compared to a random population of MS patients (e.g., 58% reduction in sample size in one model). The use of patients at higher risk of progression to perform drug treatment trials can be considered as a means to reduce the number of patients needed to show a significant treatment effect for patients with very early MS.
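The power argument above can be sketched with a standard two-proportion sample-size formula. The progression rates below are hypothetical, chosen only to mimic the "enriched high-risk sample needs fewer patients" effect, and are not taken from the study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size for comparing two proportions
    (normal approximation with pooled variance under H0, two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical scenario: treatment halves the progression rate.
n_random = n_per_group(0.30, 0.15)    # unselected sample, 30% progress
n_highrisk = n_per_group(0.60, 0.30)  # high-risk sample, 60% progress
```

Because the absolute treatment difference is larger in the enriched sample, the required per-group size drops sharply, which is the mechanism behind the reduction reported in the abstract.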

17. The Procalcitonin And Survival Study (PASS) – A Randomised multi-center investigator-initiated trial to investigate whether daily measurements of the biomarker Procalcitonin, and pro-active diagnostic and therapeutic responses to abnormal Procalcitonin levels, can improve survival in intensive care unit patients. Calculated sample size (target population): 1000 patients

Directory of Open Access Journals (Sweden)

Fjeldborg Paul

2008-07-01

Full Text Available Abstract Background Sepsis and complications to sepsis are major causes of mortality in critically ill patients. Rapid treatment of sepsis is of crucial importance for survival of patients. The infectious status of the critically ill patient is often difficult to assess because symptoms cannot be expressed and signs may present atypically. The established biological markers of inflammation (leucocytes, C-reactive protein) may often be influenced by parameters other than infection, and may be released unacceptably slowly after progression of an infection. At the same time, lack of relevant antimicrobial therapy in the early course of an infection may be fatal for the patient. Specific and rapid markers of bacterial infection have therefore been sought for use in these patients. Methods Multi-centre randomized controlled interventional trial. Powered for superiority and non-inferiority on all measured end points. Complies with "Good Clinical Practice" (ICH-GCP) Guideline (CPMP/ICH/135/95) and Directive 2001/20/EC. Inclusion: (1) Age ≥ 18 years, (2) Admitted to the participating intensive care units, (3) Signed written informed consent. Exclusion: (1) Known hyperbilirubinaemia or hypertriglyceridaemia, (2) Likely that safety is compromised by blood sampling, (3) Pregnant or breast feeding. Computerized randomisation: Two arms (1:1, n = 500 per arm): Arm 1: standard of care. Arm 2: standard of care and Procalcitonin-guided diagnostics and treatment of infection. Primary Trial Objective: To address whether daily Procalcitonin measurements and immediate diagnostic and therapeutic response to day-to-day changes in Procalcitonin can reduce the mortality of critically ill patients. Discussion For the first time ever, a mortality-endpoint, large scale randomized controlled trial with a biomarker-guided strategy compared to the best standard of care is conducted in an intensive care setting. Results will, with a high statistical power answer the question: Can the survival

18. Sample Size Determination in a Chi-Squared Test Given Information from an Earlier Study.

Science.gov (United States)

Gillett, Raphael

1996-01-01

A rigorous method is outlined for using information from a previous study and explicitly taking into account the variability of an effect size estimate when determining sample size for a chi-squared test. This approach assures that the average power of all experiments in a discipline attains the desired level. (SLD)
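For a 1-df chi-squared test, the power calculation underlying this kind of sample size determination can be sketched directly, because a noncentral chi-square with one degree of freedom is the square of a shifted normal. The effect size w = 0.3 below is Cohen's conventional "medium" value, used here purely as an illustrative input, not a figure from the article.

```python
from math import sqrt
from statistics import NormalDist

def chi2_power_df1(n, w, alpha=0.05):
    """Power of a 1-df chi-squared test with effect size w and n subjects.
    For df=1 the noncentral chi-square equals the square of N(sqrt(n)*w, 1),
    so power can be computed from the normal CDF alone."""
    nd = NormalDist()
    crit = nd.inv_cdf(1 - alpha / 2)  # sqrt of the chi-squared critical value
    ncp = sqrt(n) * w
    return (1 - nd.cdf(crit - ncp)) + nd.cdf(-crit - ncp)

def n_for_power(w, power=0.8, alpha=0.05):
    """Smallest n reaching the desired power (power is increasing in n)."""
    n = 2
    while chi2_power_df1(n, w, alpha) < power:
        n += 1
    return n

n = n_for_power(0.3)  # medium effect, alpha = .05, power = .80
```

Gillett's point is that plugging a previous study's point estimate of w straight into such a routine ignores the estimate's sampling variability; the routine itself is only the deterministic core of the calculation.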

19. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

Science.gov (United States)

Schoeneberger, Jason A.

2016-01-01

The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

20. Estimating sample size for a small-quadrat method of botanical ...

African Journals Online (AJOL)

... in eight plant communities in the Nylsvley Nature Reserve. Illustrates with a table. Keywords: Botanical surveys; Grass density; Grasslands; Mixed Bushveld; Nylsvley Nature Reserve; Quadrat size species density; Small-quadrat method; Species density; Species richness; botany; sample size; method; survey; south africa

1. OPTIMAL SAMPLE SIZE FOR STATISTICAL ANALYSIS OF WINTER WHEAT QUANTITATIVE TRAITS

OpenAIRE

Andrijana Eđed; Dražen Horvat; Zdenko Lončarić

2009-01-01

In the planning phase of every research particular attention should be dedicated to estimation of optimal sample size, aiming to obtain more precise and objective results of statistical analysis. The aim of this paper was to estimate optimal sample size of wheat yield components (plant height, spike length, number of spikelets per spike, number of grains per spike, weight of grains per spike and 1000 grains weight) for determination of statistically significant differences between two treatme...

2. Evaluation of different sized blood sampling tubes for thromboelastometry, platelet function, and platelet count

DEFF Research Database (Denmark)

Andreasen, Jo Bønding; Pistor-Riebold, Thea Unger; Knudsen, Ingrid Hell

2014-01-01

Background: To minimise the volume of blood used for diagnostic procedures, especially in children, we investigated whether the size of sample tubes affected whole blood coagulation analyses. Methods: We included 20 healthy individuals for rotational thromboelastometry (RoTEM®) analyses...

3. Sample size for equivalence trials: a case study from a vaccine lot consistency trial.

Science.gov (United States)

Ganju, Jitendra; Izu, Allen; Anemona, Alessandra

2008-08-30

For some trials, simple but subtle assumptions can have a profound impact on the size of the trial. A case in point is a vaccine lot consistency (or equivalence) trial. Standard sample size formulas used for designing lot consistency trials rely on only one component of variation, namely, the variation in antibody titers within lots. The other component, the variation in the means of titers between lots, is assumed to be equal to zero. In reality, some amount of variation between lots, however small, will be present even under the best manufacturing practices. Using data from a published lot consistency trial, we demonstrate that when the between-lot variation is only 0.5 per cent of the total variation, the increase in the sample size is nearly 300 per cent when compared with the size assuming that the lots are identical. The increase in the sample size is so pronounced that in order to maintain power one is led to consider a less stringent criterion for demonstration of lot consistency. The appropriate sample size formula that is a function of both components of variation is provided. We also discuss the increase in the sample size due to correlated comparisons arising from three pairs of lots as a function of the between-lot variance.
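The mechanism described above can be sketched with a simplified power-budget calculation; this is not the paper's formula, only an illustration of why a between-lot variance component, which does not shrink with n, inflates the required per-lot sample size. All numbers (unit within-lot variance, margin, the 0.5% between-lot share) are assumptions chosen for the sketch.

```python
from math import ceil
from statistics import NormalDist

def n_per_lot(sigma_w2, sigma_b2, delta, alpha=0.05, power=0.9):
    """Sketch: per-lot n for an equivalence comparison of two lot means
    with margin delta. The variance of the observed difference is
    2*(sigma_w2/n + sigma_b2); only the within-lot term shrinks with n,
    so the between-lot term eats into the fixed variance 'budget'
    (delta / (z_alpha + z_beta))**2 directly."""
    z = NormalDist().inv_cdf
    budget = (delta / (z(1 - alpha) + z(power))) ** 2
    remaining = budget - 2 * sigma_b2
    if remaining <= 0:
        raise ValueError("between-lot variation alone exceeds the margin budget")
    return ceil(2 * sigma_w2 / remaining)

n_identical = n_per_lot(1.0, 0.0, 0.30)    # lots assumed truly identical
n_realistic = n_per_lot(1.0, 0.005, 0.30)  # 0.5% of variance between lots
```

Even a between-lot component of half a percent of the total variance blows the sample size up by an order of magnitude in this sketch, which is the qualitative phenomenon the trial data demonstrate.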

4. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

Directory of Open Access Journals (Sweden)

Smith Jonathan C.

2016-01-01

Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for the scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

5. A margin based approach to determining sample sizes via tolerance bounds.

Energy Technology Data Exchange (ETDEWEB)

Newcomer, Justin T.; Freeland, Katherine Elizabeth

2013-09-01

This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
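The "choose n so the tolerance-bound uncertainty fits inside the margin" idea above can be sketched with a standard one-sided normal tolerance factor. The Natrella-style approximation used here, and the margin of 2.5 sample standard deviations, are illustrative assumptions, not the report's exact procedure.

```python
from math import sqrt
from statistics import NormalDist

def k_factor(n, p=0.95, conf=0.95):
    """Approximate one-sided normal tolerance factor: the bound
    xbar + k*s covers a proportion p of the population with confidence
    conf (Natrella's approximation to the noncentral-t exact value)."""
    z = NormalDist().inv_cdf
    zp, zc = z(p), z(conf)
    a = 1 - zc**2 / (2 * (n - 1))
    b = zp**2 - zc**2 / n
    return (zp + sqrt(zp**2 - a * b)) / a

def min_sample_size(margin_over_s, n_max=1000):
    """Smallest n whose tolerance factor k(n), i.e. the bound's distance
    from the mean in units of the sample SD, fits inside the margin."""
    for n in range(3, n_max + 1):
        if k_factor(n) < margin_over_s:
            return n
    return None

n = min_sample_size(2.5)
```

Because k(n) shrinks toward z_p as n grows, a small margin forces a large n; when only a few units are available, the tolerance bound may exceed the margin even though the underlying margin is positive, which is exactly the risk scenario the report addresses.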

6. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

Science.gov (United States)

Shih, Weichung Joe; Li, Gang; Wang, Yining

2016-03-01

Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
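The type-I-error-preservation property of the weighted method described above can be checked by simulation: with stage weights fixed in advance, the combined statistic stays N(0,1) under H0 even when the stage-2 sample size is chosen from the interim data. The interim rule and all sample sizes below are invented for illustration.

```python
import math
import random

def chw_sim(n_sims=20000, n1=50, seed=1):
    """Monte Carlo sketch of the weighted-statistic method: equal,
    pre-fixed weights combine two stage-wise z-statistics. Stage-2 size
    depends on the interim z, yet the H0 rejection rate stays ~alpha."""
    rng = random.Random(seed)
    w1 = w2 = math.sqrt(0.5)      # fixed in the protocol, w1^2 + w2^2 = 1
    rejections = 0
    for _ in range(n_sims):
        z1 = sum(rng.gauss(0, 1) for _ in range(n1)) / math.sqrt(n1)
        # data-dependent re-sizing: a 'promising' interim -> larger stage 2
        n2 = 150 if z1 > 0.5 else 50
        z2 = sum(rng.gauss(0, 1) for _ in range(n2)) / math.sqrt(n2)
        if w1 * z1 + w2 * z2 > 1.6449:   # one-sided alpha = 0.05
            rejections += 1
    return rejections / n_sims
```

The key design choice is that the weights come from the protocol, not from the realised sample sizes; the unweighted (likelihood) statistic with the same adaptive rule would need an adjusted critical value instead.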

7. The effect of noise and sampling size on vorticity measurements in rotating fluids

Science.gov (United States)

Wong, Kelvin K. L.; Kelso, Richard M.; Mazumdar, Jagannath; Abbott, Derek

2008-11-01

This paper describes a new technique for presenting information based on given flow images. Using a multistep first-order differentiation technique, we are able to map, in two dimensions, the vorticity of fluid within a region of investigation. We can then present the distribution of this property in space by means of a color intensity map. In particular, the state of fluid rotation can be displayed using maps of vorticity flow values. The framework that is implemented can also be used to quantify vortices using statistical properties derived from such vorticity flow maps. To test our methodology, we have devised artificial vortical flow fields using an analytical formulation of a single vortex. Our results show that the flow vector sampling size and noise in the flow field affect the reliability of the generated vorticity maps. Based on histograms of these maps, we are able to establish an optimised configuration that computes vorticity fields to approximate the ideal vortex statistically. The novel concept outlined in this study can be used to reduce fluctuations of noise in a vorticity calculation based on imperfect flow information without excessive loss of its features, and thereby improves the effectiveness of flow
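In its simplest single-step form, the first-order differentiation behind such a vorticity map is a central-difference stencil, omega = dv/dx - du/dy, applied to a gridded velocity field. The sketch below checks it on a solid-body rotation, whose vorticity is uniform; the grid and test field are assumptions for illustration, not the paper's multistep scheme.

```python
def vorticity(u, v, dx, dy):
    """Central-difference vorticity omega = dv/dx - du/dy on the interior
    of a 2-D grid; u[j][i] and v[j][i] are velocity components at (x_i, y_j).
    Boundary cells are left at zero for simplicity."""
    ny, nx = len(u), len(u[0])
    omega = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dv_dx = (v[j][i + 1] - v[j][i - 1]) / (2 * dx)
            du_dy = (u[j + 1][i] - u[j - 1][i]) / (2 * dy)
            omega[j][i] = dv_dx - du_dy
    return omega

# Solid-body rotation u = -y, v = x has uniform vorticity 2.
n, h = 21, 0.1
ys = [(j - n // 2) * h for j in range(n)]
xs = [(i - n // 2) * h for i in range(n)]
u = [[-yj for _ in xs] for yj in ys]
v = [[xi for xi in xs] for _ in ys]
omega = vorticity(u, v, h, h)
```

Central differences are exact for this linear field; the paper's concern is how noise added to u and v, and the spacing of the sampled vectors, degrade such a map.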

8. Optimal sample sizes for Welch's test under various allocation and cost considerations.

Science.gov (United States)

Jan, Show-Li; Shieh, Gwowen

2011-12-01

The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determinations in the context of Welch's (Biometrika, 29, 350-362, 1938) test of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power performance for a fixed cost and (2) to meet a designated power level for the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
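The allocation question above can be illustrated with a normal-approximation power calculation for Welch's test: for a fixed total n (and equal per-subject costs), power is maximised when group sizes are proportional to the group standard deviations. This is a hedged sketch, not the article's exact technique, and all numbers are assumed.

```python
from math import sqrt
from statistics import NormalDist

def welch_power(n1, n2, delta, s1, s2, alpha=0.05):
    """Normal-approximation power of Welch's two-sample test for a true
    mean difference delta, with group SDs s1 and s2 (two-sided test)."""
    nd = NormalDist()
    se = sqrt(s1**2 / n1 + s2**2 / n2)
    z_a = nd.inv_cdf(1 - alpha / 2)
    return 1 - nd.cdf(z_a - delta / se)

# Fixed total of 100 subjects, SD ratio 1:2 -> optimal split is ~1:2.
equal = welch_power(50, 50, 0.8, 1.0, 2.0)
optimal = welch_power(33, 67, 0.8, 1.0, 2.0)
```

Unequal costs tilt the optimum further: the exact treatment in the article replaces the normal approximation with the noncentral t distribution and folds per-group costs into the objective.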

9. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

Science.gov (United States)

Fienen, Michael N.; Selbig, William R.

2012-01-01

A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

10. SMALL SAMPLE SIZE IN 2X2 CROSS OVER DESIGNS: CONDITIONS OF DETERMINATION

Directory of Open Access Journals (Sweden)

B SOLEYMANI

2001-09-01

Full Text Available Introduction. Determination of a small sample size in some clinical trials is a matter of importance. In cross-over studies, which are one type of clinical trial, the matter is even more significant. In this article, the conditions under which determination of a small sample size in cross-over studies is possible were considered, and the effect of deviation from normality on the matter is shown. Methods. The present study considers 2x2 cross-over studies in which the variable of interest is quantitative and measurable on a ratio or interval scale. The approach is based on the distributions of the variable and of the sample mean, the central limit theorem, the method of sample size determination in two groups, and the cumulant or moment generating function. Results. For normal variables, or variables transformable to normal, there are no restricting factors other than the significance level and power of the test for determining sample size; for non-normal variables, however, the sample size should be made large enough to guarantee the normality of the sample mean's distribution. Discussion. In cross-over studies in which, because of an existing theoretical basis, a small sample size can be computed, one should not proceed without taking the applied worth of the results into consideration. When determining sample size, in addition to the variance it is necessary to consider the distribution of the variable, particularly through its skewness and kurtosis coefficients: the greater the deviation from normality, the larger the required sample. Since in medical studies most continuous variables are close to normally distributed, a small number of samples often seems adequate for convergence of the sample mean to a normal distribution.

11. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

Science.gov (United States)

Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

2015-01-01

The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

12. On tests of treatment-covariate interactions: An illustration of appropriate power and sample size calculations

National Research Council Canada - National Science Library

Gwowen Shieh

2017-01-01

.... Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA...

13. Constrained statistical inference: sample-size tables for ANOVA and regression

Directory of Open Access Journals (Sweden)

Leonard eVanbrabant

2015-01-01

Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and, hence, a smaller sample size is needed. This article discusses this reduction in sample size as an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

14. Calculating an optimal box size for ligand docking and virtual screening against experimental and predicted binding pockets.

Science.gov (United States)

Feinstein, Wei P; Brylinski, Michal

2015-01-01

Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of the search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1 % (10 %) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. This fully automated procedure can be used to optimize docking protocols in order to
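The reported rule of thumb (box edge roughly 2.9 times the ligand's radius of gyration) can be illustrated in a few lines. The sketch below uses an unweighted radius of gyration over atom coordinates; the study's own procedure may weight atoms by mass and differs in detail.

```python
from math import sqrt

def radius_of_gyration(coords):
    """Unweighted radius of gyration of a list of 3-D atom coordinates."""
    n = len(coords)
    centroid = [sum(p[i] for p in coords) / n for i in range(3)]
    msd = sum(sum((p[i] - centroid[i]) ** 2 for i in range(3))
              for p in coords) / n
    return sqrt(msd)

def docking_box_edge(coords, scale=2.9):
    """Cubic search-box edge following the reported optimum (~2.9 x Rg)."""
    return scale * radius_of_gyration(coords)
```

For a symmetric four-atom arrangement at unit distance from the centroid, Rg is 1.0 and the suggested box edge is 2.9 (in the same length units as the coordinates).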

15. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

Energy Technology Data Exchange (ETDEWEB)

Park, Ik Kyu; Kim, D. H.; Lee, W. J

2007-03-15

The TRITGO code was developed for estimating the tritium production and distribution of a high temperature gas cooled reactor (HTGR), especially GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD were analyzed by using the TRITGO code. The TRITGO code was improved by a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on GTMHR600, because the NHDD has been designed with reference to GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfying result considering that the tritium activity limit of the Japanese Regulation Guide is 5.6 Bq/g-H2. The basic system for analyzing tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and in the suggested method for the IS loop, and the current input was prepared not for NHDD but for GTMHR600. The qualitative analysis of the distribution model and the IS-loop model, and the quantitative analysis of the input, should be done in the future.

16. [Explanation of samples sizes in current biomedical journals: an irrational requirement].

Science.gov (United States)

Silva Ayçaguer, Luis Carlos; Alonso Galbán, Patricia

2013-01-01

To discuss the theoretical relevance of current requirements for explanations of the sample sizes employed in published studies, and to assess the extent to which these requirements are currently met by authors and demanded by referees and editors. A literature review was conducted to gain insight into and critically discuss the possible rationale underlying the requirement of justifying sample sizes. A descriptive bibliometric study was then carried out based on the original studies published in the six journals with the highest impact factor in the field of health in 2009. All the arguments used to support the requirement of an explanation of sample sizes are feeble, and there are several reasons why they should not be endorsed. These instructions are neglected in most of the studies published in the current literature with the highest impact factor. In 56% (95%CI: 52-59) of the articles, the sample size used was not substantiated, and only 27% (95%CI: 23-30) met all the requirements contained in the guidelines adhered to by the journals studied. Based on this study, we conclude that there are no convincing arguments justifying the requirement for an explanation of how the sample size was reached in published articles. There is no sound basis for this requirement, which not only does not promote the transparency of research reports but rather contributes to undermining it. Copyright © 2011 SESPAS. Published by Elsevier Espana. All rights reserved.

17. Exploratory factor analysis with small sample sizes: a comparison of three approaches.

Science.gov (United States)

Jung, Sunho

2013-07-01

Exploratory factor analysis (EFA) has emerged in the field of animal behavior as a useful tool for determining and assessing latent behavioral constructs. Because the small sample size problem often occurs in this field, a traditional approach, unweighted least squares, has been considered the most feasible choice for EFA. Two new approaches were recently introduced in the statistical literature as viable alternatives to EFA when sample size is small: regularized exploratory factor analysis and generalized exploratory factor analysis. A simulation study is conducted to evaluate the relative performance of these three approaches in terms of factor recovery under various experimental conditions of sample size, degree of overdetermination, and level of communality. In this study, overdetermination and sample size are the meaningful conditions in differentiating the performance of the three approaches in factor recovery. Specifically, when there are a relatively large number of factors, regularized exploratory factor analysis tends to recover the correct factor structure better than the other two approaches. Conversely, when few factors are retained, unweighted least squares tends to recover the factor structure better. Finally, generalized exploratory factor analysis exhibits very poor performance in factor recovery compared to the other approaches. This tendency is particularly prominent as sample size increases. Thus, generalized exploratory factor analysis may not be a good alternative to EFA. Regularized exploratory factor analysis is recommended over unweighted least squares unless a small number of factors is expected. Copyright © 2013 Elsevier B.V. All rights reserved.

18. A simulation study provided sample size guidance for differential item functioning (DIF) studies using short scales.

Science.gov (United States)

Scott, Neil W; Fayers, Peter M; Aaronson, Neil K; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Gundy, Chad; Koller, Michael; Petersen, Morten A; Sprangers, Mirjam A G

2009-03-01

Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors including scale length affect analysis of DIF by ordinal logistic regression. Simulated data, representative of HRQoL scales with four-category items, were generated. The power and type I error rates of the DIF method were then investigated when, respectively, DIF was deliberately introduced and when no DIF was added. The sample size, scale length, floor effects (FEs) and significance level were varied. When there was no DIF, type I error rates were close to 5%. Detecting moderate uniform DIF in a two-item scale required a sample size of 300 per group for adequate (>80%) power. For longer scales, a sample size of 200 was adequate. Considerably larger sample sizes were required to detect nonuniform DIF, when there were extreme FEs or when a reduced type I error rate was required. The impact of the number of items in the scale was relatively small. Ordinal logistic regression successfully detects DIF for HRQoL instruments with short scales. Sample size guidelines are provided.

19. MRI derived brain atrophy in PSP and MSA-P. Determining sample size to detect treatment effects.

Science.gov (United States)

Paviour, Dominic C; Price, Shona L; Lees, Andrew J; Fox, Nick C

2007-04-01

Progressive supranuclear palsy (PSP) and multiple system atrophy (MSA) are associated with progressive brain atrophy. Serial MRI can be used to measure this change in brain volume and to calculate atrophy rates. We evaluated MRI-derived whole-brain and regional atrophy rates as potential markers of progression in PSP and the Parkinsonian variant of multiple system atrophy (MSA-P). 17 patients with PSP, 9 with MSA-P and 18 healthy controls underwent two MRI brain scans. MRI scans were registered, and brain and regional atrophy rates (midbrain, pons, cerebellum, third and lateral ventricles) were measured. Sample sizes required to detect the effect of a proposed disease-modifying treatment were estimated. The effect of scan interval on the variance of the atrophy rates and on sample size was assessed. Based on the calculated yearly rates of atrophy, for a drug effect equivalent to a 30% reduction in atrophy, fewer PSP subjects are required in each treatment arm when using midbrain rather than whole-brain atrophy rates (183 cf. 499). Fewer MSA-P subjects are required using pontine/cerebellar rather than whole-brain atrophy rates (164/129 cf. 794). A reduction in the variance of measured atrophy rates was observed with a longer scan interval. Regional rather than whole-brain atrophy rates calculated from volumetric serial MRI brain scans in PSP and MSA-P provide a more practical and powerful means of monitoring disease progression in clinical trials.

20. Monte Carlo approaches for determining power and sample size in low-prevalence applications.

Science.gov (United States)

Williams, Michael S; Ebel, Eric D; Wagner, Bruce A

2007-11-15

The prevalence of disease in many populations is often low. For example, the prevalences of tuberculosis, brucellosis, and bovine spongiform encephalopathy range from 1 per 100,000 to less than 1 per 1,000,000 in many countries. When an outbreak occurs, epidemiological investigations often require comparing the prevalence in an exposed population with that of an unexposed population. To determine whether the level of disease in the two populations is significantly different, the epidemiologist must consider the test to be used and the desired power of the test, and determine the appropriate sample size for both the exposed and unexposed populations. Commonly available software packages provide estimates of the required sample sizes for this application. This study shows that these estimated sample sizes can exceed the necessary number of samples by more than 35% when the prevalence is low. We provide a Monte Carlo-based solution and show that in low-prevalence applications this approach can lead to reductions in the total sample size of more than 10,000 samples.
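The Monte Carlo idea can be sketched minimally: simulate both groups at their assumed prevalences, apply the intended test, and take the rejection rate as the power estimate. The sketch below uses a pooled two-proportion z-test purely for illustration; the study's actual tests, prevalences, and sample sizes differ.

```python
import random
from math import sqrt
from statistics import NormalDist

def mc_power(n, p1, p2, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo power of a pooled two-proportion z-test, n per group."""
    rng = random.Random(seed)
    zcrit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        x1 = sum(rng.random() < p1 for _ in range(n))
        x2 = sum(rng.random() < p2 for _ in range(n))
        p = (x1 + x2) / (2 * n)
        if p in (0.0, 1.0):
            continue  # no variation in either group: cannot reject
        z = (x1 / n - x2 / n) / sqrt(p * (1 - p) * 2 / n)
        hits += abs(z) >= zcrit
    return hits / reps
```

To find the required n, one would run this over a grid of candidate sample sizes and pick the smallest n whose estimated power clears the target (e.g. 0.80), which is where the Monte Carlo approach can undercut the conservative closed-form estimates at low prevalence.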

1. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

Directory of Open Access Journals (Sweden)

Mark Heckmann

2017-01-01

Full Text Available The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.

2. Species-genetic diversity correlations in habitat fragmentation can be biased by small sample sizes.

Science.gov (United States)

Nazareno, Alison G; Jump, Alistair S

2012-06-01

Predicted parallel impacts of habitat fragmentation on genes and species lie at the core of conservation biology, yet tests of this rule are rare. In a recent article in Ecology Letters, Struebig et al. (2011) report that declining genetic diversity accompanies declining species diversity in tropical forest fragments. However, this study estimates diversity in many populations through extrapolation from very small sample sizes. Using the data of this recent work, we show that results estimated from the smallest sample sizes drive the species-genetic diversity correlation (SGDC), owing to a false-positive association between habitat fragmentation and loss of genetic diversity. Small sample sizes are a persistent problem in habitat fragmentation studies, the results of which often do not fit simple theoretical models. It is essential, therefore, that data assessing the proposed SGDC are sufficient in order that conclusions be robust.

3. Rapid Calculation Program of Certain Sizes used in design of Synchronous Generators

Directory of Open Access Journals (Sweden)

Elisabeta Spunei

2011-10-01

Full Text Available This paper presents a program, written in Mathcad, for the rapid determination of certain quantities required in the design of synchronous machines. During the design of electrical machines there are phases in which certain quantities must be extracted from tables as functions of other variables. This operation is tedious and sometimes difficult to do. To eliminate this problem, greatly shorten the time needed to determine these quantities, and ensure accurate values, we have designed a program that also allows interpolation between two known values. In this paper, the program is applied to quickly determine the value of the voltage form factor kB and the value of the ideal polar coverage coefficient αi of polar step τ.

4. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

Science.gov (United States)

Bergh, Daniel

2015-01-01

Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated by the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
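One common form of such a sample-size adjustment, assumed here for illustration (the paper's exact adjustment function may differ), exploits the fact that in structural models the test statistic is proportional to (n - 1) times the minimized fit function, so the misfit is held fixed while the multiplier is rescaled:

```python
def adjusted_chi_square(chi2, n, n_adj):
    """Rescale a model chi-square from sample size n to a target n_adj.

    Assumes chi2 = (n - 1) * F_min, so the misfit F_min is held constant
    while the sample-size multiplier shrinks. This is one common adjustment,
    not necessarily the function evaluated in the study.
    """
    return chi2 * (n_adj - 1) / (n - 1)
```

For example, rescaling from 21,000 cases to 5,000 shrinks the statistic by roughly a factor of four, whereas an actual random subsample of 5,000 would also re-draw the sampling error, which is the discrepancy the study examines.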

5. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

Science.gov (United States)

Hanel, Paul H P; Haase, Jennifer

2017-01-01

In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

6. Max control chart with adaptive sample sizes for jointly monitoring process mean and standard deviation

OpenAIRE

Ching Chun Huang

2014-01-01

This paper develops the two-state and three-state adaptive sample size control schemes based on the Max chart to simultaneously monitor the process mean and standard deviation. Since the Max chart is a single-variable control chart where only one plotting statistic is needed, the design and operation of adaptive sample size schemes for this chart will be simpler than those for the joint X̄ and S charts. Three types of processes including on-target initial, off-target initial and steady...

7. Bayesian sample size determination for cost-effectiveness studies with censored data.

Directory of Open Access Journals (Sweden)

Daniel P Beavers

Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

8. The influence of sampling unit size and spatial arrangement patterns on neighborhood-based spatial structure analyses of forest stands

Energy Technology Data Exchange (ETDEWEB)

Wang, H.; Zhang, G.; Hui, G.; Li, Y.; Hu, Y.; Zhao, Z.

2016-07-01

Aim of study: Neighborhood-based stand spatial structure parameters can quantify and characterize forest spatial structure effectively. How these neighborhood-based structure parameters are influenced by the selection of different numbers of nearest-neighbor trees is unclear, and there is some disagreement in the literature regarding the appropriate number of nearest-neighbor trees to sample around reference trees. Understanding how to efficiently characterize forest structure is critical for forest management. Area of study: Multi-species uneven-aged forests of Northern China. Material and methods: We simulated stands with different spatial structural characteristics and systematically compared their structure parameters when two to eight neighboring trees were selected. Main results: Results showed that values of uniform angle index calculated in the same stand were different with different sizes of structure unit. When tree species and sizes were completely randomly interspersed, different numbers of neighbors had little influence on mingling and dominance indices. Changes of mingling or dominance indices caused by different numbers of neighbors occurred when the tree species or size classes were not randomly interspersed and their changing characteristics can be detected according to the spatial arrangement patterns of tree species and sizes. Research highlights: The number of neighboring trees selected for analyzing stand spatial structure parameters should be fixed. We proposed that the four-tree structure unit is the best compromise between sampling accuracy and costs for practical forest management. (Author)
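A neighborhood-based parameter of the kind studied here, species mingling, can be sketched in a few lines: for each reference tree, take its k nearest neighbors and record the share belonging to a different species. This is a simplified illustration (edge correction omitted, brute-force neighbor search), not the authors' implementation.

```python
from math import dist  # Python 3.8+

def mean_mingling(points, species, k=4):
    """Mean species mingling of a stand.

    points  : list of (x, y) tree positions
    species : list of species labels, parallel to points
    k       : number of nearest neighbors in the structure unit
    Returns the average share of each tree's k nearest neighbors
    that belong to a different species (0 = fully segregated,
    1 = fully mixed).
    """
    m = []
    for i, p in enumerate(points):
        nbrs = sorted((j for j in range(len(points)) if j != i),
                      key=lambda j: dist(p, points[j]))[:k]
        m.append(sum(species[j] != species[i] for j in nbrs) / k)
    return sum(m) / len(m)
```

Running the same function with k between 2 and 8 on simulated stands is essentially the comparison the study performs; the four-neighbor structure unit it recommends corresponds to k=4 here.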

9. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

Science.gov (United States)

Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

2014-12-19

In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported their results using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
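Estimators in this line of work are simple to apply. The sketch below implements two commonly cited Wan-type formulas, one for the min/median/max scenario and one for the quartile scenario, assuming approximately normal data; the paper's summary table is the authoritative source for the exact formulas and scenario definitions.

```python
from statistics import NormalDist

def mean_sd_from_range(a, m, b, n):
    """Estimate (mean, sd) from minimum a, median m, maximum b, size n.

    Uses mean ~ (a + 2m + b)/4 and a range-based sd with a normal
    quantile correction for n, as commonly attributed to Wan et al.
    """
    z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return (a + 2 * m + b) / 4, (b - a) / (2 * z)

def mean_sd_from_iqr(q1, m, q3, n):
    """Estimate (mean, sd) from quartiles q1, q3, median m, size n."""
    z = NormalDist().inv_cdf((0.75 * n - 0.125) / (n + 0.25))
    return (q1 + m + q3) / 3, (q3 - q1) / (2 * z)
```

For large n, the range-based denominator grows with the sample size, reflecting that the observed range of a larger sample spans more of the distribution, which is exactly the limitation of the earlier fixed-divisor rules that this approach addresses.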

10. Sample size bounding and context ranking as approaches to the HRA data problem

Energy Technology Data Exchange (ETDEWEB)

Reer, Bernhard

2004-02-01

This paper presents a technique denoted sub-sample-size bounding (SSSB), usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasized in the presentation of the technique. Exemplified by a sample of 180 abnormal event sequences, it is outlined how SSSB can provide viable input for the quantification of errors of commission (EOCs)

11. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

Energy Technology Data Exchange (ETDEWEB)

Reer, B

2004-03-01

The paper describes a technique denoted Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

12. SAMPLE SIZE DETERMINATION IN NON-RADOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA

OpenAIRE

2003-01-01

Introduction: In survival analysis, determination of a sufficient sample size to achieve suitable statistical power is important. In both parametric and non-parametric methods of classical statistics, random selection of samples is a basic condition. Practically, in most clinical trials and health surveys, random allocation is impossible. Fixed-effect multiple linear regression analysis covers this need, and this feature could be extended to survival regression analysis. This paper is the resul...

13. An Airplane Calculator Featuring a High- Fidelity Methodology for Tailplane Sizing

OpenAIRE

Mattos, Bento Silva de; Secco, Ney Rafael

2013-01-01

ABSTRACT: The present work is concerned with the accurate modeling of transport airplanes. This is of primary importance for reducing aircraft development risks, and because multi-disciplinary design and optimization (MDO) frameworks require accurate airplane modeling to carry out realistic optimization tasks. However, most of them still make use of the tail-volume coefficient approach for sizing horizontal and vertical tail areas. The tail-volume coefficient method is based on historical aircraf...

14. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

Science.gov (United States)

McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

2010-01-01

This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

15. B-graph sampling to estimate the size of a hidden population

NARCIS (Netherlands)

Spreen, M.; Bogaerts, S.

2015-01-01

Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

16. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

Science.gov (United States)

Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

2000-01-01

Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

17. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

Science.gov (United States)

Cook, David A.; Hatala, Rose

2015-01-01

Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

18. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

Science.gov (United States)

Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

2014-01-01

Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

19. Precise confidence intervals of regression-based reference limits: Method comparisons and sample size requirements.

Science.gov (United States)

Shieh, Gwowen

2017-12-01

Covariate-dependent reference limits have been extensively applied in biology and medicine for determining the substantial magnitude and relative importance of quantitative measurements. Confidence interval and sample size procedures are available for studying regression-based reference limits. However, the existing popular methods employ different technical simplifications and are applicable only in certain limited situations. This paper describes exact confidence intervals of regression-based reference limits and compares the exact approach with the approximate methods under a wide range of model configurations. Using the ratio between the widths of confidence interval and reference interval as the relative precision index, optimal sample size procedures are presented for precise interval estimation under expected ratio and tolerance probability considerations. Simulation results show that the approximate interval methods using normal distribution have inaccurate confidence limits. The exact confidence intervals dominate the approximate procedures in one- and two-sided coverage performance. Unlike the current simplifications, the proposed sample size procedures integrate all key factors including covariate features in the optimization process and are suitable for various regression-based reference limit studies with potentially diverse configurations. The exact interval estimation has theoretical and practical advantages over the approximate methods. The corresponding sample size procedures and computing algorithms are also presented to facilitate the data analysis and research design of regression-based reference limits. Copyright © 2017 Elsevier Ltd. All rights reserved.

20. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

Science.gov (United States)

Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

2017-01-01

Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

1. Influence of tree spatial pattern and sample plot type and size on inventory

Science.gov (United States)

John-Pascall Berrill; Kevin L. O'Hara

2012-01-01

Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

2. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

Czech Academy of Sciences Publication Activity Database

2015-01-01

Vol. 49, No. 4 (2015), p. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords: filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.953, year: 2015

3. Estimating the Size of a Large Network and its Communities from a Random Sample.

Science.gov (United States)

Chen, Lin; Karbasi, Amin; Crawford, Forrest W

2016-01-01

Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
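As a rough illustration of the sampling setup described above (a uniform vertex sample plus each sampled vertex's observed total degree), a naive moment-based estimator can be sketched. This is NOT the PULSE algorithm, just the simplest estimate the same partial information supports; the graph parameters below are arbitrary.

```python
import random

# Hypothetical sketch: a naive moment-based population-size estimate from an
# induced subgraph -- NOT the PULSE algorithm. For a uniform random sample W
# of size n, a sampled vertex's expected induced degree is its total degree
# times (n - 1) / (N - 1); solving for N gives the estimator below.
random.seed(7)

N, p, n = 1000, 0.01, 250                      # true size, edge prob., sample size
edges = [(i, j) for i in range(N) for j in range(i + 1, N) if random.random() < p]

deg = [0] * N                                  # total degree of every vertex
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = set(random.sample(range(N), n))            # uniform vertex sample
total_deg = sum(deg[v] for v in W)             # observed total degrees of W
induced_deg = 2 * sum(1 for i, j in edges if i in W and j in W)  # degrees inside G(W)

N_hat = 1 + (n - 1) * total_deg / induced_deg
print(round(N_hat))                            # should land near the true N = 1000
```

Unlike this sketch, PULSE also exploits block memberships to estimate each community's size, which is where its reported accuracy gains come from.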

4. [Metacarpophalangeal and carpal numeric indices to calculate bone age and predict adult size].

Science.gov (United States)

Ebrí Torné, B; Ebrí Verde, I

2012-04-01

5. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

Directory of Open Access Journals (Sweden)

John M Lachin

Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to
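The transform-then-calculate workflow described above can be sketched with the standard two-sample normal-approximation formula, applied on the transformed scale. This is a generic illustration, not the TrialNet authors' exact procedure; the AUC values and parameters below are made up.

```python
from math import ceil, log
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Classic two-sample normal-approximation sample size per group for
    detecting a mean difference `delta` on the (transformed) scale."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

# Transform skewed C-peptide AUC values first (e.g. log(x + 1)), then
# estimate sd and the treatment difference delta on that scale.
auc = [0.8, 1.5, 2.2, 0.4]                 # invented example values
transformed = [log(x + 1) for x in auc]

print(n_per_group(sd=1.0, delta=0.5))      # -> 63
```

With the usual z-values (1.96 and 0.84), sd = 1 and delta = 0.5 give 63 subjects per group, which matches hand calculation of 2(1.96 + 0.84)²·σ²/δ².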

6. Percolating macropore networks in tilled topsoil: effects of sample size, minimum pore thickness and soil type

Science.gov (United States)

Jarvis, Nicholas; Larsbo, Mats; Koestel, John; Keck, Hannes

2017-04-01

The long-range connectivity of macropore networks may exert a strong control on near-saturated and saturated hydraulic conductivity and the occurrence of preferential flow through soil. It has been suggested that percolation concepts may provide a suitable theoretical framework to characterize and quantify macropore connectivity, although this idea has not yet been thoroughly investigated. We tested the applicability of percolation concepts to describe macropore networks quantified by X-ray scanning at a resolution of 0.24 mm in eighteen cylinders (20 cm diameter and height) sampled from the ploughed layer of four soils of contrasting texture in east-central Sweden. The analyses were performed for sample sizes ("regions of interest", ROI) varying between 3 and 12 cm in cube side-length and for minimum pore thicknesses ranging between image resolution and 1 mm. Finite sample size effects were clearly found for ROIs of cube side-length smaller than ca. 6 cm. For larger sample sizes, the results showed the relevance of percolation concepts to soil macropore networks, with a close relationship found between imaged porosity and the fraction of the pore space which percolated (i.e. was connected from top to bottom of the ROI). The percolating fraction increased rapidly as a function of porosity above a small percolation threshold (1-4%). This reflects the ordered nature of the pore networks. The percolation relationships were similar for all four soils. Although pores larger than 1 mm appeared to be somewhat better connected, only small effects of minimum pore thickness were noted across the range of tested pore sizes. The utility of percolation concepts to describe the connectivity of more anisotropic macropore networks (e.g. in subsoil horizons) should also be tested, although with current X-ray scanning equipment it may prove difficult in many cases to analyze sufficiently large samples that would avoid finite size effects.
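The "percolating fraction" used above (the share of imaged pore space connected from the top to the bottom of the ROI) can be computed on a binary voxel image with two flood fills. A minimal sketch, not the authors' image-analysis pipeline; the toy image is invented.

```python
from collections import deque

# Illustrative sketch: the percolating fraction of a binary 3-D pore image
# (1 = pore voxel) is the share of pore voxels in clusters that connect the
# top face (z = 0) to the bottom face (z = nz - 1), under 6-face connectivity.
def _reachable(img, seeds):
    nz, ny, nx = len(img), len(img[0]), len(img[0][0])
    seen, queue = set(seeds), deque(seeds)
    while queue:                                   # breadth-first flood fill
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            z2, y2, x2 = z + dz, y + dy, x + dx
            if (0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx
                    and img[z2][y2][x2] and (z2, y2, x2) not in seen):
                seen.add((z2, y2, x2))
                queue.append((z2, y2, x2))
    return seen

def percolating_fraction(img):
    nz = len(img)
    pores = [(z, y, x) for z in range(nz) for y in range(len(img[0]))
             for x in range(len(img[0][0])) if img[z][y][x]]
    top = _reachable(img, [p for p in pores if p[0] == 0])
    bottom = _reachable(img, [p for p in pores if p[0] == nz - 1])
    # voxels reachable from both faces belong to percolating clusters
    return len(top & bottom) / len(pores) if pores else 0.0

# 2x2x2 toy image: one vertical pore percolates, one isolated voxel does not.
img = [[[1, 0], [0, 1]],
       [[1, 0], [0, 0]]]
print(percolating_fraction(img))  # 2 of the 3 pore voxels percolate -> 0.666...
```

On real tomography data one would use a labelled-components routine (e.g. `scipy.ndimage.label`) for speed, but the definition is exactly this intersection of top- and bottom-connected pore space.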

7. The impact of sample size and marker selection on the study of haplotype structures

Directory of Open Access Journals (Sweden)

Sun Xiao

2004-03-01

Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

8. Sample size determination for assessing equivalence based on proportion ratio under a randomized trial with non-compliance and missing outcomes.

Science.gov (United States)

Lui, Kung-Jong; Chang, Kuang-Chao

2008-01-15

When a generic drug is developed, it is important to assess the equivalence of therapeutic efficacy between the new and the standard drugs. Although the number of publications on testing equivalence and its relevant sample size determination is numerous, the discussion on sample size determination for a desired power of detecting equivalence under a randomized clinical trial (RCT) with non-compliance and missing outcomes is limited. In this paper, we derive under the compound exclusion restriction model the maximum likelihood estimator (MLE) for the ratio of probabilities of response among compliers between two treatments in a RCT with both non-compliance and missing outcomes. Using the MLE with the logarithmic transformation, we develop an asymptotic test procedure for assessing equivalence and find that this test procedure can perform well with respect to type I error based on Monte Carlo simulation. We further develop a sample size calculation formula for a desired power of detecting equivalence at a nominal alpha-level. To evaluate the accuracy of the sample size calculation formula, we apply Monte Carlo simulation again to calculate the simulated power of the proposed test procedure corresponding to the resulting sample size for a desired power of 80 per cent at the 0.05 level in a variety of situations. We also include a discussion on determining the optimal ratio of sample size allocation subject to a desired power to minimize a linear cost function and provide a sensitivity analysis of the sample size formula developed here under an alternative model with data missing at random. Copyright (c) 2007 John Wiley & Sons, Ltd.
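The flavour of such a calculation can be sketched with the standard TOST-style formula for equivalence of a ratio assessed on the log scale. This is a generic textbook approximation, not the paper's compound-exclusion-restriction formula, and it ignores non-compliance and missingness; the parameter values are illustrative.

```python
from math import ceil, log
from statistics import NormalDist

# Generic sketch: sample size per arm for two one-sided tests (TOST) of
# equivalence of a ratio on the log scale, assuming the true ratio is 1 and
# a symmetric equivalence margin `theta` (e.g. 1.25). Not the paper's formula.
def n_equivalence(cv, theta=1.25, alpha=0.05, power=0.80):
    sigma2 = log(1 + cv ** 2)                    # log-scale variance from the CV
    z_a = NormalDist().inv_cdf(1 - alpha)        # one-sided alpha for each test
    z_b = NormalDist().inv_cdf(1 - (1 - power) / 2)
    return ceil(2 * (z_a + z_b) ** 2 * sigma2 / log(theta) ** 2)

print(n_equivalence(cv=0.30))  # -> 30 per arm for CV = 30%, margin 0.80-1.25
```

Non-compliance and missing outcomes, as the paper shows, inflate this figure, which is precisely why the authors derive a dedicated formula rather than using this textbook one.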

9. Performance of a reciprocal shaker in mechanical dispersion of soil samples for particle-size analysis

Directory of Open Access Journals (Sweden)

2012-08-01

Full Text Available The dispersion of the samples in soil particle-size analysis is a fundamental step, which is commonly achieved with a combination of chemical agents and mechanical agitation. The purpose of this study was to evaluate the efficiency of a low-speed reciprocal shaker for the mechanical dispersion of soil samples of different textural classes. The particle size of 61 soil samples was analyzed in four replications, using the pipette method to determine the clay fraction and sieving to determine coarse, fine and total sand fractions. The silt content was obtained by difference. To evaluate the performance, the results of the reciprocal shaker (RSh) were compared with data of the same soil samples available in reports of the Proficiency testing for Soil Analysis Laboratories of the Agronomic Institute of Campinas (Prolab/IAC). The accuracy was analyzed based on the maximum and minimum values defining the confidence intervals for the particle-size fractions of each soil sample. Graphical indicators were also used for data comparison, based on dispersion and linear adjustment. The descriptive statistics indicated predominantly low variability in more than 90 % of the results for sand, medium-textured and clay samples, and for 68 % of the results for heavy clay samples, indicating satisfactory repeatability of measurements with the RSh. Medium variability was frequently associated with silt, followed by the fine sand fraction. The sensitivity analyses indicated an accuracy of 100 % for the three main separates (total sand, silt and clay) in all 52 samples of the textural classes heavy clay, clay and medium. For the nine sand soil samples, the average accuracy was 85.2 %; highest deviations were observed for the silt fraction. In relation to the linear adjustments, the correlation coefficients of 0.93 (silt) or > 0.93 (total sand and clay), as well as the differences between the angular coefficients and the unit < 0.16, indicated a high correlation between the

10. B-Graph Sampling to Estimate the Size of a Hidden Population

Directory of Open Access Journals (Sweden)

Spreen Marinus

2015-12-01

Full Text Available Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is respondent-driven sampling, in which no sampling frame is used. However, in some studies multiple but incomplete sampling frames are available. In this article, we introduce the B-graph design that can be used in such situations. In this design, all available incomplete sampling frames are joined and turned into one sampling frame, from which a random sample is drawn and selected respondents are asked to mention their contacts. By considering the population as a bipartite graph of a two-mode network (those from the sampling frame and those who are not on the frame), the number of respondents who are directly linked to the sampling frame members can be estimated using Chao’s and Zelterman’s estimators for sparse data. The B-graph sampling design is illustrated using the data of a social network study from Utrecht, the Netherlands.
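Chao's estimator for sparse data, mentioned above, can be sketched as follows. This uses the bias-corrected Chao1-type form; the toy capture counts are invented for illustration.

```python
# Hedged sketch of the Chao-type lower-bound estimator for sparse capture
# data: with f1 singletons (individuals named exactly once) and f2 doubletons,
# the unseen portion of the population is estimated from f1 and f2.
def chao_estimate(counts):
    """counts: number of times each distinct observed individual was mentioned."""
    observed = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    # bias-corrected form; also avoids division by zero when f2 = 0
    return observed + f1 * (f1 - 1) / (2 * (f2 + 1))

print(chao_estimate([1, 1, 1, 1, 2, 2, 3]))  # 7 observed + 4*3/(2*3) -> 9.0
```

The estimate is a lower bound on population size: it only corrects for individuals "just missed", which is why such estimators are recommended specifically for sparse-data settings like the B-graph design.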

11. Planosol soil sample size for computerized tomography measurement of physical parameters

Directory of Open Access Journals (Sweden)

Pedrotti Alceu

2003-01-01

Full Text Available Computerized tomography (CT) is an important tool in Soil Science for noninvasive measurement of density and water content of soil samples. This work aims to describe the aspects of sample size adequacy for Planosol (Albaqualf) and to evaluate procedures for statistical analysis, using a CT scanner with a 241Am source. Density errors attributed to the equipment are 0.051 and 0.046 Mg m-3 for horizons A and B, respectively. The theoretical value for sample thickness for the Planosol, using this equipment, is 4.0 cm for the horizons A and B. The ideal thickness of samples is approximately 6.0 cm, being smaller for samples of the horizon B in relation to A. Alternatives for the improvement of the efficiency analysis and the reliability of the results obtained by CT are also discussed, and indicate good precision and adaptability of the application of this technology in Planosol (Albaqualf) studies.

12. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

Energy Technology Data Exchange (ETDEWEB)

Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

2014-01-01

The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

13. Evaluation of an enoxaparin dosing calculator using burn size and weight.

Science.gov (United States)

Faraklas, Iris; Ghanem, Maureen; Brown, Amalia; Cochran, Amalia

2013-01-01

Previous research has shown that inadequate antifactor Xa levels (anti-Xa) occur in burn patients and may increase the risk of venous thromboembolic events (VTE). The objective of this retrospective review was to investigate the usefulness of an enoxaparin dosing algorithm using a previously published equation. With institutional review board approval, all acute burn patients at an American Burn Association-verified regional burn center who were treated with enoxaparin for VTE prophylaxis and had at least one anti-Xa from May 1, 2011 to December 15, 2012 were included. Patients with subprophylactic anti-Xa received increased enoxaparin dose per unit protocol with the goal of obtaining a prophylactic anti-Xa (0.2-0.4 U/ml). Sixty-four patients were included in our analysis. The regression equation was used in 33 patients for initial enoxaparin dosing (Eq) whereas 31 patients received traditionally recommended prophylaxis dosing (No-Eq). Groups were comparable in sex, age, weight, inhalation injury, and burn size. Initial enoxaparin dosing in Eq was significantly more likely to reach target than in No-Eq (73 vs 32%; P = .002). No episodes of hemorrhage, thrombocytopenia, or heparin sensitivity were documented in either group. Median final enoxaparin dose required to reach prophylactic level was 40 mg every 12 hours (range, 30-80 mg). Twenty-one No-Eq patients ultimately reached target, and 11 of these final doses were equivalent to or greater than the predicted equation. Ten patients never reached prophylactic anti-Xa before enoxaparin was discontinued (nine from No-Eq). Two patients, one from each group, developed VTE complications despite appropriate anti-Xa for prophylaxis. A strong correlation was shown between weight, burn size, and enoxaparin dose (r = .68; P < …). … injury are highly variable. This simple equation improves enoxaparin dosing for acute adult burn patients.

14. The role of the upper sample size limit in two-stage bioequivalence designs.

Science.gov (United States)

Karalis, Vangelis

2013-11-01

Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD design where sample size re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, different starting number of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the % probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD which are more pronounced for highly variable drugs. Copyright © 2013 Elsevier B.V. All rights reserved.

15. A simple method for estimating genetic diversity in large populations from finite sample sizes

Directory of Open Access Journals (Sweden)

Rajora Om P

2009-12-01

Full Text Available Abstract Background Sample size is one of the critical factors affecting the accuracy of the estimation of population genetic diversity parameters. Small sample sizes often lead to significant errors in determining the allelic richness, which is one of the most important and commonly used estimators of genetic diversity in populations. Correct estimation of allelic richness in natural populations is challenging since they often do not conform to model assumptions. Here, we introduce a simple and robust approach to estimate the genetic diversity in large natural populations based on the empirical data for finite sample sizes. Results We developed a non-linear regression model to infer genetic diversity estimates in large natural populations from finite sample sizes. The allelic richness values predicted by our model were in good agreement with those observed in the simulated data sets and the true allelic richness observed in the source populations. The model has been validated using simulated population genetic data sets with different evolutionary scenarios implied in the simulated populations, as well as large microsatellite and allozyme experimental data sets for four conifer species with contrasting patterns of inherent genetic diversity and mating systems. Our model was a better predictor for allelic richness in natural populations than the widely-used Ewens sampling formula, coalescent approach, and rarefaction algorithm. Conclusions Our regression model was capable of accurately estimating allelic richness in natural populations regardless of the species and marker system. This regression modeling approach is free from assumptions and can be widely used for population genetic and conservation applications.
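The rarefaction baseline that the regression model is compared against can be sketched with the standard hypergeometric formula for expected allelic richness in a subsample of g gene copies drawn without replacement. The counts below are illustrative, not the paper's data.

```python
from math import comb

# Standard rarefaction of allelic richness (the baseline approach mentioned
# above, not the authors' regression model): the expected number of distinct
# alleles among g gene copies drawn without replacement from N observed
# copies, where allele i was observed n_i times.
def rarefied_allelic_richness(allele_counts, g):
    N = sum(allele_counts)
    # P(allele i appears in the subsample) = 1 - C(N - n_i, g) / C(N, g);
    # math.comb returns 0 when g > N - n_i, which is exactly what is needed.
    return sum(1 - comb(N - n_i, g) / comb(N, g) for n_i in allele_counts)

# 3 alleles seen 6, 3 and 1 times among N = 10 gene copies, rarefied to g = 5
print(round(rarefied_allelic_richness([6, 3, 1], 5), 3))  # -> 2.417
```

Rarefaction can only interpolate down to smaller sample sizes; the paper's regression approach instead extrapolates richness upward to the large source population, which is why the two are compared.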

16. Appraising current methods for preclinical calculation of burn size - A pre-hospital perspective.

Science.gov (United States)

Thom, David

2017-02-01

Calculation of the percentage of total body surface area burnt is a vital tool in the assessment and management of patients sustaining burns, guiding both treatment and management protocols. Currently there is debate as to which method of estimation is the most appropriate for pre-hospital use. A literature review was undertaken to appraise current literature and determine the most appropriate methods for the pre-hospital setting. The review utilised MEDLINE and structured hand searching of Science Direct, OpenAthens, COCHRANE and Google Scholar. Fourteen studies comparing various methods were identified for review. The palm including digits was identified to represent 0.8% of total body surface area, with the palm excluding digits representing 0.5%. Wallace's Rule of Nines was found to be an appropriate method of estimation. Variation in accuracy is attributable to expertise, experience and patient body type; however, current technology and smartphone applications are attempting to counter this. The palm-including-digits count multiplied by 0.8 is suitable for assessing minor (<10%) burns, whereas for larger burns Wallace's Rule of Nines is advocated. Further development of technology suggests computerised applications will become more commonplace. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
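The two recommended estimates can be sketched directly. The 0.8%/0.5% palm values come from the review above; the region percentages for Wallace's Rule of Nines follow the usual adult convention and are assumptions here, not values taken from the review.

```python
# Usual adult Rule of Nines regions (assumed convention, not from the review)
RULE_OF_NINES = {
    "head": 9.0, "anterior_torso": 18.0, "posterior_torso": 18.0,
    "left_arm": 9.0, "right_arm": 9.0,
    "left_leg": 18.0, "right_leg": 18.0, "perineum": 1.0,
}

def tbsa_rule_of_nines(burnt_regions):
    """%TBSA for larger burns: sum the standard adult region values."""
    return sum(RULE_OF_NINES[r] for r in burnt_regions)

def tbsa_palmar(palm_count, include_digits=True):
    """%TBSA for minor (<10%) burns: number of patient palm areas covered,
    at 0.8% per palm-with-digits (0.5% excluding digits)."""
    return palm_count * (0.8 if include_digits else 0.5)

print(tbsa_rule_of_nines(["head", "left_arm"]))  # -> 18.0
print(tbsa_palmar(6))                            # -> 4.8, i.e. a minor burn
```

Real tools add paediatric variants and fractional regions (the review's smartphone applications do exactly that); this sketch only encodes the two adult rules the review advocates.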

17. Calculation of upper confidence bounds on not-sampled vegetation types using a systematic grid sample: An application to map unit definition for existing vegetation maps

Science.gov (United States)

Paul L. Patterson; Mark Finco

2009-01-01

This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...
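The Bernoulli reduction lends itself to a one-line bound: if a vegetation type is absent from all n sampled plots, the exact one-sided upper confidence bound on its proportion p solves (1 − p)^n = α. A sketch under the (assumed) independent-trials simplification, not the paper's full treatment of the systematic grid:

```python
# Sketch of the Bernoulli reduction: a type absent from all n plots gets an
# exact one-sided upper confidence bound on its areal proportion p from
# (1 - p)^n = alpha, assuming plots behave as independent Bernoulli trials.
def upper_bound_zero_hits(n, alpha=0.05):
    return 1 - alpha ** (1 / n)

print(round(upper_bound_zero_hits(300), 4))  # -> 0.0099 for n = 300 plots
```

For alpha = 0.05 this is close to the familiar "rule of three" (3/n ≈ 0.01 at n = 300), which is a convenient sanity check on the exact bound.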

18. Peer groups splitting in Croatian EQA scheme: a trade-off between homogeneity and sample size number.

Science.gov (United States)

2017-03-01

Laboratory evaluation through external quality assessment (EQA) schemes is often performed as 'peer group' comparison, under the assumption that matrix effects influence the comparisons between results of different methods for analytes where no commutable materials with reference value assignment are available. With EQA schemes that are not large but have many available instruments and reagent options for the same analyte, homogeneous peer groups must be created with an adequate number of results to enable satisfactory statistical evaluation. We proposed a multivariate analysis of variance (MANOVA)-based test to evaluate the heterogeneity of peer groups within the Croatian EQA biochemistry scheme and to identify groups where further splitting might improve laboratory evaluation. EQA biochemistry results were divided according to the instruments used per analyte, and the MANOVA test was used to verify statistically significant differences between subgroups. The number of samples was determined by sample size calculation ensuring a power of 90% and allowing the false flagging rate to increase by not more than 5%. When statistically significant differences between subgroups were found, clear improvement of laboratory evaluation was assessed before splitting groups. After evaluating 29 peer groups, we found strong evidence for further splitting of six groups. Overall, improvement was observed for 6% of reported results, with the percentage being as high as 27.4% for one particular method. Defining maximal allowable differences between subgroups based on flagging rate change, followed by sample size planning and MANOVA, identifies heterogeneous peer groups where further splitting improves laboratory evaluation and enables continuous monitoring for peer group heterogeneity within EQA schemes.

19. Evaluating the performance of species richness estimators: sensitivity to sample grain size

DEFF Research Database (Denmark)

Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

2006-01-01

Fifteen species richness estimators (three asymptotic, based on species accumulation curves; 11 nonparametric; and one based on the species-area relationship) were compared by examining their performance in estimating the total species richness of epigean arthropods in the Azorean Laurisilva forests... different sampling units on species richness estimations. 2. Estimated species richness scores depended both on the estimator considered and on the grain size used to aggregate data. However, several estimators (ACE, Chao1, Jackknife1 and 2 and Bootstrap) were precise in spite of grain variations. Weibull... scores in a number of estimators (the above-mentioned plus ICE, Chao2, Michaelis-Menten, Negative Exponential and Clench). The estimations from those four sample sizes were also highly correlated. 4. Contrary to other studies, we conclude that most species richness estimators may be useful...

20. Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test

Science.gov (United States)

Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke

Polymer insulators have been used worldwide because of several superior properties (light weight, high mechanical strength, good hydrophobicity, etc.) compared with porcelain insulators. In this paper, the effect of sample size on aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz AD board or a 100 MHz digital oscilloscope and separated into three components (conductive current, corona discharge current and dry-band arc discharge current) using FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. As a result, when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and arc length of the dry-band arc discharge increased remarkably with increasing gap length.

1. Decision rules and associated sample size planning for regional approval utilizing multiregional clinical trials.

Science.gov (United States)

Chen, Xiaoyuan; Lu, Nelson; Nair, Rajesh; Xu, Yunling; Kang, Cailian; Huang, Qin; Li, Ning; Chen, Hongzhuan

2012-09-01

Multiregional clinical trials provide the potential to make safe and effective medical products simultaneously available to patients globally. As regulatory decisions are always made in a local context, this poses huge regulatory challenges. In this article we propose two conditional decision rules that can be used for medical product approval by local regulatory agencies based on the results of a multiregional clinical trial. We also illustrate sample size planning for such trials.

2. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

OpenAIRE

Mark Heckmann; Lukas Burk

2017-01-01

The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...

3. Epidemiological Studies Based on Small Sample Sizes – A Statistician's Point of View

OpenAIRE

Ersbøll Annette; Ersbøll Bjarne

2003-01-01

We consider 3 basic steps in a study, which have relevance for the statistical analysis. They are: study design, data quality, and statistical analysis. While statistical analysis is often considered an important issue in the literature and the choice of statistical method receives much attention, less emphasis seems to be put on study design and necessary sample sizes. Finally, a very important step, namely assessment and validation of the quality of the data collected seems to be completel...

4. Estimating the Size of a Large Network and its Communities from a Random Sample

CERN Document Server

Chen, Lin; Crawford, Forrest W

2016-01-01

Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V;E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...

5. A Bayesian adaptive blinded sample size adjustment method for risk differences.

Science.gov (United States)

Hartley, Andrew Montgomery

2015-01-01

Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.

6. Sample Size and Probability Threshold Considerations with the Tailored Data Method.

Science.gov (United States)

This article discusses sample size and probability threshold considerations in the use of the tailored data method with the Rasch model. In the tailored data method, one performs an initial Rasch analysis and then reanalyzes the data after setting to missing those item responses that fall below a chosen probability threshold. A simple analytical formula is provided that can be used to check whether the application of the tailored data method with a chosen probability threshold will create situations in which the number of remaining item responses for the Rasch calibration fails to meet minimum sample size requirements. The formula is illustrated using a real data example from a medical imaging licensure exam with several different probability thresholds. It is shown that, as the probability threshold was increased, more item responses were set to missing, and the parameter standard errors and item difficulty estimates also tended to increase. Some consideration should therefore be given to the chosen probability threshold and how it interacts with potential examinee sample sizes and the accuracy of parameter estimates when calibrating data with the tailored data method.
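The tailoring step described in this record can be sketched as follows. This is an illustrative reconstruction, not the article's code, and the ability and difficulty values are invented:

```python
import math

def rasch_p(theta, b):
    """Rasch model probability of a correct response for ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def tailor(abilities, difficulties, threshold):
    """Keep response (i, j) only if the model probability is at or above the
    threshold; responses below it would be set to missing before recalibration."""
    return [[rasch_p(t, b) >= threshold for b in difficulties] for t in abilities]

def remaining_counts(mask):
    """Item responses left per item after tailoring: the quantity that must be
    checked against minimum sample size requirements."""
    return [sum(row[j] for row in mask) for j in range(len(mask[0]))]

# Invented abilities/difficulties; a threshold of 0.3 drops all responses
# to the hard item (difficulty 2.0) for these two examinees.
mask = tailor([0.0, 1.0], [0.0, 2.0], 0.3)
print(remaining_counts(mask))  # [2, 0]
```

The second item retains zero responses here, which is exactly the situation the article's analytical formula is meant to flag before calibration.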

7. SAMPLE SIZE DETERMINATION IN NON-RANDOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA

Directory of Open Access Journals (Sweden)

2003-06-01

Full Text Available Introduction: In survival analysis, determination of a sufficient sample size to achieve suitable statistical power is important. In both parametric and non-parametric methods of classical statistics, random selection of samples is a basic condition. In practice, random allocation is impossible in most clinical trials and health surveys. Fixed-effect multiple linear regression analysis covers this need, and this feature can be extended to survival regression analysis. This paper presents sample size determination in non-randomized survival analysis with censored and non-censored data. Methods: In non-randomized survival studies, linear regression with a fixed-effect variable can be used. Such a regression is in fact the conditional expectation of the dependent variable given the independent variable. By constructing the likelihood function with an exponential hazard and a binary variable for the allocation of each subject to one of the two comparison groups, and by stating the variance of the coefficient of the fixed-effect independent variable through the coefficient of determination, sample size determination formulas are obtained for both censored and non-censored data. Estimation of sample size is therefore not based on a single independent variable alone; the required power can be attained for a test adjusted for the effects of the other explanatory covariates. Since the asymptotic distribution of the likelihood estimator of the parameter is normal, we obtained a formula for the variance of the regression coefficient estimator and then, by stating the variance of the regression coefficient of the fixed-effect variable through the coefficient of determination, derived formulas for sample size determination with both censored and non-censored data. Results: In non-randomized survival analysis, to compare the hazard rates of two groups without censored data, we obtained estimates of the coefficient of determination, the risk ratio, the proportion of membership in each group, and their variances from

8. Effect of Reiki therapy on pain and anxiety in adults: an in-depth literature review of randomized trials with effect size calculations.

Science.gov (United States)

Thrane, Susan; Cohen, Susan M

2014-12-01

9. Calculation of the cluster size distribution functions and small-angle neutron scattering data for C60/N-methylpyrrolidone

Science.gov (United States)

Tropin, T. V.; Jargalan, N.; Avdeev, M. V.; Kyzyma, O. A.; Sangaa, D.; Aksenov, V. L.

2014-01-01

The aggregate growth in a C60/N-methylpyrrolidone (NMP) solution has been considered in the framework of the approach developed earlier for describing the cluster growth kinetics in fullerene polar solutions. The final cluster size distribution functions in model solutions have been estimated for two fullerene aggregation models including the influence of complex formation on the cluster growth using extrapolations of the characteristics of the cluster state and distribution parameters. Based on the obtained results, the model curves of small-angle neutron scattering have been calculated for a C60/NMP solution at various values of the model parameters.

10. Forest inventory using multistage sampling with probability proportional to size. [Brazil

Science.gov (United States)

Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

1984-01-01

A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in Southeastern Brazil. LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified using a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.
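The probability-proportional-to-size selection used in the first stage can be illustrated with a toy sketch (this is not the authors' procedure; the size measures and volumes are invented). Units are drawn with probability proportional to a size measure, and a Hansen-Hurwitz-type estimator expands the sampled values to a population total:

```python
import random

def pps_sample(sizes, n, rng):
    """Draw n first-stage units with replacement, with selection probability
    proportional to the size measure of each unit."""
    total = float(sum(sizes))
    probs = [s / total for s in sizes]
    idx = rng.choices(range(len(sizes)), weights=sizes, k=n)
    return idx, probs

def hansen_hurwitz(values, idx, probs):
    """Hansen-Hurwitz estimator of the population total under
    PPS-with-replacement sampling."""
    return sum(values[i] / probs[i] for i in idx) / len(idx)

# Hypothetical strata: when the target variable is exactly proportional to the
# size measure, the PPS estimator reproduces the true total with zero variance.
sizes = [10, 20, 30, 40]
volumes = [20, 40, 60, 80]  # true total = 200
idx, probs = pps_sample(sizes, 5, random.Random(0))
print(hansen_hurwitz(volumes, idx, probs))  # ~200.0 for any draw
```

This zero-variance property when size predicts the target well is precisely why PPS designs reduce the number of photographs and field plots needed.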

11. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

Science.gov (United States)

Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

2017-06-30

Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitations. We used a pooled method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method corresponding to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except the Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
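A minimal sketch of the pooled-resampling idea (not the authors' implementation; defaults and data are invented): both groups are resampled from the pooled data, which embodies the null hypothesis of equal means, and the observed Welch-type t statistic is compared against the resampled distribution:

```python
import random
import statistics

def t_stat(x, y):
    """Welch-type t statistic for two independent samples."""
    se = (statistics.variance(x) / len(x) + statistics.variance(y) / len(y)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / se

def pooled_bootstrap_test(x, y, n_boot=2000, seed=1):
    """Bootstrap test with pooled resampling: resample both groups from the
    pooled data (equal means under the null) and count how often the
    resampled |t| reaches the observed |t|."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    t_obs = abs(t_stat(x, y))
    hits = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in range(len(x))]
        by = [rng.choice(pooled) for _ in range(len(y))]
        try:
            if abs(t_stat(bx, by)) >= t_obs:
                hits += 1
        except ZeroDivisionError:  # degenerate resample: zero variance in both groups
            hits += 1
    return hits / n_boot

print(pooled_bootstrap_test([1, 2, 3, 4, 5], [11, 12, 13, 14, 15]))  # small p
```

Because every bootstrap observation is drawn from the combined pool, the resamples mimic the null even when the original groups are tiny, which is the advantage over resampling each group from itself.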

12. Relative power and sample size analysis on gene expression profiling data

Directory of Open Access Journals (Sweden)

den Dunnen JT

2009-09-01

Full Text Available Abstract Background With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample type, preparation and processing. Objective measures that help experimental design, given own pilot data, are thus fundamental. Results Relative power and sample size analysis were performed on two distinct data sets. The first set consisted of Affymetrix array data derived from a nutrigenomics experiment in which weak, intermediate and strong PPARα agonists were administered to wild-type and PPARα-null mice. Our analysis confirms the hierarchy of PPARα-activating compounds previously reported and the general idea that larger effect sizes positively contribute to the average power of the experiment. A simulation experiment was performed that mimicked the effect sizes seen in the first data set. The relative power was predicted but the estimates were slightly conservative. The second, more challenging, data set describes a microarray platform comparison study using hippocampal δC-doublecortin-like kinase transgenic mice that were compared to wild-type mice, which was combined with results from Solexa/Illumina deep sequencing runs. As expected, the choice of technology greatly influences the performance of the experiment. Solexa/Illumina deep sequencing has the highest overall power followed by the microarray platforms Agilent and Affymetrix. Interestingly, Solexa/Illumina deep sequencing displays comparable power across all intensity ranges, in contrast with microarray platforms that have decreased power in the low intensity range due to background noise. This means that deep sequencing technology is especially more powerful in detecting differences in the low intensity range, compared to microarray platforms. Conclusion Power and sample size analysis
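As a generic illustration of the power side of such calculations (a normal-approximation sketch, not the authors' method, which works from pilot microarray data), the power of a two-sided two-sample z-test for a standardized effect delta with n subjects per group is Phi(delta * sqrt(n/2) - z_{1-alpha/2}):

```python
import math

def power_two_sample(n_per_group, delta, alpha_z=1.959964):
    """Approximate power of a two-sided two-sample z-test for a standardized
    effect size delta with n subjects per group (normal approximation,
    alpha_z defaults to the two-sided 0.05 critical value)."""
    ncp = delta * math.sqrt(n_per_group / 2.0)
    return 0.5 * (1.0 + math.erf((ncp - alpha_z) / math.sqrt(2.0)))

# The classic benchmark: about 64 subjects per group give roughly 80% power
# to detect a medium standardized effect (delta = 0.5) at alpha = 0.05.
print(round(power_two_sample(64, 0.5), 3))
```

The same monotone relationship between effect size and power underlies the abstract's observation that larger effect sizes raise the average power of the experiment.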

13. Prediction accuracy of a sample-size estimation method for ROC studies.

Science.gov (United States)

Chakraborty, Dev P

2010-05-01

14. Relative power and sample size analysis on gene expression profiling data

Science.gov (United States)

van Iterson, M; 't Hoen, PAC; Pedotti, P; Hooiveld, GJEJ; den Dunnen, JT; van Ommen, GJB; Boer, JM; Menezes, RX

2009-01-01

Background With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample type, preparation and processing. Objective measures that help experimental design, given own pilot data, are thus fundamental. Results Relative power and sample size analysis were performed on two distinct data sets. The first set consisted of Affymetrix array data derived from a nutrigenomics experiment in which weak, intermediate and strong PPARα agonists were administered to wild-type and PPARα-null mice. Our analysis confirms the hierarchy of PPARα-activating compounds previously reported and the general idea that larger effect sizes positively contribute to the average power of the experiment. A simulation experiment was performed that mimicked the effect sizes seen in the first data set. The relative power was predicted but the estimates were slightly conservative. The second, more challenging, data set describes a microarray platform comparison study using hippocampal δC-doublecortin-like kinase transgenic mice that were compared to wild-type mice, which was combined with results from Solexa/Illumina deep sequencing runs. As expected, the choice of technology greatly influences the performance of the experiment. Solexa/Illumina deep sequencing has the highest overall power followed by the microarray platforms Agilent and Affymetrix. Interestingly, Solexa/Illumina deep sequencing displays comparable power across all intensity ranges, in contrast with microarray platforms that have decreased power in the low intensity range due to background noise. This means that deep sequencing technology is especially more powerful in detecting differences in the low intensity range, compared to microarray platforms. Conclusion Power and sample size analysis based on pilot data give

15. Generalized sample size determination formulas for experimental research with hierarchical data.

Science.gov (United States)

Usami, Satoshi

2014-06-01

Hierarchical data sets arise when the data for lower units (e.g., individuals such as students, clients, and citizens) are nested within higher units (e.g., groups such as classes, hospitals, and regions). In data collection for experimental research, estimating the required sample size beforehand is a fundamental question for obtaining sufficient statistical power and precision of the focused parameters. The present research extends previous research from Heo and Leon (2008) and Usami (2011b), by deriving closed-form formulas for determining the required sample size to test effects in experimental research with hierarchical data, and by focusing on both multisite-randomized trials (MRTs) and cluster-randomized trials (CRTs). These formulas consider both statistical power and the width of the confidence interval of a standardized effect size, on the basis of estimates from a random-intercept model for three-level data that considers both balanced and unbalanced designs. These formulas also address some important results, such as the lower bounds of the needed units at the highest levels.
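The role of clustering in such formulas can be sketched with the standard design-effect inflation (a simplified two-arm illustration under a random-intercept model, not the paper's closed-form three-level results; `alpha_z` and `power_z` default to two-sided alpha = 0.05 and 80% power):

```python
import math

def crt_sample_size(delta, icc, m, alpha_z=1.959964, power_z=0.841621):
    """Per-arm sample size for a cluster-randomized trial: the two-sample
    normal-approximation formula inflated by the design effect
    1 + (m - 1) * ICC for clusters of size m."""
    n_individual = 2.0 * (alpha_z + power_z) ** 2 / delta ** 2  # ignoring clustering
    n = n_individual * (1.0 + (m - 1) * icc)                    # design-effect inflation
    return math.ceil(n / m), math.ceil(n)  # (clusters per arm, individuals per arm)

print(crt_sample_size(0.5, 0.0, 1))    # no clustering: the familiar 63 per arm
print(crt_sample_size(0.5, 0.05, 20))  # ICC 0.05, clusters of 20: 7 clusters, 123 subjects
```

Even a modest ICC roughly doubles the required individuals here, which is why the lower bounds on units at the highest level matter so much in hierarchical designs.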

16. Back to basics: explaining sample size in outcome trials, are statisticians doing a thorough job?

Science.gov (United States)

Carroll, Kevin J

2009-01-01

Time to event outcome trials in clinical research are typically large, expensive and high-profile affairs. Such trials are commonplace in oncology and cardiovascular therapeutic areas but are also seen in other areas such as respiratory in indications like chronic obstructive pulmonary disease. Their progress is closely monitored and results are often eagerly awaited. Once available, the top line result is often big news, at least within the therapeutic area in which it was conducted, and the data are subsequently fully scrutinized in a series of high-profile publications. In such circumstances, the statistician has a vital role to play in the design, conduct, analysis and reporting of the trial. In particular, in drug development it is incumbent on the statistician to ensure at the outset that the sizing of the trial is fully appreciated by their medical, and other non-statistical, drug development team colleagues and that the risk of delivering a statistically significant but clinically unpersuasive result is minimized. The statistician also has a key role in advising the team when, early in the life of an outcomes trial, a lower than anticipated event rate appears to be emerging. This paper highlights some of the important features relating to outcome trial sample sizing and makes a number of simple recommendations aimed at ensuring a better, common understanding of the interplay between sample size and power and the final result required to provide a statistically positive and clinically persuasive outcome. Copyright (c) 2009 John Wiley & Sons, Ltd.
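The interplay the author highlights between event rate and trial size comes from the fact that power in a time-to-event trial is driven by the number of events, not the number of patients. A standard back-of-the-envelope sketch uses Schoenfeld's approximation (not the author's own derivation; the defaults assume two-sided alpha = 0.05, 80% power, and 1:1 allocation):

```python
import math

def required_events(hazard_ratio, alpha_z=1.959964, power_z=0.841621, alloc=0.5):
    """Schoenfeld's approximation for the number of events needed to detect a
    given hazard ratio with a two-sided log-rank test."""
    return math.ceil((alpha_z + power_z) ** 2
                     / (alloc * (1.0 - alloc) * math.log(hazard_ratio) ** 2))

def required_patients(events, event_probability):
    """Total patients, given the anticipated overall probability of an event
    during follow-up: the quantity that balloons when event rates come in low."""
    return math.ceil(events / event_probability)

events = required_events(0.75)  # HR = 0.75, 1:1 allocation
print(events, required_patients(events, 0.40))
```

If the anticipated 40% event probability turns out to be 25%, the same event target requires roughly 60% more patients (or longer follow-up), which is exactly the scenario where the statistician must advise the team early.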

17. Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force

Energy Technology Data Exchange (ETDEWEB)

Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R

2008-05-22

We present a rapid and robust size-based separation method for high throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies, including variations of the PZT driving frequencies and voltages as well as various particle sizes, compressibilities, and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With an optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2, with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.

18. Effect of sample size on the fluid flow through a single fractured granitoid

Directory of Open Access Journals (Sweden)

Kunal Kumar Singh

2016-06-01

Full Text Available Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, and multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures, along with the behavior of the fluid flow through them at different scales, therefore becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress-permeability relationship, and it has been demonstrated that the permeability of a fractured rock mass decreases with an increase in sample size. However, most researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameters ranging between 38 mm and 45 cm and a diameter-to-length ratio of 1:2, using different experimental methods. Also, the confining stress, σ3, has been considered to be less than 30 MPa, and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on "laboratory simulation of flow through single fractured granite" was conducted, in which consistent fluid flow experiments were performed on cylindrical granitoid samples of two different sizes (38 mm and 54 mm in diameter), each containing a "rough walled single fracture". These experiments were performed under varied confining pressure (σ3 = 5-40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff, and that Q decreases with an increase in σeff. Also, the effects of sample size and fracture roughness do not persist when σeff ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory

19. Sample size and repeated measures required in studies of foods in the homes of African-American families.

Science.gov (United States)

Stevens, June; Bryant, Maria; Wang, Chin-Hua; Cai, Jianwen; Bentley, Margaret E

2012-06-01

Measurement of the home food environment is of interest to researchers because it affects food intake and is a feasible target for nutrition interventions. The objective of this study was to provide estimates to aid the calculation of sample size and number of repeated measures needed in studies of nutrients and foods in the home. We inventoried all foods in the homes of 80 African-American first-time mothers and determined 6 nutrient-related attributes. Sixty-three households were measured 3 times, 11 were measured twice, and 6 were measured once, producing 217 inventories collected at ~2-mo intervals. Following log transformations, number of foods, total energy, dietary fiber, and fat required only one measurement per household to achieve a correlation of 0.8 between the observed and true values. For percent energy from fat and energy density, 3 and 2 repeated measurements, respectively, were needed to achieve a correlation of 0.8. A sample size of 252 was needed to detect a difference of 25% of an SD in total energy with one measurement compared with 213 with 3 repeated measurements. Macronutrient characteristics of household foods appeared relatively stable over a 6-mo period and only 1 or 2 repeated measures of households may be sufficient for an efficient study design.
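The "number of repeated measures for a target correlation of 0.8" logic in this record can be sketched with Spearman-Brown-style algebra (an illustrative reconstruction, not the authors' estimation procedure): if a single inventory has reliability p (its squared correlation with the household's true value), the mean of k inventories has reliability k*p / (1 + (k-1)*p), and solving sqrt of that expression >= target_r for k gives:

```python
import math

def repeats_needed(single_icc, target_r=0.8):
    """Number of repeated inventories k such that the mean of k measurements
    correlates at target_r with the household's true value, using the
    Spearman-Brown prophecy formula for the reliability of a mean of k measures."""
    r2 = target_r ** 2
    k = r2 * (1.0 - single_icc) / (single_icc * (1.0 - r2))
    return max(1, math.ceil(k))

print(repeats_needed(0.5))  # moderately reliable attribute: 2 repeats
print(repeats_needed(0.4))  # less reliable attribute: 3 repeats
```

This mirrors the abstract's finding: stable attributes (total energy, fiber) needed one measurement, while noisier ones (percent energy from fat, energy density) needed two or three.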

20. Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations

Directory of Open Access Journals (Sweden)

Guillermo Macbeth

2011-05-01

Full Text Available The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-values interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software that performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed and a basic linear algebra proof of their equivalence is formally presented. Two worked examples in cognitive psychology are discussed. A visual interpretation of Cliff's Delta is suggested. Availability, installation and applications of the program are presented and discussed.
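The statistic itself is simple to state: for samples X and Y, delta = [#(x > y) - #(x < y)] / (n_x * n_y). A minimal sketch of the all-pairs computation (not the program described in this record):

```python
def cliffs_delta(x, y):
    """Cliff's delta: P(X > Y) - P(X < Y), estimated over all n_x * n_y pairs.
    Ranges from -1 (every y exceeds every x) to +1 (every x exceeds every y);
    ties contribute to neither count."""
    gt = sum(1 for xi in x for yj in y if xi > yj)
    lt = sum(1 for xi in x for yj in y if xi < yj)
    return (gt - lt) / (len(x) * len(y))

print(cliffs_delta([4, 5, 6], [1, 2, 3]))  # 1.0: complete dominance of x
print(cliffs_delta([1, 2], [1, 2]))        # 0.0: ties and symmetry cancel
```

The O(n_x * n_y) pair loop is the naive algorithm; the article's "alternative algorithmic approaches" concern faster, algebraically equivalent formulations.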

1. Optimizing Stream Water Mercury Sampling for Calculation of Fish Bioaccumulation Factors

Science.gov (United States)

Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive ...

2. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

Energy Technology Data Exchange (ETDEWEB)

Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

2012-02-15

Purpose: To demonstrate the potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post-lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D₉₀, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm³ dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
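The variance-reduction mechanism can be demonstrated on a toy problem (purely illustrative; the integrands below stand in for dose tallies in heterogeneous vs homogeneous geometries and have nothing to do with PTRAN): because the two estimates share the same histories, most of the statistical noise cancels in their difference:

```python
import random
import statistics

def delta_sd(n_hist, n_rep, seed=0):
    """Toy correlated-sampling demo: estimate the *difference* between two
    similar integrals on [0, 1]. Correlated sampling reuses the same random
    histories for both tallies; uncorrelated sampling draws them independently.
    Returns (sd of correlated estimates, sd of uncorrelated estimates)."""
    rng = random.Random(seed)
    f_hom = lambda u: u * u                    # "homogeneous" integrand
    f_het = lambda u: u * u * (1 + 0.05 * u)   # slightly perturbed variant
    corr, uncorr = [], []
    for _ in range(n_rep):
        us = [rng.random() for _ in range(n_hist)]
        vs = [rng.random() for _ in range(n_hist)]
        corr.append(sum(f_het(u) - f_hom(u) for u in us) / n_hist)
        uncorr.append(sum(f_het(u) for u in us) / n_hist
                      - sum(f_hom(v) for v in vs) / n_hist)
    return statistics.stdev(corr), statistics.stdev(uncorr)

sd_corr, sd_uncorr = delta_sd(200, 100)
print(sd_corr < sd_uncorr)  # True: shared histories cancel most of the noise
```

The smaller the perturbation between the two geometries, the stronger the cancellation, which matches the paper's observation that the largest gains occur where the homogeneous and heterogeneous doses differ least.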

3. Calculation of the effective diffusion coefficient during the drying of clay samples

Directory of Open Access Journals (Sweden)

Vasić Miloš

2012-01-01

Full Text Available The aim of this study was to calculate the effective diffusion coefficient based on experimentally recorded drying curves for two masonry clays obtained from different localities. The calculation method and two computer programs, based on the mathematical solution of Fick's second law and the Crank diffusion equation, were developed. Masonry product shrinkage during drying was taken into consideration for the first time, and the appropriate correction was entered into the calculation. The results presented in this paper show that the values of the effective diffusion coefficient determined by the designed computer programs (with and without the correction for shrinkage) are similar to those available in the literature for the same coefficient for different clays. Based on the mathematically determined prognostic value of the effective diffusion coefficient, it was concluded that, whatever the initial mineralogical composition of the clay, there is 90% agreement of the calculated prognostic drying curves with the experimentally recorded ones. When a shrinkage correction of the masonry products is introduced into the calculation step, this agreement is even better.
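For a slab dried from both faces, the first term of the Crank series solution of Fick's second law gives MR ≈ (8/pi^2) * exp(-pi^2 * D * t / (4 * L^2)), which can be inverted for D_eff from a single point on the drying curve. This is a simplified sketch of the general approach, not the authors' programs (which also handle the shrinkage correction); all numbers are invented:

```python
import math

def effective_diffusivity(t, mr, half_thickness):
    """Invert the first term of Crank's slab solution,
    MR = (8/pi^2) * exp(-pi^2 * D * t / (4 * L^2)),
    to estimate D_eff from one point (t, MR) on the falling-rate drying curve."""
    return (-(4.0 * half_thickness ** 2) / (math.pi ** 2 * t)
            * math.log(mr * math.pi ** 2 / 8.0))

# Round trip with invented numbers: D = 1e-9 m^2/s, L = 5 mm, t = 1 h.
D, L, t = 1e-9, 0.005, 3600.0
mr = (8.0 / math.pi ** 2) * math.exp(-math.pi ** 2 * D * t / (4.0 * L ** 2))
print(effective_diffusivity(t, mr, L))  # recovers ~1e-9
```

In practice one fits the slope of ln(MR) versus t over the falling-rate period rather than a single point, which averages out measurement noise.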

4. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

Science.gov (United States)

Li, Johnson Ching-Hong

2016-12-01

In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs may be robust to violations of these assumptions: the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and the nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008). However, no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.

5. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

Directory of Open Access Journals (Sweden)

Zhihua Wang

2014-01-01

Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is efficient because of its improved forecasting accuracy, its applicability to limited and unstable data situations, and its small computational effort. The general performance, the influence of sample size, the nonlinear dynamic mechanism, and the significance of the observed trends, as well as the innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
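A stripped-down sketch of the rolling idea (illustrative only; the paper's AR equation for nonstationary series is more elaborate than a plain AR(1)): fit an AR(1) on the current window, forecast one step ahead, append the forecast, and drop the oldest value:

```python
def fit_ar1(series):
    """Ordinary least-squares fit of x[t] = a + b * x[t-1]; returns (a, b)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def rolling_forecast(series, window, steps):
    """One-step-ahead AR(1) forecasts with a rolling window: after each step the
    forecast is appended and the oldest observation dropped, so the model is
    refitted on a window of constant length."""
    data = list(series[-window:])
    out = []
    for _ in range(steps):
        a, b = fit_ar1(data)
        out.append(a + b * data[-1])
        data = data[1:] + [out[-1]]  # roll the window forward
    return out

# Sanity check on a series that follows x[t] = 2 + 0.5 * x[t-1] exactly.
series = [10, 7, 5.5, 4.75, 4.375, 4.1875]
print(rolling_forecast(series, 6, 2))  # ~[4.09375, 4.046875]
```

Keeping the window length constant is what makes the scheme cheap and applicable to very short series: each refit uses only the most recent information, including the model's own latest forecast.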

6. Dealing with varying detection probability, unequal sample sizes and clumped distributions in count data.

Directory of Open Access Journals (Sweden)

D Johan Kotze

Full Text Available Temporal variation in the detectability of a species can bias estimates of relative abundance if not handled correctly. For example, when effort varies in space and/or time, it becomes necessary to take variation in detectability into account when data are analyzed. We demonstrate the importance of incorporating seasonality into the analysis of data with unequal sample sizes due to lost traps at a particular density of a species. A case study of count data was simulated using a spring-active carabid beetle. Traps were 'lost' randomly during high beetle activity in high-abundance sites and during low beetle activity in low-abundance sites. Five different models were fitted to datasets with different levels of loss. If sample sizes were unequal and a seasonality variable was not included in models that assumed the number of individuals was log-normally distributed, the models severely under- or overestimated the true effect size. Results did not improve when seasonality and the number of trapping days were included in these models as offset terms; the models only performed well when the response variable was specified as following a negative binomial distribution. Finally, if the seasonal variation of a species is unknown, which is often the case, seasonality can be added as a free factor, resulting in well-performing negative binomial models. Based on these results we recommend (a) adding sampling effort (the number of trapping days in our example) to the models as an offset term; (b) if precise information is available on seasonal variation in the detectability of a study object, adding seasonality to the models as an offset term; (c) if information on seasonal variation in detectability is inadequate, adding seasonality as a free factor; and (d) specifying the response variable of count data as following a negative binomial or over-dispersed Poisson distribution.
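Recommendation (a), treating sampling effort as an offset, can be illustrated with the intercept-only Poisson case, where the maximum-likelihood estimate of the daily rate with a log(effort) offset reduces to total count divided by total exposure. The counts and trap-days below are hypothetical:

```python
# Hypothetical counts: site A kept traps out 7 days each; site B
# lost its traps after 3 days (unequal sampling effort).
counts_a, days_a = [14, 12, 16], [7, 7, 7]
counts_b, days_b = [5, 7, 6], [3, 3, 3]

def rate_per_day(counts, days):
    """Intercept-only Poisson model with a log(effort) offset: the
    MLE of the daily rate is total count / total trap-days."""
    return sum(counts) / sum(days)

naive_a = sum(counts_a) / len(counts_a)  # mean count per trap, effort ignored
naive_b = sum(counts_b) / len(counts_b)
print(naive_a, naive_b)  # site A looks far more abundant
print(rate_per_day(counts_a, days_a), rate_per_day(counts_b, days_b))
```

With the offset, both sites show the same daily catch rate; the naive per-trap means differ only because of the lost trap-days.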

7. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

CERN Document Server

Gu, Xuejun; Li, Jinsheng; Jia, Xun; Jiang, Steve B

2011-01-01

Aiming to develop an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009]. Dosimetric evaluations against MCSIM Monte Carlo dose calculations are conducted on 10 IMRT treatment plans with heterogeneous treatment regions (5 head-and-neck cases and 5 lung cases). For head-and-neck cases, when cavities exist near the target, the improvement with the 3D-density correction over the conventional FSPB algorithm is significant. However, when there are high-density dental filling materials in beam paths, the improvement is small and the accuracy of the new algorithm is still unsatisfactory. On the other hand, significant improvement of dose calculation accuracy is observed in all lung cases. Especially when the target is in the m...

8. Evaluation of collapsed cone convolution superposition (CCCS) algorithms in the Prowess treatment planning system for calculating symmetric and asymmetric field sizes

Directory of Open Access Journals (Sweden)

Tamer Dawod

2015-01-01

Full Text Available Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. Data for a full set of measurements for symmetric and asymmetric fields, including inplane and crossplane profiles at various depths and percentage depth doses (PDDs), were obtained on the linear accelerator. Results: Asymmetric collimation led to significant errors (up to approximately 7%) in dose calculations when changes in primary beam intensity and beam quality were not accounted for. The largest differences in the isodose curves were found in the buildup and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.

9. High-dimensional, massive sample-size Cox proportional hazards regression for survival analysis.

Science.gov (United States)

Mittal, Sushil; Madigan, David; Burd, Randall S; Suchard, Marc A

2014-04-01

Survival analysis endures as an old, yet active research field with applications that spread across many domains. Continuing improvements in data acquisition techniques pose constant challenges in applying existing survival analysis methods to these emerging data sets. In this paper, we present tools for fitting regularized Cox survival analysis models on high-dimensional, massive sample-size (HDMSS) data using a variant of the cyclic coordinate descent optimization technique tailored for the sparsity that HDMSS data often present. Experiments on two real data examples demonstrate that efficient analyses of HDMSS data using these tools result in improved predictive performance and calibration.

10. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

Energy Technology Data Exchange (ETDEWEB)

Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)

2010-01-15

In this contribution superconducting specimens of YBa₂Cu₃O₇₋δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.

11. An approach for calculating a confidence interval from a single aquatic sample for monitoring hydrophobic organic contaminants.

Science.gov (United States)

Matzke, Melissa M; Allan, Sarah E; Anderson, Kim A; Waters, Katrina M

2012-12-01

The use of passive sampling devices (PSDs) for monitoring hydrophobic organic contaminants in aquatic environments can entail logistical constraints that often limit a comprehensive statistical sampling plan, thus resulting in a restricted number of samples. The present study demonstrates an approach for using the results of a pilot study designed to estimate sampling variability, which in turn can be used as variance estimates for confidence intervals for future n = 1 PSD samples of the same aquatic system. Sets of three to five PSDs were deployed in the Portland Harbor Superfund site for three sampling periods over the course of two years. The PSD filters were extracted and, as a composite sample, analyzed for 33 polycyclic aromatic hydrocarbon compounds. The between-sample and within-sample variances were calculated to characterize sources of variability in the environment and sampling methodology. A method for calculating a statistically reliable and defensible confidence interval for the mean of a single aquatic passive sampler observation (i.e., n = 1) using an estimate of sample variance derived from a pilot study is presented. Coverage probabilities are explored over a range of variance values using a Monte Carlo simulation. Copyright © 2012 SETAC.
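The core idea above, borrowing a variance estimate from a pilot study to put a confidence interval around a single future observation, can be sketched as follows. This is a simplified illustration with hypothetical concentrations, using a z-quantile rather than a t-quantile with the pilot's degrees of freedom; it is not the paper's exact procedure:

```python
from statistics import NormalDist

def ci_single_observation(x, pilot_sd, alpha=0.05):
    """Approximate CI for the mean based on one new observation,
    treating the pilot-study SD as known; a z-quantile is used for
    simplicity (a t-quantile with the pilot's df would be safer)."""
    half = NormalDist().inv_cdf(1 - alpha / 2) * pilot_sd
    return x - half, x + half

# Hypothetical pilot study: four composite PSD measurements of one
# PAH concentration (ng/L); values are made up for illustration.
pilot = [12.1, 13.4, 11.8, 12.7]
n = len(pilot)
mean = sum(pilot) / n
pilot_sd = (sum((v - mean) ** 2 for v in pilot) / (n - 1)) ** 0.5
lo, hi = ci_single_observation(12.5, pilot_sd)
print(round(lo, 2), round(hi, 2))
```

The coverage of such an interval depends on how well the pilot variance transfers to the new deployment, which is exactly what the Monte Carlo exploration in the paper probes.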

12. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark

National Research Council Canada - National Science Library

Adrian Sayers; Michael J Crowther; Andrew Judge; Michael R Whitehouse; Ashley W Blom

2017-01-01

... to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking...

13. Nintendo Wii Fit as an adjunct to physiotherapy following lower limb fractures: preliminary feasibility, safety and sample size considerations.

Science.gov (United States)

McPhail, S M; O'Hara, M; Gane, E; Tonks, P; Bullock-Saxton, J; Kuys, S S

2016-06-01

14. Rates of brain atrophy and clinical decline over 6 and 12-month intervals in PSP: determining sample size for treatment trials.

Science.gov (United States)

Whitwell, Jennifer L; Xu, Jia; Mandrekar, Jay N; Gunter, Jeffrey L; Jack, Clifford R; Josephs, Keith A

2012-03-01

Imaging biomarkers are useful outcome measures in treatment trials. We compared sample size estimates for future treatment trials performed over 6 or 12 months in progressive supranuclear palsy using both imaging and clinical measures. We recruited 16 probable progressive supranuclear palsy patients who underwent baseline, 6- and 12-month brain scans, and 16 age-matched controls with serial scans. Disease severity was measured at each time-point using the progressive supranuclear palsy rating scale. Rates of ventricular expansion and rates of atrophy of the whole brain, superior frontal lobe, thalamus, caudate and midbrain were calculated. Rates of atrophy and clinical decline were used to calculate sample sizes required to power placebo-controlled treatment trials over 6 and 12 months. Rates of whole brain, thalamus and midbrain atrophy, and ventricular expansion, were increased over 6 and 12 months in progressive supranuclear palsy compared to controls. The progressive supranuclear palsy rating scale increased by 9 points over 6 months, and 18 points over 12 months. The smallest sample size estimates for treatment trials over 6 months were achieved using rate of midbrain atrophy, followed by rate of whole brain atrophy and ventricular expansion. Sample size estimates were further reduced over 12-month intervals. Sample size estimates for the progressive supranuclear palsy rating scale were worse than those for imaging measures over 6 months, but comparable over 12 months. Atrophy and clinical decline can be detected over 6 months in progressive supranuclear palsy. Sample size estimates suggest that treatment trials could be performed over this interval, with rate of midbrain atrophy providing the best outcome measure. Copyright © 2011 Elsevier Ltd. All rights reserved.
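Sample size estimates of this kind typically come from the standard two-arm normal-approximation formula, n per arm = 2((z_{1-α/2} + z_{1-β})σ/Δ)². A sketch; the treatment effect and SD below are hypothetical, not the paper's estimates:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for detecting a mean
    difference delta between two groups with common SD sd."""
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical: a treatment assumed to reduce annualized midbrain
# atrophy by 1.0 percentage point, with a between-subject SD of 1.5.
print(n_per_arm(delta=1.0, sd=1.5))
```

The quadratic dependence on σ/Δ is why a low-variability imaging measure such as midbrain atrophy rate can cut the required sample size so sharply relative to a noisier clinical scale.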

15. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

Science.gov (United States)

Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

2017-09-27

For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

16. Strategies for informed sample size reduction in adaptive controlled clinical trials

Science.gov (United States)

Arandjelović, Ognjen

2017-12-01

Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which is an outstanding research problem, is to be found in the question of how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most of the previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, respectively, using (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with a consistently superior performance exhibited by the pair-wise sample compatibilities-based method.

17. Power and sample size determination for measures of environmental impact in aquatic systems

Energy Technology Data Exchange (ETDEWEB)

Ammann, L.P. [Univ. of Texas, Richardson, TX (United States); Dickson, K.L.; Waller, W.T.; Kennedy, J.H. [Univ. of North Texas, Denton, TX (United States); Mayer, F.L.; Lewis, M. [Environmental Protection Agency, Gulf Breeze, FL (United States)

1994-12-31

To effectively monitor the status of various freshwater and estuarine ecological systems, it is necessary to understand the statistical power associated with the measures of ecological health that are appropriate for each system. These power functions can then be used to determine sample sizes that are required to attain targeted change detection likelihoods. A number of different measures have been proposed and are used for such monitoring. These include diversity and evenness indices, richness, and organism counts. Power functions can be estimated when preliminary or historical data are available for the region and system of interest. Unfortunately, there are a number of problems associated with the computation of power functions and sample sizes for these measures. These problems include the presence of outliers, co-linearity among the variables, and non-normality of count data. The problems, and appropriate methods to compute the power functions, for each of the commonly employed measures of ecological health will be discussed. In addition, the relationship between power and the level of taxonomic classification used to compute the measures of diversity, evenness, richness, and organism counts will be discussed. Methods for computation of the power functions will be illustrated using data sets from previous EPA studies.

18. Calculation and optimization of sample identification by laser induced breakdown spectroscopy via correlation analysis

NARCIS (Netherlands)

Lentjes, M.; Dickmann, K.; Meijer, J.

2007-01-01

Linear correlation analysis may be used as a technique for the identification of samples with a very similar chemical composition by laser induced breakdown spectroscopy. The spectrum of the “unknown” sample is correlated with a library of reference spectra. The probability of identification by

19. Relationship between the size of the samples and the interpretation of the mercury intrusion results of an artificial sandstone

NARCIS (Netherlands)

Dong, H.; Zhang, H.; Zuo, Y.; Gao, P.; Ye, G.

2018-01-01

Mercury intrusion porosimetry (MIP) measurements are widely used to determine pore throat size distribution (PSD) curves of porous materials. The pore throat size of porous materials has been used to estimate their compressive strength and air permeability. However, the effect of sample size on

20. Platelet function investigation by flow cytometry: Sample volume, needle size, and reference intervals.

Science.gov (United States)

Pedersen, Oliver Heidmann; Nissen, Peter H; Hvas, Anne-Mette

2017-09-29

Flow cytometry is an increasingly used method for platelet function analysis because it has some important advantages compared with other platelet function tests. Flow cytometric platelet function analyses only require a small sample volume (3.5 mL); however, to expand the field of applications, e.g., for platelet function analysis in children, even smaller volumes are needed. Platelets are easily activated, and the size of the needle for blood sampling might be of importance for the pre-activation of the platelets. Moreover, to use flow cytometry for investigation of platelet function in clinical practice, a reference interval is warranted. The aims of this work were 1) to determine if small volumes of whole blood can be used without influencing the results, 2) to examine the pre-activation of platelets with respect to needle size, and 3) to establish reference intervals for flow cytometric platelet function assays. To examine the influence of sample volume, blood was collected from 20 healthy individuals in 1.0 mL, 1.8 mL, and 3.5 mL tubes. To examine the influence of needle size on pre-activation, blood was drawn from another 13 healthy individuals with both a 19- and a 21-gauge needle. For the reference interval study, 78 healthy adults were included. The flow cytometric analyses were performed on a NAVIOS flow cytometer (Beckman Coulter, Miami, Florida) investigating the following activation-dependent markers on the platelet surface: bound fibrinogen, CD63, and P-selectin (CD62p), after activation with arachidonic acid, ristocetin, adenosine diphosphate, thrombin-receptor-activating-peptide, and collagen. The study showed that a blood volume as low as 1.0 mL can be used for platelet function analysis by flow cytometry and that both a 19- and a 21-gauge needle can be used for blood sampling. In addition, reference intervals for platelet function analyses by flow cytometry were established.
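Reference intervals of this kind are commonly taken as the nonparametric 2.5th-97.5th percentiles of a healthy cohort. A sketch with simulated data; the distribution and its parameters are made up, and the abstract does not specify which estimator the study used:

```python
import random

def reference_interval(values, lower=0.025, upper=0.975):
    """Nonparametric reference interval: linear interpolation
    between order statistics at the requested percentiles."""
    s = sorted(values)
    def pct(p):
        k = p * (len(s) - 1)
        i, frac = int(k), p * (len(s) - 1) - int(k)
        return s[i] if frac == 0 else s[i] + frac * (s[i + 1] - s[i])
    return pct(lower), pct(upper)

# Hypothetical: percent positive platelets after agonist stimulation
# in 78 healthy donors (simulated, roughly normal around 60%).
random.seed(1)
cohort = [random.gauss(60, 8) for _ in range(78)]
lo, hi = reference_interval(cohort)
print(round(lo, 1), round(hi, 1))
```

With only 78 donors, the extreme percentiles are estimated from the tails of the sample, so the interval endpoints themselves carry considerable sampling uncertainty.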

1. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

Science.gov (United States)

Shieh, Gwowen; Jan, Show-Li

2013-01-01

The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

2. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

Science.gov (United States)

Wellek, Stefan

2017-02-28

In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of, and the covariance between, the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
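For orientation, the classical Noether approximation for WMW sample sizes (not the tie-corrected U-statistics formula derived in this paper) needs only the assumed probability p = P(X > Y) under the alternative:

```python
from math import ceil
from statistics import NormalDist

def noether_total_n(p, alpha=0.05, power=0.8, frac=0.5):
    """Noether's approximation for the total WMW sample size, given
    p = P(X > Y) under the alternative and the fraction frac of
    subjects allocated to the first arm (two-sided test)."""
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    return ceil((z_alpha + z_beta) ** 2
                / (12 * frac * (1 - frac) * (p - 0.5) ** 2))

print(noether_total_n(p=0.65))  # total across both arms
```

Refinements like the one in the paper matter precisely because this simple formula ignores ties and the covariance structure of the U-statistics.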

3. Efficient free energy calculations by combining two complementary tempering sampling methods

Science.gov (United States)

Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

2017-01-01

Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the identification of the correct RCs or the requirement of high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height will exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems in processes with hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.

4. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

Directory of Open Access Journals (Sweden)

Smedslund Geir

2013-02-01

Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) (pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
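The reduction reported above is consistent with the textbook variance formula for the mean of k repeated measures with intra-subject correlation ρ: the required n shrinks by the factor (1 + (k - 1)ρ)/k. A sketch with a hypothetical ρ = 0.5; the study's empirical 15-24% reductions imply a somewhat different correlation structure:

```python
from math import ceil

def n_with_repeats(n_single, k, rho):
    """Per-group n when the outcome is the mean of k repeated measures
    with intra-subject correlation rho; the variance of that mean is
    the single-measure variance times (1 + (k - 1) * rho) / k."""
    return ceil(n_single * (1 + (k - 1) * rho) / k)

# Hypothetical: 56 per group with one measurement, assumed rho = 0.5
for k in (1, 2, 3, 5):
    print(k, n_with_repeats(56, k, rho=0.5))
```

Note the diminishing returns: as k grows, the factor approaches ρ, so beyond a few repeats extra measurements buy little additional power.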

5. Hyperfine electric parameters calculation in Si samples implanted with ⁵⁷Mn→⁵⁷Fe

Energy Technology Data Exchange (ETDEWEB)

Abreu, Y., E-mail: yabreu@ceaden.edu.cu [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Calle 30 No. 502 e/5ta y 7ma Ave., 11300 Miramar, Playa, La Habana (Cuba); Cruz, C.M.; Piñera, I.; Leyva, A.; Cabal, A.E. [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Calle 30 No. 502 e/5ta y 7ma Ave., 11300 Miramar, Playa, La Habana (Cuba); Van Espen, P. [Departement Chemie, Universiteit Antwerpen, Middelheimcampus, G.V.130, Groenenborgerlaan 171, 2020 Antwerpen (Belgium); Van Remortel, N. [Departement Fysica, Universiteit Antwerpen, Middelheimcampus, G.U.236, Groenenborgerlaan 171, 2020 Antwerpen (Belgium)

2014-07-15

Nowadays, electronic structure calculations allow the study of complex systems by determining the hyperfine parameters measured at a probe atom, including in the presence of crystalline defects. The hyperfine electric parameters have been measured by Mössbauer spectroscopy in silicon materials implanted with ⁵⁷Mn→⁵⁷Fe ions, with four main contributions observed in the spectra. Nevertheless, some ambiguities remain in the interpretation of the ⁵⁷Fe Mössbauer spectra in this case, regarding the damage configurations and their evolution with annealing. In the present work several implantation environments are evaluated and the ⁵⁷Fe hyperfine parameters are calculated. The observed correlation between the studied local environments and the experimental observations is presented. A tentative microscopic description is proposed for the behavior and thermal evolution of the characteristic local defect environments of the probe atoms, concerning the location of vacancies and interstitial Si in the neighborhood of ⁵⁷Fe ions at substitutional and interstitial sites.

6. Effects of sample size on estimation of rainfall extremes at high temperatures

Directory of Open Access Journals (Sweden)

B. Boessenkool

2017-09-01

Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
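The undersampling argument can be reproduced with a small simulation: under Weibull plotting positions, a sample of size n cannot represent return periods beyond about n + 1, so high quantiles estimated from small samples are biased low. Exp(1) data are used purely for illustration:

```python
import math
import random

def empirical_quantile(sample, p):
    """Weibull plotting position: the k-th order statistic estimates
    the k/(n+1) quantile, so return periods beyond n+1 cannot be
    represented and the estimate is capped at the sample maximum."""
    s = sorted(sample)
    k = math.ceil(p * (len(s) + 1))
    return s[min(k, len(s)) - 1]

random.seed(7)
true_q99 = -math.log(0.01)  # 0.99 quantile of Exp(1), about 4.61
estimates = [
    empirical_quantile([random.expovariate(1) for _ in range(20)], 0.99)
    for _ in range(2000)
]
bias = sum(estimates) / len(estimates) - true_q99
print(round(bias, 2))  # negative: small samples underestimate the quantile
```

A parametric tail fit (such as the GPD with L-moments used in the paper) avoids this cap because it extrapolates beyond the largest observation instead of stopping at it.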

7. Impact of metric and sample size on determining malaria hotspot boundaries.

Science.gov (United States)

Stresman, Gillian H; Giorgi, Emanuele; Baidjoe, Amrish; Knight, Phil; Odongo, Wycliffe; Owaga, Chrispin; Shagari, Shehu; Makori, Euniah; Stevenson, Jennifer; Drakeley, Chris; Cox, Jonathan; Bousema, Teun; Diggle, Peter J

2017-04-12

The spatial heterogeneity of malaria suggests that interventions may be targeted for maximum impact. It is unclear to what extent different metrics lead to consistent delineation of hotspot boundaries. Using data from a large community-based malaria survey in the western Kenyan highlands, we assessed the agreement between a model-based geostatistical (MBG) approach to detect hotspots using Plasmodium falciparum parasite prevalence and serological evidence for exposure. Malaria transmission was widespread and highly heterogeneous, with one third of the total population living in hotspots regardless of the metric tested. Moderate agreement (Kappa = 0.424) was observed between hotspots defined based on parasite prevalence by polymerase chain reaction (PCR) and the prevalence of antibodies to two P. falciparum antigens (MSP-1, AMA-1). While numerous biologically plausible hotspots were identified, their detection strongly relied on the proportion of the population sampled. When only 3% of the population was sampled, no PCR-derived hotspots were reliably detected, and at least 21% of the population was needed for reliable results. Similar results were observed for hotspots of seroprevalence. Hotspot boundaries are driven by the malaria diagnostic and sample size used to inform the model. These findings warn against the simplistic use of spatial analysis on available data to target malaria interventions in areas where hotspot boundaries are uncertain.

8. Effects of sample size on estimation of rainfall extremes at high temperatures

Science.gov (United States)

Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

2017-09-01

High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

9. What about N? A methodological study of sample-size reporting in focus group studies

Science.gov (United States)

2011-01-01

Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and

10. What about N? A methodological study of sample-size reporting in focus group studies.

Science.gov (United States)

Carlsen, Benedicte; Glenton, Claire

2011-03-11

Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals and, secondly, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these

11. What about N? A methodological study of sample-size reporting in focus group studies

Directory of Open Access Journals (Sweden)

Glenton Claire

2011-03-01

Full Text Available Abstract Background Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals and, secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method

12. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

Science.gov (United States)

Mielke, Steven L; Truhlar, Donald G

2016-01-21

Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
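The acceptance step at the heart of the WPIS scheme, drawing from a simple distribution and keeping draws in proportion to a chosen importance function, is ordinary rejection sampling. A minimal generic sketch (not the paper's path-space implementation; all names are ours):

```python
import random

def rejection_sample(target_pdf, proposal_sampler, proposal_pdf, c, n):
    """Draw n samples from target_pdf by sampling the proposal and
    accepting each draw x with probability target(x) / (c * proposal(x)),
    where c bounds target/proposal everywhere."""
    out = []
    while len(out) < n:
        x = proposal_sampler()
        if random.random() < target_pdf(x) / (c * proposal_pdf(x)):
            out.append(x)
    return out
```

The efficiency is the inverse of the rejection rate: in the paper's CH4 example ~99.9% of free-particle paths are rejected at 200 K, so the gain comes from rejecting them *before* paying for the expensive potential-energy integration.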

13. Simple and efficient way of speeding up transmission calculations with k-point sampling

DEFF Research Database (Denmark)

2015-01-01

The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally "cheap" post-processing scheme to interpolate transmission functions over k-points to get smooth, well-converged average transmission functions. This is relevant for data obtained using typical "expensive" first-principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
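A toy version of such k-point post-processing can be sketched as follows. The interpolation here is plain linear and purely illustrative; the paper's scheme differs in detail, and all names below are ours.

```python
def averaged_transmission(energies, k_points, t_ek, n_fine=200):
    """Linearly interpolate a coarsely k-sampled transmission T(E, k)
    onto a dense k-grid, then average over k. A sketch of the kind of
    post-processing proposed; the paper's interpolation is more refined."""
    def interp(x, xs, ys):
        # simple piecewise-linear interpolation on a sorted grid
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                w = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] * (1 - w) + ys[i + 1] * w
        return ys[-1]

    k0, k1 = k_points[0], k_points[-1]
    k_fine = [k0 + (k1 - k0) * j / (n_fine - 1) for j in range(n_fine)]
    return [sum(interp(k, k_points, row) for k in k_fine) / n_fine
            for row in t_ek]
```

With a transmission that is linear in k, the dense-grid average reproduces the exact mean, while a plain average over the two coarse k-points would be identical here; the gain appears for curved T(k) sampled at few points.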

14. Realistic weight perception and body size assessment in a racially diverse community sample of dieters.

Science.gov (United States)

Cachelin, F M; Striegel-Moore, R H; Elder, K A

1998-01-01

Recently, a shift in obesity treatment away from emphasizing ideal weight loss goals to establishing realistic weight loss goals has been proposed; yet, what constitutes "realistic" weight loss for different populations is not clear. This study examined notions of realistic shape and weight as well as body size assessment in a large community-based sample of African-American, Asian, Hispanic, and white men and women. Participants were 1893 survey respondents who were all dieters and primarily overweight. Groups were compared on various variables of body image assessment using silhouette ratings. No significant race differences were found in silhouette ratings, nor in perceptions of realistic shape or reasonable weight loss. Realistic shape and weight ratings by both women and men were smaller than current shape and weight but larger than ideal shape and weight ratings. Compared with male dieters, female dieters considered greater weight loss to be realistic. Implications of the findings for the treatment of obesity are discussed.

15. Some basic aspects of statistical methods and sample size determination in health science research.

Science.gov (United States)

Binu, V S; Mayya, Shreemathi S; Dhar, Murali

2014-04-01

A health science researcher may sometimes wonder "why are statistical methods so important in research?" The simple answer is that statistical methods are used throughout a study: in planning, designing, collecting data, analyzing, drawing meaningful interpretations, and reporting the findings. Hence, it is important that a researcher knows the concepts of at least the basic statistical methods used at the various stages of a research study. This helps the researcher conduct an appropriately well-designed study leading to valid and reliable results that can be generalized to the population. A well-designed study possesses fewer biases, which in turn gives precise, valid and reliable results. There are many statistical methods and tests that are used at various stages of a research study. In this communication, we discuss the overall importance of statistical considerations in medical research, with the main emphasis on estimating the minimum sample size for different study objectives.
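The minimum-sample-size estimation the abstract emphasizes can be illustrated for the common case of comparing two group means, using the standard normal-approximation formula n = 2 sigma^2 (z_(1-alpha/2) + z_(1-beta))^2 / delta^2 per group. This is a generic sketch; the function name and defaults below are ours, not from the paper.

```python
import math
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Minimum n per arm to detect a true mean difference `delta` between
    two groups with common SD `sigma` (two-sided z-test approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # e.g. 0.84 for 80% power
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

print(n_per_group(sigma=10, delta=5))  # 63 per group
```

Halving the detectable difference delta quadruples the required n, which is why pilot estimates of variability matter so much at the planning stage.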

16. Sample size and power for a stratified doubly randomized preference design.

Science.gov (United States)

Cameron, Briana; Esserman, Denise A

2016-11-21

The two-stage (or doubly) randomized preference trial design is an important tool for researchers seeking to disentangle the role of patient treatment preference on treatment response through estimation of selection and preference effects. Up until now, these designs have been limited by their assumption of equal preference rates and effect sizes across the entire study population. We propose a stratified two-stage randomized trial design that addresses this limitation. We begin by deriving stratified test statistics for the treatment, preference, and selection effects. Next, we develop a sample size formula for the number of patients required to detect each effect. The properties of the model and the efficiency of the design are established using a series of simulation studies. We demonstrate the applicability of the design using a study of Hepatitis C treatment modality, specialty clinic versus mobile medical clinic. In this example, a stratified preference design (stratified by alcohol/drug use) may more closely capture the true distribution of patient preferences and allow for a more efficient design than a design which ignores these differences (unstratified version). © The Author(s) 2016.

17. Estimating effective population size from temporally spaced samples with a novel, efficient maximum-likelihood algorithm.

Science.gov (United States)

Hui, Tin-Yu J; Burt, Austin

2015-05-01

The effective population size Ne is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating Ne have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator of contemporary Ne using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying Ne is large. This article works around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of Ne to several million, hence allowing the estimation of larger Ne. Finally, we demonstrate how this algorithm can cope with nonconstant Ne scenarios and be used as a likelihood-ratio test to test for the equality of Ne throughout the sampling horizon. An R package "NB" is now available for download to implement the method described in this article. Copyright © 2015 by the Genetics Society of America.
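For contrast with the likelihood approach described above, the classical moment-based temporal estimator (in the style of Nei-Tajima/Waples) fits in a few lines. This is the baseline such methods improve on, not the paper's HMM estimator; the sampling correction used here assumes one simple sampling plan and the names are ours.

```python
def temporal_ne(p0, pt, t, s0, st):
    """Moment-based temporal estimate of Ne from allele frequencies p0, pt
    observed t generations apart, with s0 and st diploid individuals
    sampled at each time point. NOT the paper's likelihood method."""
    # standardized variance of allele-frequency change, averaged over loci
    fc = sum((x - y) ** 2 / ((x + y) / 2 - x * y)
             for x, y in zip(p0, pt)) / len(p0)
    # subtract the expected contribution of sampling noise at both time points
    f_drift = fc - 1 / (2 * s0) - 1 / (2 * st)
    return t / (2 * f_drift)
```

With a single locus drifting from 0.5 to 0.6 over 10 generations and 50 individuals sampled each time, the estimate is Ne = 250; real applications average many loci because single-locus Fc is extremely noisy.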

18. Sediment grain size estimation using airborne remote sensing, field sampling, and robust statistic.

Science.gov (United States)

Castillo, Elena; Pereda, Raúl; Luis, Julio Manuel de; Medina, Raúl; Viguri, Javier

2011-10-01

Remote sensing has been used since the 1980s to study parameters related to coastal zones, but it was not until the beginning of the twenty-first century that it became possible to acquire imagery with good temporal and spectral resolution. This has encouraged the development of reliable imagery acquisition systems that consider remote sensing as a water management tool. Nevertheless, the spatial resolution that it provides is not adapted to carrying out coastal studies. This article introduces a new methodology for estimating the most fundamental physical property of intertidal sediment, the grain size, in coastal zones. The study combines hyperspectral information (a CASI-2 flight), robust statistics, and simultaneous field work (chemical and radiometric sampling) performed over Santander Bay, Spain. Field data acquisition was used to build a spectral library in order to study different atmospheric correction algorithms for the CASI-2 data and to develop algorithms to estimate grain size in an estuary. Two robust estimation techniques (MVE and MCD multivariate M-estimators of location and scale) were applied to the CASI-2 imagery, and the results showed that robust adjustments give acceptable and meaningful algorithms. These adjustments gave the following estimated R² values: 0.93 for the sandy loam contribution, 0.94 for the silty loam, and 0.67 for the clay loam. Robust statistics are a powerful tool for large datasets.
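The appeal of robust estimators over classical ones can be shown in one dimension with the median and MAD; the MVE and MCD estimators used in the study are multivariate relatives of this idea. A minimal sketch (our naming):

```python
from statistics import median

def robust_location_scale(values):
    """Median and scaled MAD as simple robust alternatives to mean/std.
    The 1.4826 factor makes the MAD estimate sigma under normality.
    Illustrative only; the paper uses multivariate MVE/MCD estimators."""
    med = median(values)
    mad = median(abs(v - med) for v in values) * 1.4826
    return med, mad
```

For the sample [1, 2, 3, 100] the mean is dragged to 26.5 by the single outlier, while the median stays at 2.5 and the MAD-based scale remains small, which is exactly the resistance that matters for outlier-ridden imagery pixels.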

19. The risk of bias and sample size of trials of spinal manipulative therapy for low back and neck pain: analysis and recommendations.

Science.gov (United States)

Rubinstein, Sidney M; van Eekelen, Rik; Oosterhuis, Teddy; de Boer, Michiel R; Ostelo, Raymond W J G; van Tulder, Maurits W

2014-10-01

The purpose of this study was to evaluate changes in methodological quality and sample size in randomized controlled trials (RCTs) of spinal manipulative therapy (SMT) for neck and low back pain over a specified period. A secondary purpose was to make recommendations for improvement for future SMT trials based upon our findings. Randomized controlled trials that examined the effect of SMT in adults with neck and/or low back pain and reported at least 1 patient-reported outcome measure were included. Studies were identified from recent Cochrane reviews of SMT, and an update of the literature was conducted (March 2013). Risk of bias was assessed using the 12-item criteria recommended by the Cochrane Back Review Group. In addition, sample size was examined. The relationship between the overall risk of bias and sample size over time was evaluated using regression analyses, and RCTs were grouped into periods (epochs) of approximately 5 years. In total, 105 RCTs were included, of which 41 (39%) were considered to have a low risk of bias. There is significant improvement in the mean risk of bias over time (P < .05), which is the most profound for items related to selection bias and, to a lesser extent, attrition and selective outcome reporting bias. Furthermore, although there is no significant increase in sample size over time (overall P = .8), the proportion of studies that performed an a priori sample size calculation is increasing statistically (odds ratio, 2.1; confidence interval, 1.5-3.0). Sensitivity analyses suggest no appreciable difference between studies for neck or low back pain for risk of bias or sample size. Methodological quality of RCTs of SMT for neck and low back pain is improving, whereas overall sample size has shown only small and nonsignificant increases. There is an increasing trend among studies to conduct sample size calculations, which relate to statistical power. Based upon these findings, 7 areas of improvement for future SMT trials are

20. Sampling returns for realized variance calculations: tick time or transaction time?

NARCIS (Netherlands)

Griffin, J.E.; Oomen, R.C.A.

2008-01-01

This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are

1. Calculation of gamma-ray mass attenuation coefficients of some Egyptian soil samples using Monte Carlo methods

Science.gov (United States)

Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan

2014-08-01

Monte Carlo simulations with FLUKA and Geant4 were performed to study mass attenuation for various types of soil at 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV photon energies. Appreciable variations are noted for all parameters on changing the photon energy and the chemical composition of the sample. The simulation results were compared with experimental data and with the XCOM program. The simulations show that the calculated mass attenuation coefficient values were closer to the experimental values than those obtained theoretically using the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may be employed to make additional calculations on the photon attenuation characteristics of different soil samples collected from other places.
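The quantity being simulated, the mass attenuation coefficient, follows from a narrow-beam transmission measurement via the Beer-Lambert law I = I0 exp(-mu_m * rho * x). A minimal sketch (variable names are ours):

```python
import math

def mass_attenuation(i0, i, thickness_cm, density_g_cm3):
    """Mass attenuation coefficient (cm^2/g) from narrow-beam transmission,
    inverting Beer-Lambert: I = I0 * exp(-mu_m * rho * x)."""
    return math.log(i0 / i) / (thickness_cm * density_g_cm3)

# e.g. half the beam transmitted through 2 cm of soil at 1.4 g/cm^3
mu_m = mass_attenuation(1000, 500, 2.0, 1.4)
```

This is what both the simulations and the XCOM tabulation ultimately predict, so comparing the three (simulation, experiment, XCOM) reduces to comparing mu_m values at each photon energy.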

2. Sample size matters in dietary gene expression studies—A case study in the gilthead sea bream (Sparus aurata L.)

Directory of Open Access Journals (Sweden)

Fotini Kokou

2016-05-01

Full Text Available One of the main concerns in gene expression studies is the calculation of statistical significance, which in most cases remains low due to limited sample size. Increasing biological replicates translates into more effective gains in power which, especially in nutritional experiments, is of great importance, as individual variation in growth performance parameters and feed conversion is high. The present study investigates the gilthead sea bream Sparus aurata, one of the most important Mediterranean aquaculture species. For 24 gilthead sea bream individuals (biological replicates), the effects of gradual substitution of fish meal by plant ingredients (0% (control), 25%, 50% and 75%) in the diets were studied by looking at expression levels of four immune- and stress-related genes in intestine, head kidney and liver. The present results showed that only the lowest substitution percentage is tolerated and that liver is the most sensitive tissue for detecting gene expression variations in relation to fish-meal-substituted diets. Additionally, the use of three independent biological replicates was evaluated by calculating the averages of all possible triplets, in order to assess the suitability of the selected genes for stress indication as well as the impact of the experimental setup, in the present work the impact of FM substitution. Gene expression was altered depending on the selected biological triplicate. Only for two genes in liver (hsp70 and tgf) was significant differential expression assured independently of the triplets used. These results underline the importance of choosing an adequate sample number, especially when significant but minor differences in gene expression levels are observed.
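The triplet-averaging check described above can be reproduced generically: enumerate every possible triplet of replicates and compare the resulting means. A sketch with made-up expression values:

```python
from itertools import combinations
from statistics import mean

def triplet_means(values):
    """Mean of every possible triplet of biological replicates: the
    resampling idea the study uses to test how much a conclusion
    depends on which 3 replicates happened to be chosen."""
    return [mean(t) for t in combinations(values, 3)]

# four replicates, one of them an outlier
spread = triplet_means([1.0, 1.2, 0.9, 3.5])
```

If the triplet means span a wide range, as they do here (about 1.03 to 1.90), a conclusion drawn from any single triplet is fragile, which is the study's point about sample size.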

3. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

Directory of Open Access Journals (Sweden)

Shaukat S. Shahid

2016-06-01

Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from the environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences the data sets are invariably small, owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
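The bootstrap-then-PCA procedure can be sketched with NumPy. Synthetic data stand in for the 55 × 22 water-quality matrix, and the correlation-matrix PCA convention is an assumption; the study's exact preprocessing may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(55, 22))  # stand-in for the 55 x 22 data matrix

def pca_eigenvalues(x):
    """Eigenvalues of the correlation matrix, i.e. PCA on standardized
    variables, sorted in decreasing order."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

# for each sample size, draw 100 bootstrap samples of N stations and
# track how the leading eigenvalue varies
for n in (20, 30, 40, 50):
    boot = np.array([pca_eigenvalues(data[rng.integers(0, 55, n)])
                     for _ in range(100)])
    print(n, round(boot[:, 0].mean(), 3), round(boot[:, 0].std(), 3))
```

The spread of the leading eigenvalue across bootstrap replicates shrinks as N grows, which is the eigenstructure stability the paper quantifies.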

4. Sample size and number of outcome measures of veterinary randomised controlled trials of pharmaceutical interventions funded by different sources, a cross-sectional study.

Science.gov (United States)

Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S

2017-10-04

Randomised controlled trials (RCTs) are a key component of the veterinary evidence base, and sample sizes and defined outcome measures are crucial components of RCTs. Our objective was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial, and groups were compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features in all of the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base, and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.

5. Study on the Application of the Combination of TMD Simulation and Umbrella Sampling in PMF Calculation for Molecular Conformational Transitions

Directory of Open Access Journals (Sweden)

Qing Wang

2016-05-01

Full Text Available Free energy calculations of the potential of mean force (PMF) based on the combination of targeted molecular dynamics (TMD) simulations and umbrella sampling as a function of physical coordinates have been applied to explore the detailed pathways and the corresponding free energy profiles for the conformational transition processes of the butane molecule and the 35-residue villin headpiece subdomain (HP35). Accurate PMF profiles describing the dihedral rotation of butane, under both the dihedral-rotation and root mean square deviation (RMSD) variation coordinates, were obtained based on different umbrella samplings from the same TMD simulations. The initial structures for the umbrella samplings can be conveniently selected from the TMD trajectories. In the application of this computational method to the unfolding process of the HP35 protein, the PMF calculation along the coordinate of the radius of gyration (Rg) shows a gradual increase of free energies by about 1 kcal/mol, with energy fluctuations. The conformational transition in the unfolding of HP35 shows that the spherical structure extends and the middle α-helix unfolds first, followed by the unfolding of the other α-helices. The computational method for PMF calculations based on the combination of TMD simulations and umbrella sampling provides a valuable strategy for investigating detailed conformational transition pathways of other allosteric processes.

6. A numerical simulation method for calculation of linear attenuation coefficients of unidentified sample materials in routine gamma ray spectrometry

Directory of Open Access Journals (Sweden)

2015-01-01

Full Text Available When using gamma-ray spectrometry for radioactivity analysis of environmental samples (such as soil, sediment or the ash of a living organism), the relevant linear attenuation coefficients should be known in order to calculate self-absorption in the sample bulk. This parameter is additionally important since the unidentified samples are normally different in composition and density from the reference ones (the latter being, e.g., liquid sources commonly used for detection efficiency calibration in radioactivity monitoring). This work introduces a numerical simulation method for the calculation of linear attenuation coefficients without the use of a collimator. The method is primarily based on calculations of the effective solid angles, compound parameters accounting for the emission and detection probabilities as well as for the source-to-detector geometrical configuration. The efficiency transfer principle and average path lengths through the samples themselves are employed, too. The results obtained are compared with those from the NIST XCOM database; close agreement confirms the validity of the numerical simulation approach.
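A simpler cousin of the self-absorption problem addressed above is the textbook slab self-absorption factor, the average attenuation suffered by photons emitted uniformly throughout a slab of thickness t. The one-liner below is that standard formula, not the paper's effective-solid-angle method.

```python
import math

def self_absorption_factor(mu_per_cm, thickness_cm):
    """Average transmission (1 - exp(-mu*t)) / (mu*t) for photons emitted
    uniformly in a slab of thickness t with linear attenuation mu.
    Illustrative only; real detector geometries need solid-angle terms."""
    x = mu_per_cm * thickness_cm
    return 1.0 if x == 0 else (1 - math.exp(-x)) / x
```

For a thin or weakly absorbing sample (mu*t → 0) the factor tends to 1 (no correction needed), which is why self-absorption matters most for dense samples measured at low photon energies.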

7. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

Directory of Open Access Journals (Sweden)

You-xin Shen

Full Text Available A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: (1) Does this conventional sampling strategy limit the detection of seeds of woody species? (2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples of 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a large number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples of 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land and grassland. Only 15.7% of the woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in number of species vs. sampled area confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, but the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank when samples were exposed to a better field germination environment. A 4 m² minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size is also needed to obtain an increased sample area if the number of samples is to remain relatively low.
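The species-area power-law relationships mentioned above, S = c * A^z, are usually fitted by least squares in log-log space. A self-contained sketch (the data and names are illustrative, not from the study):

```python
import math

def fit_power_law(areas, species):
    """Least-squares fit of S = c * A^z via linear regression of
    log(S) on log(A); returns (c, z)."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(s) for s in species]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    z = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - z * mx)
    return c, z
```

From such a fit, the minimum area needed to reach a target species count follows by inverting the power law: A_min = (S_target / c)^(1/z), which is how a "4 m² minimum sample area" type of figure is derived.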

8. Design and relevant sample calculations for a neutral particle energy diagnostic based on time of flight

Energy Technology Data Exchange (ETDEWEB)

Cecconello, M

1999-05-01

Extrap T2 will be equipped with a neutral particle energy diagnostic based on the time-of-flight technique. In this report, the expected neutral fluxes for Extrap T2 are estimated and discussed in order to determine the feasibility and the limits of such a diagnostic. These estimates are based on a 1D model of the plasma. The input parameters of the model are the density and temperature radial profiles of electrons and ions and the density of neutrals at the edge and in the centre of the plasma. The atomic processes included in the model are charge exchange and electron-impact ionization. The results indicate that the plasma attenuation length varies from a/5 to a, a being the minor radius. Differential neutral fluxes, as well as the estimated power losses due to CX processes (2% of the input power), are in agreement with experimental results obtained in similar devices. The expected impurity influxes vary from 10^14 to 10^11 cm^-2 s^-1. The neutral particle detection and acquisition systems are discussed. The maximum detectable energy varies from 1 to 3 keV depending on the flight distance d. The time resolution is 0.5 ms. Output signals from the waveform recorder are foreseen in the range 0-200 mV. An 8-bit waveform recorder with 2 MHz sampling frequency and 100k samples of memory capacity is the minimum requirement for the acquisition system.
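The relation behind any time-of-flight energy diagnostic is E = m d² / (2 t²). A small converter, assuming hydrogen neutrals and rounded constants (not taken from the report):

```python
import math

M_H = 1.6726e-27   # proton mass, kg (hydrogen neutrals assumed)
EV = 1.602e-19     # J per eV

def energy_keV(distance_m, flight_time_s, mass_kg=M_H):
    """Neutral-particle kinetic energy from time of flight:
    E = 0.5 * m * (d / t)^2, returned in keV."""
    v = distance_m / flight_time_s
    return 0.5 * mass_kg * v * v / EV / 1e3
```

Since E scales as 1/t², the stated 1-3 keV maximum detectable energy corresponds to the shortest resolvable flight time for a given flight distance d; lengthening d raises the energy ceiling at fixed timing resolution.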

9. Effects of Sample Size and Full Sibs on Genetic Diversity Characterization: A Case Study of Three Syntopic Iberian Pond-Breeding Amphibians.

Science.gov (United States)

Sánchez-Montes, Gregorio; Ariño, Arturo H; Vizmanos, José L; Wang, Jinliang; Martínez-Solano, Íñigo

2017-07-01

Accurate characterization of genetic diversity is essential for understanding population demography, predicting future trends and implementing efficient conservation policies. For that purpose, molecular markers are routinely developed for nonmodel species, but key questions regarding sampling design, such as calculation of minimum sample sizes or the effect of relatives in the sample, are often neglected. We used accumulation curves and sibship analyses to explore how these 2 factors affect marker performance in the characterization of genetic diversity. We illustrate this approach with the analysis of an empirical dataset including newly optimized microsatellite sets for 3 Iberian amphibian species: Hyla molleri, Epidalea calamita, and Pelophylax perezi. We studied 17-21 populations per species (total n = 547, 652, and 516 individuals, respectively), including a reference locality in which the effect of sample size was explored using larger samples (77-96 individuals). As expected, FIS and tests for Hardy-Weinberg equilibrium and linkage disequilibrium were affected by the presence of full sibs, and most initially inferred disequilibria were no longer statistically significant when full siblings were removed from the sample. We estimated that to obtain reliable estimates, the minimum sample size (potentially including full sibs) was close to 20 for expected heterozygosity, and between 50 and 80 for allelic richness. Our pilot study based on a reference population provided a rigorous assessment of marker properties and the effects of sample size and presence of full sibs in the sample. These examples illustrate the advantages of this approach to produce robust and reliable results for downstream analyses. © The American Genetic Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
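The expected-heterozygosity estimates whose minimum sample sizes are discussed above are computed from allele frequencies; a minimal sketch with the usual small-sample correction (our naming):

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """Unbiased expected heterozygosity from a flat list of alleles
    sampled at one locus (two per diploid individual): He = 1 - sum(p_i^2),
    scaled by n/(n-1) as a small-sample correction."""
    n = len(alleles)  # number of gene copies
    freqs = [c / n for c in Counter(alleles).values()]
    he = 1 - sum(f * f for f in freqs)
    return n / (n - 1) * he
```

Because He depends only on squared frequencies, it stabilizes quickly with sample size, whereas allelic richness keeps growing as rare alleles are found, which matches the study's finding that ~20 individuals suffice for He but 50-80 are needed for allelic richness.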

10. Calculating the free energy of transfer of small solutes into a model lipid membrane: Comparison between metadynamics and umbrella sampling

Science.gov (United States)

Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo; Monticelli, Luca; Rossi, Giulia

2015-10-01

We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.

11. Calculating the free energy of transfer of small solutes into a model lipid membrane: Comparison between metadynamics and umbrella sampling

Energy Technology Data Exchange (ETDEWEB)

Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo; Rossi, Giulia, E-mail: giulia.rossi@gmail.com [Physics Department, University of Genoa and CNR-IMEM, Via Dodecaneso 33, 16146 Genoa (Italy); Monticelli, Luca [Bases Moléculaires et Structurales des Systèmes Infectieux (BMSSI), CNRS UMR 5086, 7 Passage du Vercors, 69007 Lyon (France)

2015-10-14

We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.

12. Is more research always needed? Estimating optimal sample sizes for trials of retention in care interventions for HIV-positive East Africans.

Science.gov (United States)

Uyei, Jennifer; Li, Lingfeng; Braithwaite, R Scott

2017-01-01

Given the serious health consequences of discontinuing antiretroviral therapy, randomised controlled trials of interventions to improve retention in care may be warranted. As funding for global HIV research is finite, it may be argued that choices about sample size should be tied to maximising health. For an East African setting, we calculated expected value of sample information and expected net benefit of sampling to identify the optimal sample size (greatest return on investment) and to quantify net health gains associated with research. Two hypothetical interventions were analysed: (1) one aimed at reducing disengagement from HIV care and (2) another aimed at finding/relinking disengaged patients. When the willingness to pay (WTP) threshold was within a plausible range (1-3 × GDP; US$1377-4130/QALY), the optimal sample size was zero for both interventions, meaning that no further research was recommended because the pre-research probability of an intervention's effectiveness and value was sufficient to support a decision on whether to adopt the intervention, and any new information gained from additional research would be unlikely to change that decision. In threshold analyses, at a higher WTP of $5200 the optimal sample size for testing a risk-reduction intervention was 2750 per arm. For the outreach intervention, the optimal sample size remained zero across a wide range of WTP thresholds and was insensitive to variation. Limitations, including not varying all inputs in the model, may have led to an underestimation of the value of investing in new research. In summary, more research is not always needed, particularly when there is moderately robust prestudy belief about intervention effectiveness and little uncertainty about the value (cost-effectiveness) of the intervention. Users can test their own assumptions at http://torchresearch.org.
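The decision rule the abstract describes, recommend a trial only when its expected net benefit of sampling (ENBS) is positive, can be sketched schematically. The function name, the two-arm cost model, and the precomputed per-person EVSI table are illustrative assumptions, not the authors' implementation:

```python
def optimal_sample_size(evsi_by_n, population, fixed_cost, cost_per_subject):
    # ENBS(n) = population-level EVSI minus the cost of a two-arm trial
    # with n subjects per arm. Returns 0 when no n gives positive ENBS,
    # i.e. "no further research is recommended".
    best_n, best_enbs = 0, 0.0
    for n, evsi_per_person in sorted(evsi_by_n.items()):
        enbs = evsi_per_person * population - (fixed_cost + 2 * n * cost_per_subject)
        if enbs > best_enbs:
            best_n, best_enbs = n, enbs
    return best_n
```

An optimal size of zero, as found here for plausible WTP thresholds, simply means the research cost exceeds the expected value of the information at every candidate sample size.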

13. Particle size distribution and chemical composition of total mixed rations for dairy cattle: water addition and feed sampling effects.

Science.gov (United States)

Arzola-Alvarez, C; Bocanegra-Viezca, J A; Murphy, M R; Salinas-Chavira, J; Corral-Luna, A; Romanos, A; Ruíz-Barrera, O; Rodríguez-Muela, C

2010-09-01

Four dairy farms were used to determine the effects of water addition to diets and sample collection location on the particle size distribution and chemical composition of total mixed rations (TMR). Samples were collected weekly from the mixing wagon and from 3 locations in the feed bunk (top, middle, and bottom) for 5 mo (April, May, July, August, and October). Samples were partially dried to determine the effect of moisture on particle size distribution. Particle size distribution was measured using the Penn State Particle Size Separator, which sorts particles into >19, 19 to 8, 8 to 1.18, and <1.18 mm fractions. Crude protein, neutral detergent fiber, and acid detergent fiber contents were also analyzed. The percentage of particles >19 mm was greater than recommended for TMR, according to the guidelines of Cooperative Extension of Pennsylvania State University. The particle size distribution in April differed from that in October, but intermediate months (May, July, and August) had similar particle size distributions. Samples from the bottom of the feed bunk had the highest percentage of particles retained on the 19-mm sieve. Samples from the top and middle of the feed bunk were similar to those from the mixing wagon. Higher percentages of particles were retained on the >19, 19 to 8, and 8 to 1.18 mm sieves for wet than for dried samples. The reverse was found for particles passing the 1.18-mm sieve. Mean particle size was higher for wet than for dried samples. The crude protein, neutral detergent fiber, and acid detergent fiber contents of TMR varied with month of sampling (18-21, 40-57, and 21-34%, respectively) but were within recommended ranges for high-yielding dairy cows. Analyses of TMR particle size distributions are useful for proper feed bunk management and formulation of diets that maintain rumen function and maximize milk production and quality. Water addition may help reduce dust associated with feeding TMR. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

14. Calculating and reporting effect sizes on scientific papers (1): p < 0.05 limitations in the analysis of mean differences of two groups

Directory of Open Access Journals (Sweden)

Helena Espirito Santo

2015-02-01

Since p-values from statistical tests do not indicate the magnitude or importance of a difference, effect sizes (ES) should be reported. In fact, ES give meaning to statistical tests; emphasize the power of statistical tests; reduce the risk of interpreting mere sampling variation as a real relationship; can increase the reporting of "non-significant" results; and allow the accumulation of knowledge from several studies using meta-analysis. Thus, the objectives of this paper are to present the limits of the significance level; describe the foundations of presenting ES for statistical tests that analyze differences between two groups; present the formulas to calculate ES directly, providing examples from our own previous studies; show how to calculate confidence intervals; provide the conversion formulas for literature reviews; indicate how to interpret ES; and show that, although interpretable, the qualitative label (small, medium, or large effect) for an arbitrary metric could be inaccurate, so interpretation should be made in the context of the research area and of real-world variables.
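As a hedged sketch of the kind of direct ES calculation the paper advocates for two-group mean differences, here is Cohen's d with a large-sample confidence interval; the variance formula is the standard large-sample approximation, not taken from this abstract:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    # Standardized mean difference between two independent groups,
    # using the pooled standard deviation.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def d_confidence_interval(d, n1, n2, z=1.96):
    # Approximate 95% CI from the large-sample standard error of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se
```

Reporting the interval alongside d, rather than only a p-value, conveys both the magnitude and the precision of the difference, which is the paper's central point.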

15. Bayesian adaptive determination of the sample size required to assure acceptably low adverse event risk.

Science.gov (United States)

Lawrence Gould, A; Zhang, Xiaohua Douglas

2014-03-15

An emerging concern with new therapeutic agents, especially treatments for type 2 diabetes, a prevalent condition that increases an individual's risk of heart attack or stroke, is the likelihood of adverse events, especially cardiovascular events, that the new agents may cause. These concerns have led to regulatory requirements for demonstrating that a new agent increases the risk of an adverse event relative to a control by no more than, say, 30% or 80% with high (e.g., 97.5%) confidence. We describe a Bayesian adaptive procedure for determining if the sample size for a development program needs to be increased and, if necessary, by how much, to provide the required assurance of limited risk. The decision is based on the predictive likelihood of a sufficiently high posterior probability that the relative risk is no more than a specified bound. Allowance can be made for between-center as well as within-center variability to accommodate large-scale developmental programs, and design alternatives (e.g., many small centers, few large centers) for obtaining additional data if needed can be explored. Binomial or Poisson likelihoods can be used, and center-level covariates can be accommodated. The predictive likelihoods are explored under various conditions to assess the statistical properties of the method. Copyright © 2013 John Wiley & Sons, Ltd.
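The core quantity in the procedure, the posterior probability that the relative risk stays below a specified bound, can be illustrated with a simple conjugate model. This sketch assumes independent Jeffreys Beta posteriors for the two event rates and plain Monte Carlo, which is far simpler than the hierarchical predictive procedure the paper describes:

```python
import random

def prob_relative_risk_below(events_t, n_t, events_c, n_c,
                             bound=1.3, draws=20000, seed=1):
    # Posterior P(p_t <= bound * p_c) under independent
    # Beta(0.5 + events, 0.5 + non-events) posteriors (Jeffreys priors).
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        p_t = rng.betavariate(0.5 + events_t, 0.5 + n_t - events_t)
        p_c = rng.betavariate(0.5 + events_c, 0.5 + n_c - events_c)
        if p_t <= bound * p_c:
            hits += 1
    return hits / draws
```

In the adaptive setting described above, one would compute the predictive distribution of this probability under candidate additional sample sizes and increase enrollment only if the chance of exceeding the required confidence level (e.g., 97.5%) is too low.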

16. Sampling design and required sample size for evaluating contamination levels of 137Cs in Japanese fir needles in a mixed deciduous forest stand in Fukushima, Japan.

Science.gov (United States)

2017-05-01

We estimated the sample size (the number of samples) required to evaluate the concentration of radiocesium (137Cs) in Japanese fir (Abies firma Sieb. & Zucc.), 5 years after the outbreak of the Fukushima Daiichi Nuclear Power Plant accident. We investigated the spatial structure of the contamination levels in this species growing in a mixed deciduous broadleaf and evergreen coniferous forest stand. We sampled 40 saplings with a tree height of 150-250 cm in a Fukushima forest community. The results showed that: (1) there was no correlation between the 137Cs concentration in needles and soil, and (2) the difference in the spatial distribution pattern of 137Cs concentration between needles and soil suggests that the contribution of root uptake to 137Cs in new needles of this species may have been minor in the 5 years after the radionuclides were released into the atmosphere. The concentration of 137Cs in needles showed a strong positive spatial autocorrelation in the distance class from 0 to 2.5 m, suggesting that statistical analysis of such data should account for spatial autocorrelation when assessing the radioactive contamination of forest trees. According to our sample size analysis, a sample size of seven trees was required to determine the mean contamination level within an error in the mean of no more than 10%. This required sample size should be feasible for most sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
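A minimal sketch of the standard normal-approximation formula for the number of samples needed to estimate a mean within a given relative error; the formula and the example coefficient of variation are illustrative assumptions, not values from the study (which also had to account for spatial autocorrelation):

```python
import math

def required_sample_size(cv, rel_error, z=1.96):
    # n such that the sample mean falls within +/- rel_error of the true
    # mean with ~95% confidence, given coefficient of variation cv
    # (normal approximation, independent samples).
    return math.ceil((z * cv / rel_error) ** 2)
```

For example, a between-tree CV of about 13% with a 10% allowable error gives n = 7, the same order as the seven trees reported above (the agreement is illustrative, not a reconstruction of the authors' calculation).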

17. Power and sample size for the S:T repeated measures design combined with a linear mixed-effects model allowing for missing data.

Science.gov (United States)

Tango, Toshiro

2017-02-13

Tango (Biostatistics 2016) proposed a new repeated measures design called the S:T repeated measures design, combined with generalized linear mixed-effects models and sample size calculations for a test of the average treatment effect that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. In this article, we present formulas for calculating power and sample sizes for a test of the average treatment effect allowing for missing data within the framework of the S:T repeated measures design with a continuous response variable combined with a linear mixed-effects model. Examples are provided to illustrate the use of these formulas.

18. Conditional and Unconditional Tests (and Sample Size) Based on Multiple Comparisons for Stratified 2 × 2 Tables

Directory of Open Access Journals (Sweden)

A. Martín Andrés

2015-01-01

The Mantel-Haenszel test is the most frequently used asymptotic test for analyzing stratified 2 × 2 tables. Its exact alternative is the test of Birch, which has recently been reconsidered by Jung. Both tests have a conditional origin: Pearson's chi-squared test and Fisher's exact test, respectively. Both tests also share the same drawback: the result of the global test (the stratified test) may not be compatible with the results of the individual tests (the test for each stratum). In this paper, we propose to carry out the global test using a multiple comparisons method (MC method), which does not have this disadvantage. By refining the method (MCB method), an alternative to the Mantel-Haenszel and Birch tests may be obtained. The new MC and MCB methods have the advantage that they may be applied from an unconditional view, a methodology which until now has not been applied to this problem. We also propose some sample size calculation methods.
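For reference, the asymptotic Mantel-Haenszel statistic discussed above can be computed directly; a minimal sketch of the standard textbook formula, without continuity correction:

```python
def mantel_haenszel_chi2(tables):
    # Mantel-Haenszel chi-squared statistic for stratified 2x2 tables.
    # Each table is (a, b, c, d): the four cell counts of one stratum,
    # with a the exposed-case cell.
    num = 0.0   # sum of (observed a - expected a)
    var = 0.0   # sum of hypergeometric variances of a
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a - (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    return num**2 / var  # compare against chi-squared with 1 df
```

Because the statistic pools the (observed minus expected) deviations before squaring, opposite-direction effects across strata can cancel, which is one aspect of the global-versus-per-stratum incompatibility the paper addresses.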

19. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

Energy Technology Data Exchange (ETDEWEB)

Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

2011-07-01

A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been used daily in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However, that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples, which can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus, this work aims to further investigate such uncertainties by calculating the dose rate distribution inside the items treated, such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds) have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

20. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

Energy Technology Data Exchange (ETDEWEB)

Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

2014-07-15

Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine correlation between organ dose and SSDE. Organ dose correlation factors (CF{sub SSDE}{sup organ}) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF{sub SSDE}{sup organ} were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg), and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF{sub SSDE}{sup organ} were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organ/tissue that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1–0.4) for both the chest and abdominopelvic regions. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF{sub SSDE}{sup organ}, was compared to

1. Calculating structural and geometrical parameters by laboratory experiments and X-Ray microtomography: a comparative study applied to a limestone sample

Science.gov (United States)

Luquot, L.; Hebert, V.; Rodriguez, O.

2015-11-01

The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, surface-to-volume ratio, pore size distribution, permeability, tortuosity and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. This strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory one. We observed that pore size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical and petrophysical parameters provides results similar to those measured in the laboratory but with much shorter durations.
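One of the computed parameters, total porosity, reduces to a voxel count once the XMT image has been segmented into pore and solid phases. A hedged sketch (the nested-list volume layout and the 1 = pore convention are illustrative assumptions, not the study's workflow):

```python
def total_porosity(binary_volume):
    # Total porosity of a segmented 3-D image: the fraction of voxels
    # labeled as pore (1) rather than solid (0).
    voxels = [v for slab in binary_volume for row in slab for v in row]
    return sum(voxels) / len(voxels)
```

The sensitivity to segmentation parameters mentioned in the abstract enters exactly here: the threshold that assigns each gray-level voxel to pore or solid directly shifts this voxel fraction.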

2. Calculating structural and geometrical parameters by laboratory measurements and X-ray microtomography: a comparative study applied to a limestone sample before and after a dissolution experiment

Science.gov (United States)

Luquot, Linda; Hebert, Vanessa; Rodriguez, Olivier

2016-03-01

The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, pore-size distribution, tortuosity, and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. This strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory measurement. We observed that pore-size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical, and petrophysical parameters provides results similar to those measured in the laboratory but with much shorter durations.

3. Synthesis, characterization, nano-sized binuclear nickel complexes, DFT calculations and antibacterial evaluation of new macrocyclic Schiff base compounds

Science.gov (United States)

2017-06-01

Some new macrocyclic tetradentate Schiff base ligands, bridged dianilines with an N4 coordination sphere, and their nickel(II) complexes with the general formula [Ni2LCl4], where L = (C20H14N2X)2 and X = SO2, O, or CH2, have been synthesized. The compounds have been characterized by FT-IR, 1H and 13C NMR, mass spectrometry, TGA, elemental analysis, molar conductivity, and magnetic moment techniques. Scanning electron microscopy (SEM) shows nano-sized structures under 100 nm for the nickel(II) complexes. NiO nanoparticles were obtained via the thermal decomposition method and analyzed by FT-IR, SEM, and X-ray powder diffraction, which indicates close accordance with the standard pattern of NiO nanoparticles. All the Schiff bases and their complexes were tested in vitro for antibacterial activity against two gram-negative and two gram-positive bacteria. The nickel(II) complexes were found to be more active than the free macrocyclic Schiff bases. In addition, computational studies of three ligands have been carried out at the DFT-B3LYP/6-31G+(d,p) level of theory on the spectroscopic properties, including IR, 1H NMR, and 13C NMR spectroscopy. The correlations between the theoretical and experimental vibrational frequencies, 1H NMR, and 13C NMR shifts of the ligands were 0.999, 0.930-0.973, and 0.917-0.995, respectively. The energy gap was also determined, and from the HOMO and LUMO energy values, chemical hardness-softness, electronegativity, and the electrophilicity index were calculated.

4. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

DEFF Research Database (Denmark)

Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

2008-01-01

The proportionator is a novel and radically different approach to sampling with microscopes based on well-known statistical theory (probability proportional to size - PPS sampling). It uses automatic image analysis, with a large range of options, to assign to every field of view in the section a ...
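The core idea of PPS sampling, deliberately biased size-proportional selection corrected by inverse-probability weighting so the final estimate is unbiased, can be sketched as follows. The function names and data layout are illustrative, not the proportionator's implementation:

```python
import random

def pps_sample(weights, k, seed=0):
    # Draw k field-of-view indices with replacement, each selected with
    # probability proportional to its "size" weight (e.g., an automatic
    # image-analysis score).
    rng = random.Random(seed)
    total = sum(weights)
    picks = []
    for _ in range(k):
        r = rng.uniform(0, total)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                picks.append(i)
                break
        else:  # guard against floating-point round-off at the upper edge
            picks.append(len(weights) - 1)
    return picks

def pps_estimate_total(values, weights, picks):
    # Unbiased (Hansen-Hurwitz style) estimate of the population total:
    # each counted value is inverse-weighted by its selection probability.
    total_w = sum(weights)
    return sum(values[i] * total_w / weights[i] for i in picks) / len(picks)
```

When the weights are well correlated with the counted quantity, each inverse-weighted term is nearly constant, so the estimator's variance collapses; this is why the approach gains efficiency even though the per-field weights themselves may be biased.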

5. Absolute binding free energy calculations of CBClip host-guest systems in the SAMPL5 blind challenge.

Science.gov (United States)

Lee, Juyong; Tofoleanu, Florentina; Pickard, Frank C; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R

2017-01-01

Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that either the simulations used for TI, the simulations used for BAR, or both, are not fully converged, and the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments.
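BAR itself requires a self-consistent solve over samples from both end states. As a hedged illustration of the same family of estimators, here is the simpler one-sided exponential-averaging (Zwanzig) free energy perturbation estimator that BAR improves upon; it is shown for context and is not the paper's method:

```python
import math

def fep_free_energy(delta_u):
    # One-sided Zwanzig estimate of Delta F (in kT units):
    #   Delta F = -ln < exp(-Delta U) >_0,
    # where delta_u holds samples of Delta U = U1 - U0 drawn in state 0.
    n = len(delta_u)
    return -math.log(sum(math.exp(-du) for du in delta_u) / n)
```

The estimator is dominated by rare low-energy tails of the Delta U distribution, which is precisely why two-sided methods like BAR, which also use samples from state 1, converge with far less overlap between the states.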

6. Field sampling of loose erodible material: A new system to consider the full particle-size spectrum

Science.gov (United States)

Klose, Martina; Gill, Thomas E.; Webb, Nicholas P.; Van Zee, Justin W.

2017-10-01

A new system is presented to sample and enable the characterization of loose erodible material (LEM) present on a soil surface, which may be susceptible to entrainment by wind. The system uses a modified MWAC (Modified Wilson and Cooke) sediment sampler connected to a corded hand-held vacuum cleaner. Performance and accuracy of the system were tested in the laboratory using five reference soil samples with different textures. Sampling was most effective for sandy soils, while decreases in effectiveness were found for soils with high silt and clay contents in dry dispersion. This decrease in effectiveness can be attributed to loose silt- and clay-sized particles and particle aggregates adhering to and clogging a filter attached to the MWAC outlet. Overall, the system was found to be effective in collecting sediment for most soil textures, and theoretical interpretation of the measured flow speeds suggests that LEM can be sampled across a wide range of particle sizes, including dust particles. Particle-size analysis revealed that the new system is able to accurately capture the particle-size distribution (PSD) of a given sample, with only small discrepancies in the cumulative PSDs after vacuuming for all test soils. Despite limitations of the system, it is an advance toward sampling the full particle-size spectrum of loose sediment available for entrainment, with the overall goal of better understanding the mechanisms of dust emission and their variability.

7. Grain size of loess and paleosol samples: what are we measuring?

Science.gov (United States)

Varga, György; Kovács, János; Szalai, Zoltán; Újvári, Gábor

2017-04-01

Particle size falling into a particularly narrow range is among the most important properties of windblown mineral dust deposits. Therefore, various aspects of aeolian sedimentation and post-depositional alterations can be reconstructed only from precise grain size data. The present study aims to (1) review grain size data obtained from different measurements, (2) discuss the major reasons for disagreements between data obtained by frequently applied particle sizing techniques, and (3) assess the importance of particle shape in particle sizing. Grain size data of terrestrial aeolian dust deposits (loess and paleosol) were determined by laser scattering instruments (Fritsch Analysette 22 Microtec Plus, Horiba Partica La-950 v2, and Malvern Mastersizer 3000 with a Hydro LV unit), while particle size and shape distributions were acquired by a Malvern Morphologi G3-ID. Laser scattering results reveal that the optical parameter settings of the measurements have significant effects on the grain size distributions, especially for the fine-grained fractions. The Morphologi G3-ID captures a two-dimensional projected image of each particle with its camera. However, this is only one of infinitely many possible projections of a three-dimensional object, and it cannot be regarded as a representative one. The third (height) dimension of the particles remains unknown, so volume-based weightings are fairly dubious in the case of platy particles. Support of the National Research, Development and Innovation Office (Hungary) under contract NKFI 120620 is gratefully acknowledged. The work was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.

8. Applying Individual Tree Structure From Lidar to Address the Sensitivity of Allometric Equations to Small Sample Sizes.

Science.gov (United States)

Duncanson, L.; Dubayah, R.

2015-12-01

Lidar remote sensing is widely applied for mapping forest carbon stocks, and technological advances have improved our ability to capture structural details from forests, even resolving individual trees. Despite these advancements, the accuracy of forest aboveground biomass models remains limited by the quality of field estimates of biomass. The accuracies of field estimates are inherently dependent on the accuracy of the allometric equations used to relate measurable attributes to biomass. These equations are calibrated with relatively small samples of often spatially clustered trees. This research focuses on one of many issues involving allometric equations - understanding how sensitive allometric parameters are to the sample sizes used to fit them. We capitalize on recent advances in lidar remote sensing to extract individual tree structural information from six high-resolution airborne lidar datasets in the United States. We remotely measure millions of tree heights and crown radii, and fit allometric equations to the relationship between tree height and radius at a 'population' level, in each site. We then extract samples from our tree database, and build allometries on these smaller samples of trees, with varying sample sizes. We show that for the allometric relationship between tree height and crown radius, small sample sizes produce biased allometric equations that overestimate height for a given crown radius. We extend this analysis using translations from the literature to address potential implications for biomass, showing that site-level biomass may be greatly overestimated when applying allometric equations developed with the typically small sample sizes used in popular allometric equations for biomass.
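The height-crown radius allometries described above are typically fit by least squares on log-transformed data, after which subsample experiments reveal the small-sample bias. A minimal sketch of the fitting step (the synthetic power-law usage is illustrative, not the authors' pipeline):

```python
import math

def fit_power_law(radii, heights):
    # Ordinary least squares on the log-log scale, i.e. fit
    # height = a * radius**b via log(height) = log(a) + b*log(radius).
    xs = [math.log(r) for r in radii]
    ys = [math.log(h) for h in heights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b
```

Repeating such fits on many random subsamples of a large lidar-derived tree database, as the study does, shows how the estimated coefficients drift and their spread grows as the subsample shrinks.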

9. Preliminary calculational analysis of the actinide samples from FP-4 exposed in the Dounreay Prototype Fast Reactor

Energy Technology Data Exchange (ETDEWEB)

Murphy, B.D.; Raman, S. [Oak Ridge National Lab., TN (United States); Newton, T.D. [AEA Technology, Winfrith (United Kingdom)

1996-12-01

This report discusses the current status of results from an extensive experiment on the irradiation of selected actinides in a fast reactor. These actinides ranged from thorium to curium. They were irradiated in the core of the Dounreay Prototype Fast Reactor. Rates for depletion, transmutation, and fission-product generation were experimentally measured, and, in turn, were calculated using current cross-section and fission-yield data. Much of the emphasis is on the comparison between experimental and calculated values for both actinide and fission-product concentrations. Some of the discussion touches on the adequacy of current cross-section and fission-yield data. However, the main purposes of the report are: to collect in one place the most recent yield data, to discuss the comparisons between the experimental and calculated results, to discuss each sample that was irradiated giving details of any adjustments needed or specific problems encountered, and to give a chronology of the analysis as it pertained to the set of samples (referred to as FP-4 samples) that constitutes the most extensively irradiated and final set. The results and trends reported here, together with those discussions touching on current knowledge about cross sections and fission yields, are intended to serve as a starting point for further analysis. In general, these results are encouraging with regard to the adequacy of much of the currently available nuclear data in this region of the periodic table. But there are some cases where adjustments and improvements can be suggested. However, the application of these results in consolidating current cross-section and fission-yield data must await further analysis.

10. Effects of the sample size of reference population on determining BMD reference curve and peak BMD and diagnosing osteoporosis.

Science.gov (United States)

Hou, Y-L; Liao, E-Y; Wu, X-P; Peng, Y-Q; Zhang, H; Dai, R-C; Luo, X-H; Cao, X-Z

2008-01-01

Establishing reference databases generally requires a large sample size to achieve reliable results. Our study revealed that varying the sample size from hundreds to thousands of individuals has no decisive effect on the bone mineral density (BMD) reference curve, peak BMD, or the diagnosis of osteoporosis. It provides a reference point for determining the sample size when establishing local BMD reference databases. This study attempts to determine a suitable sample size for establishing BMD reference databases in a local laboratory. The total reference population consisted of 3,662 Chinese females aged 6-85 years. BMDs were measured with a dual-energy X-ray absorptiometry densitometer. The subjects were randomly divided into four different sample groups, that is, total number (Tn) = 3,662, 1/2n = 1,831, 1/4n = 916, and 1/8n = 458. We used the best regression model to determine the BMD reference curve and peak BMD. There was no significant difference in the full curves between the four sample groups at each skeletal site, although some discrepancy at the end of the curves was observed at the spine. Peak BMDs were very similar in the four sample groups. According to the Chinese diagnostic criteria (BMD more than 25% below peak BMD indicates osteoporosis), no difference was observed in the osteoporosis detection rate using the reference values determined by the four different sample groups. Varying the sample size from hundreds to thousands has no decisive effect on establishing the BMD reference curve and determining peak BMD. This should be practical for determining the reference population when establishing local BMD databases.

11. A comparison of surface doses for very small field size x-ray beams: Monte Carlo calculations and radiochromic film measurements.

Science.gov (United States)

Morales, J E; Hill, R; Crowe, S B; Kairn, T; Trapp, J V

2014-06-01

Stereotactic radiosurgery treatments involve the delivery of very high doses in a small number of fractions. To date, there are limited data on skin dose for the very small field sizes used in these treatments. In this work, we determine relative surface doses for the small circular collimators used in stereotactic radiosurgery treatments. Monte Carlo calculations were performed using the BEAMnrc code with a model of the Novalis Trilogy linear accelerator and the BrainLab circular collimators. The surface doses were calculated at the ICRP skin dose depth of 70 μm, all using the 6 MV SRS x-ray beam. The calculated surface doses decreased from 15% to 12% as the field size increased from 4 to 30 mm. In comparison, surface doses were measured using Gafchromic EBT3 film positioned at the surface of a Virtual Water phantom. The absolute agreement between calculated and measured surface doses was better than 2.0%, which is well within the uncertainties of the Monte Carlo calculations and the film measurements. Based on these results, we have shown that Gafchromic EBT3 film is suitable for surface dose estimates in the very small fields used in SRS.

12. A solution for an inverse problem in liquid AFM: calculation of three-dimensional solvation structure on a sample surface

CERN Document Server

Amano, Ken-ich

2013-01-01

Recent frequency-modulated atomic force microscopy (FM-AFM) can measure the three-dimensional force distribution between a probe and a sample surface in liquid. The force distribution is, in the present circumstances, assumed to be the solvation structure on the sample surface, because the force distribution and the solvation structure have somewhat similar shapes. However, the force distribution is not exactly the solvation structure. To obtain the solvation structure with liquid AFM, a method for transforming the force distribution into the solvation structure is necessary. Therefore, in this letter, we briefly present such a transformation method. We call this method a solution for an inverse problem because, in the usual calculation process, the solvation structure is obtained first and the force distribution is obtained next. The method is formulated mainly by the statistical mechanics of liquids.

13. RISK-ASSESSMENT PROCEDURES AND ESTABLISHING THE SIZE OF SAMPLES FOR AUDITING FINANCIAL STATEMENTS

National Research Council Canada - National Science Library

Daniel Botez

2014-01-01

In auditing financial statements, the procedures for the assessment of risks and the calculation of materiality differ from one auditor to another, according to audit firm policy or the guidance of professional bodies...

14. Automated Gel Size Selection to Improve the Quality of Next-generation Sequencing Libraries Prepared from Environmental Water Samples.

Science.gov (United States)

Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick

2015-04-17

Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.

15. Uncertainty in nutrient loads from tile-drained landscapes: Effect of sampling frequency, calculation algorithm, and compositing strategy

Science.gov (United States)

Williams, Mark R.; King, Kevin W.; Macrae, Merrin L.; Ford, William; Van Esbroeck, Chris; Brunke, Richard I.; English, Michael C.; Schiff, Sherry L.

2015-11-01

16. The Quantitative LOD Score: Test Statistic and Sample Size for Exclusion and Linkage of Quantitative Traits in Human Sibships

OpenAIRE

Page, Grier P.; Amos, Christopher I.; Boerwinkle, Eric

1998-01-01

We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, ...

17. Path-integral Mayer-sampling calculations of the quantum Boltzmann contribution to virial coefficients of helium-4.

Science.gov (United States)

Shaul, Katherine R S; Schultz, Andrew J; Kofke, David A

2012-11-14

We present Mayer-sampling Monte Carlo calculations of the quantum Boltzmann contribution to the virial coefficients B(n), as defined by path integrals, for n = 2 to 4 and for temperatures from 2.6 K to 1000 K, using state-of-the-art ab initio potentials for interactions within pairs and triplets of helium-4 atoms. Effects of exchange are not included. The vapor-liquid critical temperature of the resulting fourth-order virial equation of state is 5.033(16) K, a value only 3% less than the critical temperature of helium-4: 5.19 K. We describe an approach for parsing the Boltzmann contribution into components that reduce the number of Mayer-sampling Monte Carlo steps required for components with large per-step time requirements. We estimate that in this manner the calculation of the Boltzmann contribution to B(3) at 2.6 K is completed at least 100 times faster than the previously reported approach.

18. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

Science.gov (United States)

Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

2017-10-01

Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first one is based on powder metallurgy using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable to generate significant grain size contrast and to control this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

19. Uranium and radium activities measurements and calculation of effective doses in some drinking water samples in Morocco

Directory of Open Access Journals (Sweden)

Oum Keltoum Hakam

2015-09-01

Purpose: As a means of prevention, we measured the activities of uranium and radium isotopes (234U, 238U, 226Ra, 228Ra) in 30 drinking water samples collected from 11 wells, 9 springs (6 hot and 3 cold), 3 commercialised mineral waters, and 7 tap water samples. Methods: Activities of the Ra isotopes were measured by ultra-gamma spectrometry using a low-background, high-efficiency well-type germanium detector. The U isotopes were counted in an alpha spectrometer. Results: The measured uranium and radium activities are similar to those published for other non-polluted regions of the world. Except in one commercialised gaseous water sample and in two hot spring water samples, the calculated effective doses during one year are below the reference level of 0.1 mSv/year recommended by the International Commission on Radiological Protection. Conclusion: These activities do not present any risk for public health in Morocco. The sparkling water of Oulmes is only occasionally consumed as table water, and the waters of warm springs are not used as main sources of drinking water.
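
The dose arithmetic summarized in this abstract (committed effective dose = annual water intake × Σ activity concentration × ingestion dose coefficient, compared against the 0.1 mSv/year ICRP reference level) can be sketched as follows. This is a minimal illustration, not the authors' code: the activity concentrations are invented, and the dose coefficients are the ICRP 72 adult ingestion values, which should be verified against the source before use.

```python
# Sketch of a committed-effective-dose estimate for drinking water.
# DOSE_COEFF: ICRP 72 adult ingestion dose coefficients in Sv/Bq (assumed values).
DOSE_COEFF = {
    'U-238': 4.5e-8,
    'U-234': 4.9e-8,
    'Ra-226': 2.8e-7,
    'Ra-228': 6.9e-7,
}
ANNUAL_INTAKE_L = 730  # ~2 L of drinking water per day

def annual_dose(activities_bq_per_l):
    """Committed effective dose (Sv/year) from the measured activity
    concentrations (Bq/L) of each radionuclide."""
    return ANNUAL_INTAKE_L * sum(
        a * DOSE_COEFF[nuc] for nuc, a in activities_bq_per_l.items())

# hypothetical well-water sample (illustrative activities, not the paper's data)
dose = annual_dose({'U-238': 0.05, 'U-234': 0.06, 'Ra-226': 0.02, 'Ra-228': 0.01})
below_reference = dose < 1e-4  # ICRP reference level of 0.1 mSv/year
```

For these invented activities the dose is about 0.013 mSv/year, well below the reference level, which is the kind of comparison the abstract reports for most samples.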

20. Single-case effect size calculation: comparing regression and non-parametric approaches across previously published reading intervention data sets.

Science.gov (United States)

Ross, Sarah G; Begeny, John C

2014-08-01

Growing from demands for accountability and research-based practice in the field of education, there is recent focus on developing standards for the implementation and analysis of single-case designs. Effect size methods for single-case designs provide a useful way to discuss treatment magnitude in the context of individual intervention. Although a standard effect size methodology does not yet exist within single-case research, panel experts recently recommended pairing regression and non-parametric approaches when analyzing effect size data. This study compared two single-case effect size methods: the regression-based, Allison-MT method and the newer, non-parametric, Tau-U method. Using previously published research that measured the Words read Correct per Minute (WCPM) variable, these two methods were examined by comparing differences in overall effect size scores and rankings of intervention effect. Results indicated that the regression method produced significantly larger effect sizes than the non-parametric method, but the rankings of the effect size scores had a strong, positive relation. Implications of these findings for research and practice are discussed. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
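
For readers unfamiliar with the non-parametric approach compared here, the basic A-versus-B Tau (the simplest member of the Tau-U family) counts improving versus deteriorating pairs across all baseline/treatment comparisons. A minimal sketch with invented WCPM scores; note that the full Tau-U statistic used in the study additionally corrects for baseline trend, which this sketch omits.

```python
def tau_ab(baseline, treatment):
    """Basic A-vs-B Tau: signed proportion of improving pairs across all
    baseline/treatment comparisons (no baseline-trend correction)."""
    pos = sum(t > b for b in baseline for t in treatment)
    neg = sum(t < b for b in baseline for t in treatment)
    return (pos - neg) / (len(baseline) * len(treatment))

# invented WCPM scores for one student
no_overlap = tau_ab([20, 22, 21, 23], [30, 33, 35, 38])  # -> 1.0
full_overlap = tau_ab([20, 25], [22, 24])                # -> 0.0
```

A Tau of 1.0 means every treatment observation exceeds every baseline observation; values near 0 indicate complete overlap and hence no detectable effect.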

1. A Size Exclusion HPLC Method for Evaluating the Individual Impacts of Sugars and Organic Acids on Beverage Global Taste by Means of Calculated Dose-Over-Threshold Values

Directory of Open Access Journals (Sweden)

Luís G. Dias

2014-09-01

In this work, the main organic acids (citric, malic and ascorbic acids) and sugars (glucose, fructose and sucrose) present in commercial fruit beverages (fruit carbonated soft-drinks, fruit nectars and fruit juices) were determined. A novel size-exclusion high performance liquid chromatography isocratic green method, with ultraviolet and refractive index detectors coupled in series, was developed. This methodology enabled the simultaneous quantification of sugars and organic acids without any sample pre-treatment, even when peak interferences occurred. The method was in-house validated, showing good linearity (R > 0.999), adequate detection and quantification limits (20 and 280 mg L−1, respectively), satisfactory instrumental and method precision (relative standard deviations lower than 6%) and acceptable method accuracy (relative error lower than 5%). Sugars and organic acids profiles were used to calculate dose-over-threshold values, aiming to evaluate their individual sensory impact on beverage global taste perception. The results demonstrated that sucrose, fructose, ascorbic acid, citric acid and malic acid have the greatest individual sensory impact on the overall taste of a specific beverage. Furthermore, although organic acids were present in lower concentrations than sugars, their taste influence was significant and, in some cases, higher than the sugars’ contribution towards the global sensory perception.

2. Diffuse myocardial fibrosis evaluation using cardiac magnetic resonance T1 mapping: sample size considerations for clinical trials

Directory of Open Access Journals (Sweden)

Liu Songtao

2012-12-01

Background: Cardiac magnetic resonance (CMR) T1 mapping has been used to characterize diffuse myocardial fibrosis. The aim of this study is to determine the reproducibility and sample size of CMR fibrosis measurements that would be applicable in clinical trials. Methods: A modified Look-Locker with inversion recovery (MOLLI) sequence was used to determine myocardial T1 values pre-, and 12 and 25 min post-administration of a gadolinium-based contrast agent at 3 Tesla. For 24 healthy subjects (8 men; 29 ± 6 years), two separate scans were obtained, (a) with a bolus of 0.15 mmol/kg of gadopentate dimeglumine and (b) with 0.1 mmol/kg of gadobenate dimeglumine, respectively, with an average of 51 ± 34 days between the two scans. Separately, 25 heart failure subjects (12 men; 63 ± 14 years) were evaluated after a bolus of 0.15 mmol/kg of gadopentate dimeglumine. The myocardial partition coefficient (λ) was calculated as ΔR1myocardium/ΔR1blood, and ECV was derived from λ by multiplying by (1 − hematocrit). Results: Mean ECV and λ were both significantly higher in HF subjects than in healthy subjects (ECV: 0.287 ± 0.034 vs. 0.267 ± 0.028, p = 0.002; λ: 0.481 ± 0.052 vs. 0.442 ± 0.037, p …). Conclusion: ECV and λ quantification have a low variability across scans and could be a viable tool for evaluating clinical trial outcomes.
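
The two quantities compared in this abstract follow directly from the pre- and post-contrast T1 values: λ = ΔR1_myocardium/ΔR1_blood with R1 = 1/T1, and ECV = λ(1 − hematocrit). A minimal sketch with invented T1 values, not the study's data:

```python
def ecv_from_t1(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
    """Partition coefficient lambda = dR1_myocardium / dR1_blood (R1 = 1/T1),
    and ECV = lambda * (1 - hematocrit). T1 values in seconds."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    lam = d_r1_myo / d_r1_blood
    return lam, lam * (1.0 - hematocrit)

# hypothetical pre/post-contrast T1 values (s) and hematocrit
lam, ecv = ecv_from_t1(1.2, 0.6, 1.7, 0.35, 0.42)
```

For these invented inputs, λ ≈ 0.367 and ECV ≈ 0.213, in the same range as the healthy-subject values reported above.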

3. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

Science.gov (United States)

NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

2017-08-01

Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

4. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

Science.gov (United States)

Selbig, William R.; Bannerman, Roger T.

2011-01-01

The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size of nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles less than 32 μm in size. Distributions of particles up to 500 μm in size were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent in a fixed-point sampler.

5. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

Directory of Open Access Journals (Sweden)

Simon Boitard

2016-03-01

Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.

6. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

DEFF Research Database (Denmark)

Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

2013-01-01

SUMMARY Disease cases are often clustered within herds or generally groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average meas...... subsp. paratuberculosis infection, in Danish dairy cattle and a study on critical control points for Salmonella cross-contamination of pork, in Greek slaughterhouses....

7. Sample size calculations for a split-cluster, beta-binomial design in the assessment of toxicity

NARCIS (Netherlands)

Hendriks, J.C.M.; Teerenstra, S.; Punt-Van der Zalm, J.P.; Wetzels, A.M.M.; Westphal, J.R.; Borm, G.F.

2005-01-01

Mouse embryo assays are recommended to test materials used for in vitro fertilization for toxicity. In such assays, a number of embryos is divided in a control group, which is exposed to a neutral medium, and a test group, which is exposed to a potentially toxic medium. Inferences on toxicity are

8. Validation of fixed sample size plans for monitoring lepidopteran pests of Brassica oleracea crops in North Korea.

Science.gov (United States)

Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J

2009-06-01

The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different ETs for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in the classification certainty with increasing sample size could be seen through the increasing steepness of operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
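
The first analysis in this abstract, resampling fixed-size plans and asking how often the achieved precision D = SE/mean falls at or below 0.3, can be sketched as follows. The clumped per-plant counts are simulated here (the study resampled real field counts), so the exact fractions are illustrative; the qualitative result, that larger plans meet the precision target more often, is what the sketch demonstrates.

```python
import math
import random
import statistics

random.seed(42)
# hypothetical clumped pest counts on 500 plants (illustrative only)
field = [int(random.expovariate(0.5)) for _ in range(500)]

def frac_meeting_precision(n, reps=2000, target=0.3):
    """Fraction of resampled plans of size n whose achieved D = SE/mean <= target."""
    hits = 0
    for _ in range(reps):
        plants = random.sample(field, n)
        m = statistics.mean(plants)
        if m == 0:
            continue  # D is undefined when no insects are found
        d = (statistics.stdev(plants) / math.sqrt(n)) / m
        hits += d <= target
    return hits / reps

f10 = frac_meeting_precision(10)  # 10-plant plan
f45 = frac_meeting_precision(45)  # 45-plant plan
```

On this synthetic field, the 45-plant plan meets D ≤ 0.3 far more often than the 10-plant plan, mirroring the paper's finding that larger fixed sample sizes were needed to reach the precision target reliably.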

9. A simple method to generate equal-sized homogenous strata or clusters for population-based sampling.

Science.gov (United States)

Elliott, Michael R

2011-04-01

Statistical efficiency and cost efficiency can be achieved in population-based samples through stratification and/or clustering. Strata typically combine subgroups of the population that are similar with respect to an outcome. Clusters are often taken from preexisting units, but may be formed to minimize between-cluster variance, or to equalize exposure to a treatment or risk factor. Area probability sample design procedures for the National Children's Study required contiguous strata and clusters that maximized within-stratum and within-cluster homogeneity while maintaining approximately equal size of the strata or clusters. However, there were few methods that allowed such strata or clusters to be constructed under these contiguity and equal size constraints. A search algorithm generates equal-size cluster sets that approximately span the space of all possible clusters of equal size. An optimal cluster set is chosen based on analysis of variance and convexity criteria. The proposed algorithm is used to construct 10 strata based on demographics and air pollution measures in Kent County, MI, following census tract boundaries. A brief simulation study is also conducted. The proposed algorithm is effective at uncovering underlying clusters from noisy data. It can be used in multi-stage sampling where equal-size strata or clusters are desired. Copyright © 2011 Elsevier Inc. All rights reserved.

10. Effects of sample size on differential gene expression, rank order and prediction accuracy of a gene signature.

Directory of Open Access Journals (Sweden)

Cynthia Stretch

Top differentially expressed gene lists are often inconsistent between studies, and it has been suggested that small sample sizes contribute to lack of reproducibility and poor prediction accuracy in discriminative models. We considered sex differences (69 ♂, 65 ♀) in 134 human skeletal muscle biopsies using DNA microarray. The full dataset and subsamples thereof (from n = 10 (5 ♂, 5 ♀) to n = 120 (60 ♂, 60 ♀)) were used to assess the effect of sample size on the differential expression of single genes, gene rank order and prediction accuracy. Using our full dataset (n = 134), we identified 717 differentially expressed transcripts (p < 0.0001) and we were able to predict sex with ~90% accuracy, both within our dataset and on external datasets. Both p-values and rank order of top differentially expressed genes became more variable using smaller subsamples. For example, at n = 10 (5 ♂, 5 ♀), no gene was considered differentially expressed at p < 0.0001 and prediction accuracy was ~50% (no better than chance). We found that sample size clearly affects microarray analysis results; small sample sizes result in unstable gene lists and poor prediction accuracy. We anticipate this will apply to other phenotypes, in addition to sex.
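
The subsampling experiment described here can be mimicked on synthetic data: plant a known set of truly differential genes, rank genes by a Welch-type t statistic at different per-group sample sizes, and count how many of the true genes survive in the top of the list. Everything below is invented for illustration (200 genes, 20 true positives); the point is the qualitative one from the abstract, that small subsamples yield unstable top-gene lists.

```python
import random
import statistics

random.seed(0)
N_GENES, N_PER_GROUP, N_TRUE = 200, 60, 20

# synthetic expression data: the first N_TRUE genes are shifted in group B
data = {}
for g in range(N_GENES):
    shift = 1.0 if g < N_TRUE else 0.0
    a = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    b = [random.gauss(shift, 1.0) for _ in range(N_PER_GROUP)]
    data[g] = (a, b)

def t_stat(x, y):
    """Welch-type t statistic (absolute value)."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    return abs(statistics.mean(x) - statistics.mean(y)) / (vx / len(x) + vy / len(y)) ** 0.5

def top_genes(n_sub, k=20):
    """Rank genes by |t| computed on a random subsample of n_sub per group."""
    scores = {g: t_stat(random.sample(a, n_sub), random.sample(b, n_sub))
              for g, (a, b) in data.items()}
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

truth = set(range(N_TRUE))
recovered_small = len(top_genes(5) & truth)   # n = 5 per group
recovered_full = len(top_genes(60) & truth)   # full data
```

With the full per-group sample the top-20 list recovers nearly all planted genes, while the n = 5 subsample misses most of them, the same instability the study reports for its smallest subsamples.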

11. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

Science.gov (United States)

Hans T. Schreuder; Jin-Mann S. Lin; John Teply

2000-01-01

The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

12. Analytical solutions to sampling effects in drop size distribution measurements during stationary rainfall: Estimation of bulk rainfall variables

NARCIS (Netherlands)

Uijlenhoet, R.; Porrà, J.M.; Sempere Torres, D.; Creutin, J.D.

2006-01-01

A stochastic model of the microstructure of rainfall is used to derive explicit expressions for the magnitude of the sampling fluctuations in rainfall properties estimated from raindrop size measurements in stationary rainfall. The model is a marked point process, in which the points represent the

13. Survey Research: Determining Sample Size and Representative Response. and The Effects of Computer Use on Keyboarding Technique and Skill.

Science.gov (United States)

Wunsch, Daniel R.; Gades, Robert E.

1986-01-01

Two articles are presented. The first reviews and suggests procedures for determining appropriate sample sizes and for determining the response representativeness in survey research. The second presents a study designed to determine the effects of computer use on keyboarding technique and skill. (CT)

14. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

Science.gov (United States)

Algina, James; Keselman, H. J.

2008-01-01

Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

15. RISK-ASSESSMENT PROCEDURES AND ESTABLISHING THE SIZE OF SAMPLES FOR AUDITING FINANCIAL STATEMENTS

Directory of Open Access Journals (Sweden)

Daniel Botez

2014-12-01

In auditing financial statements, the procedures for the assessment of risks and the calculation of materiality differ from one auditor to another, according to audit firm policy or the guidance of professional bodies. All, however, refer to the International Standards on Auditing: ISA 315, “Identifying and assessing the risks of material misstatement through understanding the entity and its environment”, and ISA 320, “Materiality in planning and performing an audit”. Drawing on the specific practices of auditors in Romania, the article presents worked examples of these aspects: evaluation of general inherent risk, of a specific inherent risk, of control risk, and the calculation of materiality.

16. Bayesian adaptive approach to estimating sample sizes for seizures of illicit drugs.

Science.gov (United States)

Moroni, Rossana; Aalberg, Laura; Reinikainen, Tapani; Corander, Jukka

2012-01-01

A considerable amount of discussion can be found in the forensics literature about using statistical sampling to select an appropriate subset of units for chemical analysis from a police seizure suspected to contain illicit material. Use of the Bayesian paradigm has been suggested as the most suitable statistical approach to deciding how large a sample needs to be to serve legally and practically acceptable purposes. Here, we introduce a hypergeometric sampling model combined with a specific prior distribution for the homogeneity of the seizure, in which a parameter for the analyst's expectation of homogeneity (α) is included. Our results show how an adaptive approach to sampling can minimize the practical effort needed in the laboratory analyses, as the model allows the scientist to decide sequentially how to proceed, while maintaining a sufficiently high confidence in the conclusions. © 2011 American Academy of Forensic Sciences.
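
The flavour of such a model can be sketched as follows: a beta-binomial prior on the number K of illicit units in a seizure of N (with α > β encoding the analyst's expectation of homogeneity), updated by the hypergeometric likelihood of having drawn n units that all tested positive. The prior here is a generic beta-binomial stand-in, not necessarily the specific prior of the paper, and all numbers are invented.

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def posterior_at_least(N, n, k_min, alpha=3.0, beta=1.0):
    """P(K >= k_min | the first n sampled units all tested positive),
    with a beta-binomial(alpha, beta) prior on K out of N units."""
    post = []
    for k in range(N + 1):
        # beta-binomial prior mass at K = k
        prior = comb(N, k) * exp(log_beta(k + alpha, N - k + beta) - log_beta(alpha, beta))
        # hypergeometric probability that all n sampled units fall among the k positives
        like = comb(k, n) / comb(N, n) if k >= n else 0.0
        post.append(prior * like)
    total = sum(post)
    return sum(post[k_min:]) / total

# seizure of 100 units; how sure are we that at least 90 are illicit?
p5 = posterior_at_least(100, 5, 90)    # after 5 positive analyses
p20 = posterior_at_least(100, 20, 90)  # after 20 positive analyses
```

Confidence that at least 90% of the seizure is illicit grows as more sampled units test positive, which is the sequential stopping logic the abstract describes: the analyst can stop as soon as the posterior clears the required threshold.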

17. Measuring proteins with greater speed and resolution while reducing sample size

OpenAIRE

Hsieh, Vincent H.; Wyatt, Philip J.

2017-01-01

A multi-angle light scattering (MALS) system, combined with chromatographic separation, directly measures the absolute molar mass, size and concentration of the eluate species. The measurement of these crucial properties in solution is essential in basic macromolecular characterization and all research and production stages of bio-therapeutic products. We developed a new MALS methodology that has overcome the long-standing, stubborn barrier to microliter-scale peak volumes and achieved the hi...

18. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

Science.gov (United States)

Ellison, Laura E.; Lukacs, Paul M.

2014-01-01

Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately detect 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution before such a mark-recapture effort is initiated, given the difficulty of attaining reliable estimates. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
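
The CI-overlap power analysis described above can be sketched with a far simpler model than the Burnham joint live-dead model: annual survival as a binomial outcome with Wald confidence intervals. Function name and all parameter values below are illustrative assumptions, not the authors' simulation.

```python
import random
from math import sqrt

def ci_overlap_power(n_marked, s1, s2, n_sims=1000, z=1.96, seed=1):
    """Fraction of simulated studies in which the 95% Wald CIs for two annual
    survival estimates fail to overlap (a crude 'decline detected' criterion).
    Binomial survival of n_marked individuals per year -- a toy stand-in for
    the joint live-dead model used in the paper."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sims):
        # empirical survival proportions for the two years
        p1 = sum(rng.random() < s1 for _ in range(n_marked)) / n_marked
        p2 = sum(rng.random() < s2 for _ in range(n_marked)) / n_marked
        half1 = z * sqrt(p1 * (1 - p1) / n_marked)
        half2 = z * sqrt(p2 * (1 - p2) / n_marked)
        if p1 - half1 > p2 + half2:  # year-1 CI lies entirely above year-2 CI
            detected += 1
    return detected / n_sims
```

With 500 marked individuals per year, a 50% drop in survival (0.8 to 0.4) is detected in nearly every simulation, while a 2.5% drop is rarely detected, which is the qualitative pattern behind the very large sample sizes quoted above.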

19. Sample Size Effect of Magnetomechanical Response for Magnetic Elastomers by Using Permanent Magnets

Directory of Open Access Journals (Sweden)

Tsubasa Oguro

2017-01-01

Full Text Available The size effect of the magnetomechanical response of chemically cross-linked, disk-shaped magnetic elastomers placed on a permanent magnet has been investigated by unidirectional compression tests. A cylindrical permanent magnet, 35 mm in diameter and 15 mm in height, was used to create the magnetic field; the field strength was approximately 420 mT at the center of the upper surface of the magnet. The diameter of the magnetoelastic polymer disks was varied from 14 mm to 35 mm, whereas the height was kept constant (5 mm in the undeformed state). We studied the influence of the disk diameter on the stress-strain behavior of the magnetic elastomers in the presence and absence of the magnetic field. The smallest magnetic elastomer, 14 mm in diameter, did not exhibit a measurable magnetomechanical response to the magnetic field. By contrast, the magnetic elastomers with diameters larger than 30 mm contracted in the direction parallel to the mechanical stress and elongated markedly in the perpendicular direction. An explanation is put forward to interpret this size-dependent behavior by taking into account the nonuniform distribution of the magnetic field produced by the permanent magnet.

20. Use of High-Frequency In-Home Monitoring Data May Reduce Sample Sizes Needed in Clinical Trials.

Directory of Open Access Journals (Sweden)

Hiroko H Dodge

Full Text Available Trials in Alzheimer's disease are increasingly focusing on prevention in asymptomatic individuals. This poses a challenge in examining treatment effects since currently available approaches are often unable to detect cognitive and functional changes among asymptomatic individuals. Resultant small effect sizes require large sample sizes using biomarkers or secondary measures for randomized controlled trials (RCTs). Better assessment approaches and outcomes capable of capturing subtle changes during asymptomatic disease stages are needed. We aimed to develop a new approach to track changes in functional outcomes by using individual-specific distributions (as opposed to group norms) of unobtrusive, continuously monitored in-home data. Our objective was to compare sample sizes required to achieve sufficient power to detect prevention trial effects in trajectories of outcomes in two scenarios: (1) annually assessed neuropsychological test scores (a conventional approach), and (2) the likelihood of having subject-specific low performance thresholds, both modeled as a function of time. One hundred nineteen cognitively intact subjects were enrolled and followed over 3 years in the Intelligent Systems for Assessing Aging Change (ISAAC) study. Using the difference in empirically identified time slopes between those who remained cognitively intact during follow-up (normal control, NC) and those who transitioned to mild cognitive impairment (MCI), we estimated comparative sample sizes required to achieve up to 80% statistical power over a range of effect sizes for detecting reductions in the difference in time slopes between NC and MCI incidence before transition. Sample size estimates indicated approximately 2000 subjects with a follow-up duration of 4 years would be needed to achieve a 30% effect size when the outcome is an annually assessed memory test score. When the outcome is likelihood of low walking speed defined using the individual-specific distributions of

1. Optically stimulated luminescence dating as a tool for calculating sedimentation rates in Chinese loess: comparisons with grain-size records

DEFF Research Database (Denmark)

Stevens, Thomas; Lu, HY

2009-01-01

over the late Pleistocene and Holocene. The results demonstrate that sedimentation rates are site specific, extremely variable over millennial timescales and that this variation is often not reflected in grain-size changes. In the central part of the Loess Plateau, the relationship between grain...

2. Thermoelectric properties of nanocrystalline Sb2Te3 thin films: experimental evaluation and first-principles calculation, addressing effect of crystal grain size

Science.gov (United States)

Morikawa, Satoshi; Inamoto, Takuya; Takashiri, Masayuki

2018-02-01

The effect of crystal grain size on the thermoelectric properties of nanocrystalline antimony telluride (Sb2Te3) thin films was investigated by experiment and by first-principles calculations using a relaxation time approximation developed for this purpose. The Sb2Te3 thin films were deposited on glass substrates using radio-frequency magnetron sputtering. To change the crystal grain size of the thin films, thermal annealing was performed at different temperatures. The crystal grain size, lattice parameter, and crystal orientation of the thin films were estimated from XRD patterns. The carrier concentration and in-plane thermoelectric properties of the thin films were measured at room temperature. A theoretical analysis was performed using a first-principles study based on density functional theory. The electronic band structures of Sb2Te3 were calculated using different lattice parameters, and the thermoelectric properties were predicted based on the semi-classical Boltzmann transport equation in the relaxation time approximation. In particular, we introduced the effect of carrier scattering at the grain boundaries into the relaxation time approximation by estimating the group velocities from the electronic band structures. Finally, the experimentally measured thermoelectric properties were compared with those obtained by calculation and were found to be in good agreement. We therefore conclude that introducing the effect of carrier scattering at grain boundaries into the relaxation time approximation helps enhance the accuracy of first-principles calculations for nanocrystalline materials.

3. Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study.

Science.gov (United States)

Teare, M Dawn; Dimairo, Munyaradzi; Shephard, Neil; Hayman, Alex; Whitehead, Amy; Walters, Stephen J

2014-07-03

External pilot or feasibility studies can be used to estimate key unknown parameters to inform the design of the definitive randomised controlled trial (RCT). However, there is little consensus on how large pilot studies need to be, and some suggest inflating estimates to adjust for the lack of precision when planning the definitive RCT. We use a simulation approach to illustrate the sampling distribution of the standard deviation for continuous outcomes and the event rate for binary outcomes. We present the impact of increasing the pilot sample size on the precision and bias of these estimates, and predicted power under three realistic scenarios. We also illustrate the consequences of using a confidence interval argument to inflate estimates so the required power is achieved with a pre-specified level of confidence. We limit our attention to external pilot and feasibility studies prior to a two-parallel-balanced-group superiority RCT. For normally distributed outcomes, the relative gain in precision of the pooled standard deviation (SDp) is less than 10% (for each five subjects added per group) once the total sample size is 70. For true proportions between 0.1 and 0.5, we find the gain in precision for each five subjects added to the pilot sample is less than 5% once the sample size is 60. Adjusting the required sample sizes for the imprecision in the pilot study estimates can result in excessively large definitive RCTs and also requires a pilot sample size of 60 to 90 for the true effect sizes considered here. We recommend that an external pilot study has at least 70 measured subjects (35 per group) when estimating the SDp for a continuous outcome. If the event rate in an intervention group needs to be estimated by the pilot then a total of 60 to 100 subjects is required. Hence if the primary outcome is binary a total of at least 120 subjects (60 in each group) may be required in the pilot trial. It is very much more efficient to use a larger pilot study, than to
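
The simulation underlying the SDp recommendation can be reproduced in miniature: draw repeated pilot studies of a given total size and measure how variable the pooled-SD estimate is. This is a sketch under Gaussian assumptions; the function name and simulation settings are illustrative, not the authors' code.

```python
import random
import statistics

def sd_precision(total_n, n_sims=2000, true_sd=1.0, seed=7):
    """Empirical standard deviation (imprecision) of the pooled-SD estimate
    (SDp) from a two-arm pilot with total_n subjects split evenly; outcomes
    are assumed Gaussian with a common true SD."""
    rng = random.Random(seed)
    half = total_n // 2
    estimates = []
    for _ in range(n_sims):
        g1 = [rng.gauss(0.0, true_sd) for _ in range(half)]
        g2 = [rng.gauss(0.0, true_sd) for _ in range(half)]
        pooled_var = ((half - 1) * statistics.variance(g1)
                      + (half - 1) * statistics.variance(g2)) / (total_n - 2)
        estimates.append(pooled_var ** 0.5)
    return statistics.stdev(estimates)
```

The imprecision shrinks roughly as 1/sqrt(2(n-2)), so the relative gain from each extra batch of subjects flattens out around a total of 70, matching the recommendation above.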

4. A convenient method and numerical tables for sample size determination in longitudinal-experimental research using multilevel models.

Science.gov (United States)

Usami, Satoshi

2014-12-01

Recent years have shown increased awareness of the importance of sample size determination in experimental research. Yet effective and convenient methods for sample size determination, especially in longitudinal experimental design, are still under development, and application of power analysis in applied research remains limited. This article presents a convenient method for sample size determination in longitudinal experimental research using a multilevel model. A fundamental idea of this method is transformation of model parameters (level-1 error variance [σ²], level-2 error variances [τ₀₀, τ₁₁] and their covariance [τ₀₁, τ₁₀], and a parameter representing the experimental effect [δ]) into indices (reliability of measurement at the first time point [ρ₁], effect size at the last time point [Δ_T], proportion of variance of outcomes between the first and the last time points [k], and level-2 error correlation [r]) that are intuitively understandable and easily specified. To foster more convenient use of power analysis, numerical tables are constructed that refer to ANOVA results to investigate the influence of the respective indices on statistical power.

5. [Comparison of characteristics of heavy metals in different grain sizes of intertidalite sediment by using grid sampling method].

Science.gov (United States)

Liang, Tao; Chen, Yan; Zhang, Chao-sheng; Li, Hai-tao; Chong, Zhong-yi; Song, Wen-chong

2008-02-01

384 surface sediment samples were collected from the mud flat, silt flat, and mud-silt flat of Bohai Bay at 1 m and 10 m intervals using a grid sampling method. The concentrations of Al, Fe, Ti, Mn, Ba, Sr, Zn, Cr, Ni and Cu in each sample were measured by ICP-AES. To characterize the spatial distribution and concentration of these heavy metals, their concentrations were compared between districts with different grain sizes. The results show that variations in grain size cause remarkable differences in heavy metal concentrations. The total concentrations of heavy metals are 147.37 g x kg(-1), 98.68 g x kg(-1) and 94.27 g x kg(-1) in the mud flat, mud-silt flat and silt flat, respectively. The majority of heavy metals tend to concentrate in fine-grained mud, while Ba and Sr tend to concentrate in coarse-grained silt, which contains more K2O x Al2O3 x 6SiO2. The concentration of Sr is affected significantly by grain size, while the concentrations of Cr and Ti are only slightly affected.

6. A multi-scale study of Orthoptera species richness and human population size controlling for sampling effort.

Science.gov (United States)

Cantarello, Elena; Steck, Claude E; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco

2010-03-01

Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy's regions (average area 15,000 km(2)) and provinces (2,900 km(2)). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.
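
The Chao2 estimator used above has a simple closed form, S_est = S_obs + Q1²/(2·Q2), where Q1 and Q2 are the numbers of species recorded at exactly one and exactly two sites. A minimal sketch follows; the incidence-matrix layout and the fallback when Q2 = 0 are simplifying assumptions.

```python
def chao2(incidence):
    """Chao2 richness estimate from an incidence matrix:
    incidence[site][species] is 1/True if the species was recorded at the site."""
    n_species = len(incidence[0])
    counts = [sum(site[s] for site in incidence) for s in range(n_species)]
    s_obs = sum(c > 0 for c in counts)
    q1 = sum(c == 1 for c in counts)  # uniques: species found at exactly one site
    q2 = sum(c == 2 for c in counts)  # duplicates: species found at exactly two sites
    if q2 == 0:
        # simplified fallback; the standard bias-corrected form also
        # rescales by the number of sampling units
        return s_obs + q1 * (q1 - 1) / 2
    return s_obs + q1 * q1 / (2 * q2)
```

Because the estimate grows with the number of uniques, richer sampling (more records per region) mechanically changes it, which is why the authors control for the number of records.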

8. Early detection of nonnative alleles in fish populations: When sample size actually matters

Science.gov (United States)

Croce, Patrick Della; Poole, Geoffrey C.; Payne, Robert A.; Gresswell, Bob

2017-01-01

Reliable detection of nonnative alleles is crucial for the conservation of sensitive native fish populations at risk of introgression. Typically, nonnative alleles in a population are detected through the analysis of genetic markers in a sample of individuals. Here we show that common assumptions associated with such analyses yield substantial overestimates of the likelihood of detecting nonnative alleles. We present a revised equation to estimate the likelihood of detecting nonnative alleles in a population with a given level of admixture. The new equation incorporates the effects of the genotypic structure of the sampled population and shows that conventional methods overestimate the likelihood of detection, especially when nonnative or F-1 hybrid individuals are present. Under such circumstances—which are typical of early stages of introgression and therefore most important for conservation efforts—our results show that improved detection of nonnative alleles arises primarily from increasing the number of individuals sampled rather than increasing the number of genetic markers analyzed. Using the revised equation, we describe a new approach to determining the number of individuals to sample and the number of diagnostic markers to analyze when attempting to monitor the arrival of nonnative alleles in native populations.
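
The contrast the authors draw can be illustrated with two toy detection-probability formulas: the conventional one, which treats every sampled allele copy as an independent draw, and an extreme F1-only scenario in which markers within an individual are perfectly correlated, so only the number of individuals sampled matters. Both functions are illustrative simplifications, not the paper's revised equation.

```python
def p_detect_conventional(q, n_individuals, n_markers):
    """Conventional detection probability: all 2*n*m sampled allele copies are
    treated as independent draws with nonnative allele frequency q -- the
    assumption the paper shows to be over-optimistic."""
    return 1.0 - (1.0 - q) ** (2 * n_individuals * n_markers)

def p_detect_f1_only(f_hybrid, n_individuals):
    """Extreme counter-case: admixture carried only by F1 hybrids, which are
    heterozygous at every diagnostic marker. Detection then requires sampling
    at least one hybrid, so adding markers cannot help -- only individuals do."""
    return 1.0 - (1.0 - f_hybrid) ** n_individuals
```

With 10% F1 hybrids (allele frequency q = 0.05), the conventional formula with 20 individuals and 5 markers promises near-certain detection, while the hybrid-aware view gives only about 0.88, and raising the marker count changes nothing, whereas sampling more individuals does.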

9. Children's Use of Sample Size and Diversity Information within Basic-Level Categories.

Science.gov (United States)

Gutheil, Grant; Gelman, Susan A.

1997-01-01

Three studies examined the ability of 8- and 9-year-olds and young adults to use sample monotonicity and diversity information according to the similarity-coverage model of category-based induction. Found that children's difficulty with this information was independent of category level, and may be based on preferences for other strategies…

10. All-electron LCAO calculations of the LiF crystal phonon spectrum: Influence of the basis set, the exchange-correlation functional, and the supercell size.

Science.gov (United States)

Evarestov, R A; Losev, M V

2009-12-01

For the first time, the convergence of phonon frequencies and dispersion curves with respect to supercell size is studied in ab initio frozen-phonon calculations on a LiF crystal. Hellmann-Feynman forces due to atomic displacements are found in all-electron calculations with a localized atomic functions (LCAO) basis using the CRYSTAL06 program. The Parlinski-Li-Kawazoe method and the FROPHO program are used to calculate the dynamical matrix and phonon frequencies of the supercells. For the fcc lattice, it is demonstrated that use of the full supercell space group (including the supercell inner translations) substantially reduces the number of displacements that need to be considered. For the Hartree-Fock (HF), PBE, and hybrid PBE0, B3LYP, and B3PW exchange-correlation functionals, the atomic basis set optimization is performed. Supercells of up to 216 atoms (3 x 3 x 3 conventional unit cells) are considered. The phonon frequencies obtained using supercells of different size and shape are compared. For the k-points commensurate with the supercell, the best agreement of the theoretical results with the experimental data is found for B3PW exchange-correlation functional calculations with the optimized basis set. The phonon frequencies at the non-commensurate k-points converge for the supercell consisting of 4 x 4 x 4 primitive cells, ensuring an accuracy of 1-2% in the calculated thermodynamic properties (the Helmholtz free energy, entropy, and heat capacity at room temperature). (c) 2009 Wiley Periodicals, Inc.

11. Massively-parallel electron dynamics calculations in real-time and real-space: Toward applications to nanostructures of more than ten-nanometers in size

Energy Technology Data Exchange (ETDEWEB)

Noda, Masashi; Ishimura, Kazuya; Nobusada, Katsuyuki [Institute for Molecular Science, Myodaiji, Okazaki, Aichi 444-8585 (Japan); Yabana, Kazuhiro; Boku, Taisuke [Center for Computational Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8577 (Japan)

2014-05-15

A highly efficient program for massively parallel calculations of electron dynamics has been developed in an effort to apply the method to the optical response of nanostructures more than ten nanometers in size. The approach is based on time-dependent density functional theory calculations in real time and real space. The computational code is implemented using simple algorithms: a finite-difference method for spatial derivatives and a Taylor expansion for time propagation. Since the program is free from eigenvalue-problem algorithms and fast Fourier transforms, which are usually employed in conventional quantum chemistry or band structure calculations, it is highly suitable for massively parallel calculations. Benchmark calculations using the K computer at RIKEN demonstrate that the parallel efficiency of the program is very high on more than 60,000 CPU cores. The method is applied to the optical response of ordered arrays of C₆₀ nanostructures more than 10 nm in size. The computed absorption spectrum is in good agreement with the experimental observation.

12. Joint risk of interbasin water transfer and impact of the window size of sampling low flows under environmental change

Science.gov (United States)

Tu, Xinjun; Du, Xiaoxia; Singh, Vijay P.; Chen, Xiaohong; Du, Yiliang; Li, Kun

2017-11-01

Constructing a joint distribution of low flows between the donor and recipient basins and analyzing their joint risk are commonly required for implementing interbasin water transfer. In this study, daily streamflow data for bi-basin low flows were sampled at window sizes from 3 to 183 days using the annual minimum method. The stationarity of low flows was tested by a change-point analysis, and non-stationary low flows were reconstructed using the moving mean method. Three bivariate Archimedean copulas and five common univariate distributions were applied to fit the joint and marginal distributions of bi-basin low flows. Then, by considering the window size of sampling low flows under environmental change, the change in the joint risk of interbasin water transfer was investigated. Results showed that the non-stationarity of low flows in the recipient basin at all window sizes was significant due to the regulation of water reservoirs. The general extreme value (GEV) distribution was found to fit the marginal distributions of bi-basin low flows. All three Archimedean copulas satisfactorily fitted the joint distribution of bi-basin low flows, with the Frank copula found to be comparatively the best. The moving mean method changed the location parameter of the GEV distribution, but not the scale and shape parameters or the copula parameters. Due to environmental change, in particular the regulation of water reservoirs in the recipient basin, the decrease in the joint synchronous risk of bi-basin water shortage was slight, but the decrease in the synchronous assurance of water transfer from the donor was remarkable. With the enlargement of the window size of sampling low flows, both the joint synchronous risk of bi-basin water shortage and the joint synchronous assurance of water transfer from the donor basin when there was a water shortage in the recipient basin exhibited a decreasing trend, but their changes were with a slight fluctuation, in

13. Size-dependent ultrafast ionization dynamics of nanoscale samples in intense femtosecond x-ray free-electron-laser pulses.

Science.gov (United States)

Schorb, Sebastian; Rupp, Daniela; Swiggers, Michelle L; Coffee, Ryan N; Messerschmidt, Marc; Williams, Garth; Bozek, John D; Wada, Shin-Ichi; Kornilov, Oleg; Möller, Thomas; Bostedt, Christoph

2012-06-08

All matter exposed to intense femtosecond x-ray pulses from the Linac Coherent Light Source free-electron laser is strongly ionized on time scales competing with the inner-shell vacancy lifetimes. We show that for nanoscale objects the environment, i.e., nanoparticle size, is an important parameter for the time-dependent ionization dynamics. The Auger lifetimes of large Ar clusters are found to be increased compared to small clusters and isolated atoms, due to delocalization of the valence electrons in the x-ray-induced nanoplasma. As a consequence, large nanometer-sized samples absorb intense femtosecond x-ray pulses less efficiently than small ones.

14. Quantification and size characterisation of silver nanoparticles in environmental aqueous samples and consumer products by single particle-ICPMS.

Science.gov (United States)

Aznar, Ramón; Barahona, Francisco; Geiss, Otmar; Ponti, Jessica; José Luis, Tadeo; Barrero-Moreno, Josefa

2017-12-01

Single particle-inductively coupled plasma mass spectrometry (SP-ICPMS) is a promising technique able to generate the number-based particle size distribution (PSD) of nanoparticles (NPs) in aqueous suspensions. However, SP-ICPMS analysis is not yet consolidated as a routine technique and is not typically applied to real test samples with unknown composition. This work presents a methodology to detect, quantify and characterise the number-based PSD of Ag-NPs in different environmental aqueous samples (drinking and lake waters), aqueous samples derived from migration tests, and consumer products using SP-ICPMS. The procedure is built from a pragmatic view and involves the analysis of serial dilutions of the original sample until no variation in the measured size values is observed while particle counts remain proportional to the dilution applied. After evaluation of the analytical figures of merit, the SP-ICPMS method exhibited excellent linearity (r² > 0.999) in the range (1-25) × 10⁴ particles mL⁻¹ for 30, 50 and 80 nm nominal size Ag-NP standards. The precision in terms of repeatability was studied according to the RSDs of the measured size and particle number concentration values, and a t-test (p = 95%) at the two intermediate concentration levels was applied to determine the bias of SP-ICPMS size values compared to reference values. The method showed good repeatability and an overall acceptable bias in the studied concentration range. The experimental minimum detectable size for Ag-NPs ranged between 12 and 15 nm. Additionally, results derived from direct SP-ICPMS analysis were compared to the results obtained for fractions collected by asymmetric flow field-flow fractionation and supernatant fractions after centrifugal filtration. The method has been successfully applied to determine the presence of Ag-NPs in: lake water; tap water; tap water filtered by a filter jar; seven different liquid silver-based consumer products; and migration solutions (pure water and
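
The pragmatic acceptance rule described above, diluting until the measured size stops changing while particle counts stay proportional to the dilution, can be encoded as a simple check. The function name, input layout, and tolerances are assumptions for illustration, not values from the paper.

```python
def dilution_series_consistent(measurements, size_tol=0.05, count_tol=0.2):
    """Check an SP-ICPMS dilution series: measured size should be stable
    across dilutions while particle counts scale inversely with the dilution
    factor. measurements: list of (dilution_factor, mean_size_nm, count)."""
    sizes = [m[1] for m in measurements]
    size_stable = (max(sizes) - min(sizes)) / min(sizes) <= size_tol
    base_df, _, base_count = measurements[0]
    counts_proportional = all(
        abs(count * df / (base_count * base_df) - 1) <= count_tol
        for df, _, count in measurements[1:]
    )
    return size_stable and counts_proportional
```

A series failing the size-stability test signals particle coincidence at the detector, the usual reason further dilution is needed before the PSD can be trusted.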

15. A spectroscopic sample of massive, quiescent z ∼ 2 galaxies: implications for the evolution of the mass-size relation

Energy Technology Data Exchange (ETDEWEB)

Krogager, J.-K.; Zirm, A. W.; Toft, S.; Man, A. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen O (Denmark); Brammer, G. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21210 (United States)

2014-12-10

We present deep, near-infrared Hubble Space Telescope/Wide Field Camera 3 grism spectroscopy and imaging for a sample of 14 galaxies at z ≈ 2 selected from a mass-complete photometric catalog in the COSMOS field. By combining the grism observations with photometry in 30 bands, we derive accurate constraints on their redshifts, stellar masses, ages, dust extinction, and formation redshifts. We show that the slope and scatter of the z ∼ 2 mass-size relation of quiescent galaxies is consistent with the local relation, and confirm previous findings that the sizes for a given mass are smaller by a factor of two to three. Finally, we show that the observed evolution of the mass-size relation of quiescent galaxies between z = 2 and 0 can be explained by the quenching of increasingly larger star-forming galaxies at a rate dictated by the increase in the number density of quiescent galaxies with decreasing redshift. However, we find that the scatter in the mass-size relation should increase in the quenching-driven scenario, in contrast to what is seen in the data. This suggests that merging is not needed to explain the evolution of the median mass-size relation of massive galaxies, but may still be required to tighten its scatter, and to explain the size growth of individual z = 2 quiescent galaxies.

16. A fast method for rescaling voxel S values for arbitrary voxel sizes in targeted radionuclide therapy from a single Monte Carlo calculation.

Science.gov (United States)

Fernández, María; Hänscheid, Heribert; Mauxion, Thibault; Bardiès, Manuel; Kletting, Peter; Glatting, Gerhard; Lassmann, Michael

2013-08-01

In targeted radionuclide therapy, patient-specific dosimetry based on voxel S values (VSVs) is preferable to dosimetry based on mathematical phantoms. Monte Carlo (MC) simulations are necessary to deduce VSVs for the voxel sizes required by quantitative imaging. The aim of this study is, starting from a single set of high-resolution VSVs obtained by MC simulation for a small voxel size along a single axis perpendicular to the source voxel, to present a suitable method to accurately calculate VSVs for larger voxel sizes. Accurate sets of VSVs for target-voxel to source-voxel distances up to 10 cm were obtained for high-resolution voxel sizes (0.5 mm for electrons and 1.0 mm for photons) from MC simulations for Y-90, Lu-177, and I-131 using the radiation transport code MCNPX v.2.7a. To make these values suitable for any larger voxel size, different analytical methods (based on resamplings, interpolations, and fits) were tested and compared to values obtained by direct MC simulations. As a result, an optimal calculation procedure is proposed. This procedure consists of: (1) MC simulation to obtain a starting set of VSVs along a single line of voxels for a small voxel size, for each radionuclide and type of radiation; (2) interpolation within the values obtained in (1) to obtain the VSVs for voxels within a spherical volume; (3) resampling of the data obtained in (1) and (2) to obtain VSVs for voxel sizes larger than the one used in the MC calculation, for integer voxel ratios (voxel ratio = new voxel size / MC-simulation voxel size); (4) interpolation within the data obtained in (3) for non-integer voxel ratios. The results were also compared to results from other authors. The results obtained with the method proposed in this work show deviations relative to the source voxel below 1% for I-131 and Lu-177, and below 1.5% for Y-90, as compared with values obtained by direct MC simulations for voxel sizes ranging between 1.0 and 10.0 cm. The results
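
The resampling step (3), rescaling fine-grid VSVs to an integer voxel ratio, can be sketched in one dimension: the coarse value at target-voxel distance D averages the fine kernel over every pair of sub-voxel offsets in the source and target voxels. This is a 1-D illustrative sketch under that averaging assumption, not the authors' 3-D implementation.

```python
def rescale_vsv_1d(fine, ratio):
    """Rescale a 1-D voxel-S-value kernel fine[d] (d = target-to-source
    distance in fine-voxel units) to a voxel size `ratio` times larger,
    by averaging over every pair of sub-voxel offsets in the target (a)
    and source (b) voxels. Integer ratios only."""
    n_coarse = len(fine) // ratio
    coarse = []
    for D in range(n_coarse):
        total = 0.0
        for a in range(ratio):        # sub-offset inside the target voxel
            for b in range(ratio):    # sub-offset inside the source voxel
                total += fine[abs(D * ratio + a - b)]
        coarse.append(total / ratio ** 2)
    return coarse
```

A constant kernel passes through unchanged, which is a quick sanity check on the index arithmetic; non-integer ratios would then be handled by interpolating between rescaled kernels, as in step (4).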

17. Measuring laves phase particle size and thermodynamic calculating its growth and coarsening behavior in P92 steels

DEFF Research Database (Denmark)

Yao, Bing-Yin; Zhou, Rong-Can; Fan, Chang-Xin

2010-01-01

The growth of Laves phase particles in three kinds of P92 steels was investigated. Laves phase particles can be easily separated and distinguished from the matrix and other particles by atomic number contrast using comparisons of the backscattered electron (BSE) images and the secondary electron (...) ... attained between measurements in SEM and modeling by DICTRA. Ostwald ripening should be used for the coarsening calculation of Laves phase in P92 steels for times longer than 20000 h and 50000 h at 650°C and 600°C, respectively. © 2010 Chin. Soc. for Elec. Eng.
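The Ostwald-ripening regime mentioned above is commonly modeled with the classical LSW coarsening law r(t)³ = r₀³ + K·t. A minimal sketch (the initial radius and rate constant below are hypothetical placeholders, not values from the paper):

```python
def lsw_radius(r0, k_rate, t):
    """Mean particle radius under LSW (Ostwald-ripening) coarsening:
    r(t)^3 = r0^3 + K * t, with K a temperature-dependent rate constant."""
    return (r0 ** 3 + k_rate * t) ** (1.0 / 3.0)

# hypothetical numbers: r0 in micrometres, K in um^3/h
r_20000h = lsw_radius(r0=0.1, k_rate=1e-5, t=20000)
```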

18. [Sample size for the estimation of F-wave parameters in healthy volunteers and amyotrophic lateral sclerosis patients].

Science.gov (United States)

Fang, J; Cui, L Y; Liu, M S; Guan, Y Z; Ding, Q Y; Du, H; Li, B H; Wu, S

2017-03-07

Objective: The study aimed to investigate whether sample sizes of F-wave study differed according to different nerves, different F-wave parameters, and amyotrophic lateral sclerosis (ALS) patients or healthy subjects. Methods: The F-waves in the median, ulnar, tibial, and deep peroneal nerves of 55 ALS patients and 52 healthy subjects were studied to assess the effect of sample size on the accuracy of measurements of the following F-wave parameters: F-wave minimum latency, maximum latency, mean latency, F-wave persistence, F-wave chronodispersion, and mean and maximum F-wave amplitude. A hundred stimuli were used in the F-wave study. The values obtained from 100 stimuli were considered "true" values and were compared with the corresponding values from smaller samples of 20, 40, 60 and 80 stimuli. F-wave parameters obtained from different sample sizes were compared between the ALS patients and the normal controls. Results: Significant differences were not detected with samples above 60 stimuli for chronodispersion in all four nerves in normal participants. Significant differences were not detected with samples above 40 stimuli for maximum F-wave amplitude in median, ulnar and tibial nerves in normal participants. When comparing ALS patients and normal controls, significant differences were detected in the maximum (median nerve, Z=-3.560, PF-wave latency (median nerve, Z=-3.243, PF-wave chronodispersion (Z=-3.152, PF-wave persistence in the median (Z=6.139, PF-wave amplitude in the tibial nerve (t=2.981, PF-wave amplitude in the ulnar (Z=-2.134, PF-wave persistence in tibial nerve (Z=2.119, PF-wave amplitude in ulnar (Z=-2.552, PF-wave amplitude in peroneal nerve (t=2.693, PF-wave study differed according to different nerves, different F-wave parameters, and ALS patients or healthy subjects.
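The study's subsampling comparison, taking the 100-stimulus value as "true" and estimating a parameter from the first k stimuli, can be illustrated like this (the latencies are synthetic, not the study's data):

```python
import random

def min_latency(latencies, k):
    """Minimum F-wave latency estimated from the first k stimuli."""
    return min(latencies[:k])

random.seed(1)
# hypothetical latencies in ms for 100 stimuli
lat = [25.0 + 5.0 * random.random() for _ in range(100)]

true_min = min_latency(lat, 100)  # 100 stimuli taken as the "true" value
estimates = {k: min_latency(lat, k) for k in (20, 40, 60, 80)}
```

Because each smaller sample is a subset of the full 100, a minimum-latency estimate can only over-estimate the true minimum, and larger subsamples can only move it closer.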

19. SU-E-T-196: Comparative Analysis of Surface Dose Measurements Using MOSFET Detector and Dose Predicted by Eclipse - AAA with Varying Dose Calculation Grid Size

Energy Technology Data Exchange (ETDEWEB)

Badkul, R; Nejaiman, S; Pokhrel, D; Jiang, H; Kumar, P [University of Kansas Medical Center, Kansas City, KS (United States)

2015-06-15

Purpose: Skin dose can be the limiting factor and a fairly common reason to interrupt treatment, especially when treating head-and-neck with intensity-modulated radiation therapy (IMRT) or volumetrically modulated arc therapy (VMAT) and breast with tangentially directed beams. The aim of this study was to investigate the accuracy of near-surface dose predicted by the Eclipse treatment planning system (TPS) using the Anisotropic Analytical Algorithm (AAA) with varying calculation grid size, comparing against metal-oxide-semiconductor field-effect transistor (MOSFET) measurements for a range of clinical conditions (open field, dynamic wedge, physical wedge, IMRT, VMAT). Methods: The QUASAR™ Body Phantom, with oval curved surfaces to mimic breast, chest wall, and head-and-neck sites, was used in this study. A CT scan was obtained with five radio-opaque markers (ROM) placed on the surface of the phantom to mimic the range of incident angles for measurements and dose prediction, using 2 mm slice thickness. At each ROM, a small structure (1 mm × 2 mm) was contoured to obtain mean doses from the TPS. Calculations were performed for open field, dynamic wedge, physical wedge, IMRT, and VMAT on a Varian 21EX with 6 and 15 MV photons using two grid sizes: 2.5 mm and 1 mm. Calibration checks were performed to ensure that MOSFET responses were within ±5%. Surface doses were measured at five locations and compared with TPS calculations. Results: For 6 MV, 2.5 mm grid size, mean calculated doses (MCD) were higher by 10%(±7.6), 10%(±7.6), 20%(±8.5), 40%(±7.5), 30%(±6.9), and for 1 mm grid size MCD were higher by 0%(±5.7), 0%(±4.2), 0%(±5.5), 1.2%(±5.0), 1.1%(±7.8) for open field, dynamic wedge, physical wedge, IMRT, and VMAT, respectively. For 15 MV, 2.5 mm grid size, MCD were higher by 30%(±14.6), 30%(±14.6), 30%(±14.0), 40%(±11.0), 30%(±3.5), and for 1 mm grid size MCD were higher by 10%(±10.6), 10%(±9.8), 10%(±8.0), 30%(±7.8), 10%(±3.8) for open field, dynamic wedge, physical wedge, IMRT, and VMAT, respectively. For 6 MV, 86% and 56% of all measured values

20. Sex determination by tooth size in a sample of Greek population.

Science.gov (United States)

Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C

2014-08-01

Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had a clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations, and this might explain the lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that a considerably higher percentage of females than males were correctly classified. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as reliable a method as one would have expected for determining sex of human remains from a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.
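A univariate discriminant on a single mesiodistal diameter reduces to a cut-point between the two group means; the sketch below shows that mechanic only (the means and diameters are hypothetical, not the Greek sample's values):

```python
def midpoint_classifier(male_mean, female_mean):
    """Classify a tooth diameter by whichever group mean it is closer to,
    i.e. a cut-point at the midpoint of the two (hypothetical) means."""
    cut = (male_mean + female_mean) / 2

    def classify(diameter_mm):
        return "male" if diameter_mm > cut else "female"

    return classify

# hypothetical mesiodistal diameters in mm
classify = midpoint_classifier(male_mean=8.9, female_mean=8.4)
```

The full discriminant function combines many teeth with weights; population-specific variation in those weights is exactly what the study found to undermine transferability.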

1. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

Science.gov (United States)

Fraley, R. Chris; Vazire, Simine

2014-01-01

The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) to provide accurate estimates of effects, (b) to produce literatures with low false positive rates, and (c) to lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in sample sizes and power of the studies they publish, with some journals consistently publishing higher power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
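The kind of power computation behind an NF ranking can be sketched with a normal approximation for a two-sided two-sample comparison (the effect size d = 0.4 and the equal split of the reported average N of 104 are assumptions for illustration):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided two-sample test (alpha = 0.05) for
    standardized effect size d, normal approximation to the t-test."""
    z_alpha = 1.959964  # Phi^{-1}(0.975)
    noncentrality = d * math.sqrt(n_per_group / 2)
    return phi(noncentrality - z_alpha)

# e.g. a "typical" effect d = 0.4 with 52 subjects per arm
p = power_two_sample(0.4, 52)
```

With these assumed inputs the power lands a little above one half, in the neighborhood of the ~50% figure the paper reports for the field.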

2. Determining optimal sample sizes for multistage adaptive randomized clinical trials from an industry perspective using value of information methods.

Science.gov (United States)

Chen, Maggie H; Willan, Andrew R

2013-02-01

Most often, sample size determinations for randomized clinical trials are based on frequentist approaches that depend on somewhat arbitrarily chosen factors, such as type I and II error probabilities and the smallest clinically important difference. As an alternative, many authors have proposed decision-theoretic (full Bayesian) approaches, often referred to as value of information methods, that attempt to determine the sample size that maximizes the difference between the trial's expected utility and its expected cost, referred to as the expected net gain. Taking an industry perspective, Willan proposes a solution in which the trial's utility is the increase in expected profit. Furthermore, Willan and Kowgier, taking a societal perspective, show that multistage designs can increase expected net gain. The purpose of this article is to determine the optimal sample size using value of information methods for industry-based, multistage adaptive randomized clinical trials, and to demonstrate the increase in expected net gain realized. At the end of each stage, the trial's sponsor must decide between three actions: continue to the next stage, stop the trial and seek regulatory approval, or stop the trial and abandon the drug. A model for expected total profit is proposed that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, and the relationship between trial results and probability of regulatory approval. The proposed method is extended to include multistage designs with a solution provided for a two-stage design. An example is given. Significant increases in the expected net gain are realized by using multistage designs. The complexity of the solutions increases with the number of stages, although far simpler near-optimal solutions exist. The method relies on the central limit theorem, assuming that the sample size is sufficiently large so that the relevant statistics are normally distributed.
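The core idea, choosing n to maximize expected utility minus expected cost, can be shown with a deliberately simple toy model (all numbers and the power-as-approval-probability shortcut below are illustrative assumptions, not the paper's model):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def expected_net_gain(n, d=0.3, profit=50e6, cost_per_patient=20e3):
    """Toy expected net gain for n patients per arm: a fixed profit weighted
    by the probability the trial succeeds (crudely approximated here by its
    power) minus a linear trial cost. Purely illustrative numbers."""
    power = phi(d * math.sqrt(n / 2) - 1.959964)
    return profit * power - cost_per_patient * n

# the optimal n balances diminishing power gains against linear cost
best_n = max(range(10, 2001), key=expected_net_gain)
```

Past the optimum, extra patients buy almost no additional approval probability while the cost keeps growing, which is why the expected net gain curve turns over.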

3. A score test for determining sample size in matched case-control studies with categorical exposure.

Science.gov (United States)

Sinha, Samiran; Mukherjee, Bhramar

2006-02-01

The paper considers the problem of determining the number of matched sets in 1:M matched case-control studies with a categorical exposure having k + 1 categories, k >= 1. The basic interest lies in constructing a test statistic to test whether the exposure is associated with the disease. Estimates of the k odds ratios for 1:M matched case-control studies with dichotomous exposure and for 1:1 matched case-control studies with exposure at several levels are presented in Breslow and Day (1980), but results holding in full generality have not been available so far. We propose a score test for testing the hypothesis of no association between disease and the polychotomous exposure. We exploit the power function of this test statistic to calculate the required number of matched sets to detect specific departures from the null hypothesis of no association. We also consider the situation when there is a natural ordering among the levels of the exposure variable. For ordinal exposure variables, we propose a test for detecting trend in disease risk with increasing levels of the exposure variable. Our methods are illustrated with two datasets: a real dataset on colorectal cancer in rats, and a simulated dataset for studying disease-gene association.

4. Fixed and Adaptive Parallel Subgroup-Specific Design for Survival Outcomes: Power and Sample Size

Directory of Open Access Journals (Sweden)

Miranta Antoniou

2017-12-01

Biomarker-guided clinical trial designs, which focus on testing the effectiveness of a biomarker-guided approach to treatment in improving patient health, have drawn considerable attention in the era of stratified medicine with many different designs being proposed in the literature. However, planning such trials to ensure they have sufficient power to test the relevant hypotheses can be challenging and the literature often lacks guidance in this regard. In this study, we focus on the parallel subgroup-specific design, which allows the evaluation of separate treatment effects in the biomarker-positive subgroup and biomarker-negative subgroup simultaneously. We also explore an adaptive version of the design, where an interim analysis is undertaken based on a fixed percentage of target events, with the option to stop each biomarker-defined subgroup early for futility or efficacy. We calculate the number of events and patients required to ensure sufficient power in each of the biomarker-defined subgroups under different scenarios when the primary outcome is time-to-event. For the adaptive version, stopping probabilities are also explored. Since multiple hypotheses are being tested simultaneously, and multiple interim analyses are undertaken, we also focus on controlling the overall type I error rate by way of multiplicity adjustment.
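For time-to-event outcomes, the required number of events within a subgroup is commonly obtained from Schoenfeld's approximation; a minimal sketch (the hazard ratio of 0.7 is an assumed example, and the normal quantiles are hard-coded for two-sided alpha = 0.05 and 80% power):

```python
import math

def required_events(hazard_ratio, alloc=0.5):
    """Schoenfeld's approximation for the number of events needed to detect
    `hazard_ratio` with two-sided alpha = 0.05 and power = 0.80; `alloc` is
    the proportion randomized to one arm (quantiles hard-coded)."""
    z_alpha = 1.959964  # Phi^{-1}(0.975)
    z_beta = 0.841621   # Phi^{-1}(0.80)
    return math.ceil((z_alpha + z_beta) ** 2
                     / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2))

# e.g. within one biomarker-defined subgroup, targeting HR = 0.7
events = required_events(0.7)
```

The patient count then follows by dividing the event target by the anticipated event probability over the follow-up period in that subgroup.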

5. RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes

Directory of Open Access Journals (Sweden)

Danny J. Kelly

2005-01-01

We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of low-RNA-derived gene signals to gene signals obtained from standard RNA was poor for less to moderately abundant genes. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference sample based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.

6. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

KAUST Repository

Dong, Kai

2015-09-16

DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p, small n" paradigm, the traditional Hotelling's T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
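The diagonal variant sidesteps the singular covariance matrix by using only per-variable variances; one simple shrinkage choice is to pull each variance toward their common mean. A sketch of the one-sample statistic under that assumption (the fixed shrinkage weight `lam` is mine; the paper derives a data-driven estimator):

```python
import statistics

def diagonal_hotelling_one_sample(data, mu, lam=0.5):
    """One-sample diagonal Hotelling-type statistic: sum over variables of
    n * (xbar_j - mu_j)^2 / s2_j, with each variance s2_j shrunk toward the
    mean variance by weight `lam` (an assumed, not data-driven, weight)."""
    n, p = len(data), len(data[0])
    cols = list(zip(*data))
    variances = [statistics.variance(c) for c in cols]
    target = sum(variances) / p  # common shrinkage target
    stat = 0.0
    for j, col in enumerate(cols):
        xbar = sum(col) / n
        s2 = lam * target + (1 - lam) * variances[j]
        stat += (xbar - mu[j]) ** 2 / (s2 / n)
    return stat

# toy data: 4 samples, 3 variables ("large p, small n" in miniature)
toy = [[0.1, 1.2, -0.3], [0.4, 0.9, 0.2], [-0.2, 1.1, 0.0], [0.3, 1.0, 0.1]]
t_stat = diagonal_hotelling_one_sample(toy, mu=[0.0, 0.0, 0.0])
```

Because no p×p covariance matrix is inverted, the statistic stays well defined even when p exceeds n.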

7. Distribution of human waste samples in relation to sizing waste processing in space

Science.gov (United States)

Parker, Dick; Gallagher, S. K.

1992-01-01

Human waste processing for closed ecological life support systems (CELSS) in space requires that there be an accurate knowledge of the quantity of wastes produced. Because initial CELSS will be handling relatively few individuals, it is important to know the variation that exists in the production of wastes rather than relying upon mean values that could result in undersizing equipment for a specific crew. On the other hand, because of the costs of orbiting equipment, it is important to design the equipment with a minimum of excess capacity because of the weight that extra capacity represents. A considerable quantity of information that had been independently gathered on waste production was examined in order to obtain estimates of equipment sizing requirements for handling waste loads from crews of 2 to 20 individuals. The recommended design for a crew of 8 should hold 34.5 liters per day (4315 ml/person/day) for urine and stool water and a little more than 1.25 kg per day (154 g/person/day) of human waste solids and sanitary supplies.
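The quoted per-person rates scale linearly with crew size, so the sizing arithmetic is a one-liner (the helper and its name are mine; the rates are the ones quoted above):

```python
def daily_capacity(crew, liquid_ml_per_person=4315, solids_g_per_person=154):
    """Daily waste-handling capacity for a crew, from the per-person design
    rates quoted above: returns (liters of liquids, kg of solids)."""
    return (crew * liquid_ml_per_person / 1000.0,
            crew * solids_g_per_person / 1000.0)

liters, kg = daily_capacity(8)  # the recommended crew-of-8 design point
```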

8. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

Directory of Open Access Journals (Sweden)

Jamshid Jamali

2017-01-01

Evaluating measurement equivalence (also known as differential item functioning, DIF) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

9. Dealing with large sample sizes: comparison of a new one spot dot blot method to western blot.

Science.gov (United States)

Putra, Sulistyo Emantoko Dwi; Tsuprykov, Oleg; Von Websky, Karoline; Ritter, Teresa; Reichetzeder, Christoph; Hocher, Berthold

2014-01-01

Western blot is the gold standard method to determine individual protein expression levels. However, western blot is technically difficult to perform in large sample sizes because it is a time-consuming and labor-intensive process. Dot blot is often used instead when dealing with large sample sizes, but the main disadvantage of the existing dot blot techniques is the absence of signal normalization to a housekeeping protein. In this study we established a one dot two development signals (ODTDS) dot blot method employing two different signal development systems. The first signal, from the protein of interest, was detected by horseradish peroxidase (HRP). The second signal, detecting the housekeeping protein, was obtained by using alkaline phosphatase (AP). Inter-assay variations within ODTDS dot blot and western blot and intra-assay variations between both methods were low (1.04-5.71%) as assessed by coefficient of variation. The ODTDS dot blot technique can be used instead of western blot when dealing with large sample sizes without a reduction in accuracy.
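The normalization the ODTDS method enables is just a per-dot ratio of the two developed signals, which cancels loading differences (the densitometry values below are hypothetical):

```python
def normalized_signal(target_hrp, housekeeping_ap):
    """Normalize the HRP-developed target signal to the AP-developed
    housekeeping signal from the same dot (arbitrary densitometry units)."""
    return target_hrp / housekeeping_ap

# two hypothetical samples with unequal loading but equal expression
a = normalized_signal(1200.0, 800.0)
b = normalized_signal(900.0, 600.0)
```

After normalization the two dots report the same relative expression even though their raw intensities differ by a third.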

10. Fast Calculation of Protein-Protein Binding Free Energies using Umbrella Sampling with a Coarse-Grained Model.

Science.gov (United States)

Patel, Jagdish Suresh; Ytreberg, F Marty

2017-12-29

Determination of protein-protein binding affinity values is key to understanding various underlying biological phenomena, such as how mutations change protein-protein binding. Most existing non-rigorous (fast) and rigorous (slow) methods that rely on all-atom representation of the proteins force the user to choose between speed and accuracy. In an attempt to achieve balance between speed and accuracy, we have combined rigorous umbrella sampling molecular dynamics simulation with a coarse-grained protein model. We predicted the effect of mutations on binding affinity by selecting three protein-protein systems and comparing results to empirical relative binding affinity values, and to non-rigorous modeling approaches. We obtained significant improvement both in our ability to discern stabilizing from destabilizing mutations and in the correlation between predicted and experimental values compared to non-rigorous approaches. Overall our results suggest that using a rigorous affinity calculation method with coarse-grained protein models could offer fast and reliable predictions of protein-protein binding free energies.

11. On realistic size equivalence and shape of spheroidal Saharan mineral dust particles applied in solar and thermal radiative transfer calculations

Directory of Open Access Journals (Sweden)

S. Otto

2011-05-01

Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface equivalent spheroids with prolate shape are most realistic: particle non-sphericity only slightly affects single scattering albedo and asymmetry parameter but may enhance extinction coefficient by up to 10 %. At the bottom of the atmosphere (BOA) the Saharan mineral dust always leads to a loss of solar radiation, while the sign of the forcing at the top of the atmosphere (TOA) depends on surface albedo: solar cooling/warming over a mean ocean/land surface. In the thermal spectral range the dust inhibits the emission of radiation to space and warms the BOA. The most realistic case of particle non-sphericity causes changes of total (solar plus thermal) forcing by 55/5 % at the TOA over ocean/land and 15 % at the BOA over both land and ocean and enhances total radiative heating within the dust plume by up to 20 %. Large dust particles significantly contribute to all the radiative effects reported. They strongly enhance the absorbing properties and forward scattering in the solar and increase predominantly, e.g., the total TOA forcing of the dust over land.

12. Effects of dislocation density and sample-size on plastic yielding at the nanoscale: a Weibull-like framework.

Science.gov (United States)

Rinaldi, Antonio

2011-11-01

Micro-compression tests have demonstrated that plastic yielding in nanoscale pillars is the result of the fine interplay between the sample-size (chiefly the diameter D) and the density of bulk dislocations ρ. The power-law scaling typical of the nanoscale stems from a source-limited regime, which depends on both these sample parameters. Based on the experimental and theoretical results available in the literature, this paper offers a perspective about the joint effect of D and ρ on the yield stress in any plastic regime, promoting also a schematic graphical map of it. In the sample-size dependent regime, such dependence is cast mathematically into a first-order Weibull-type theory, where the exponent β of the power-law scaling and the modulus m of an approximate (unimodal) Weibull distribution of source-strengths can be related by a simple inverse proportionality. As a corollary, the scaling exponent β may not be a universal number, as speculated in the literature. In this context, the discussion opens the alternative possibility of more general (multimodal) source-strength distributions, which could produce more complex and realistic strengthening patterns than the single power-law usually assumed. The paper re-examines our own experimental data, as well as results of Bei et al. (2008) on Mo-alloy pillars, especially for the sake of emphasizing the significance of a sudden increase in sample response scatter as a warning signal of an incipient source-limited regime.

13. The design of high-temperature thermal conductivity measurements apparatus for thin sample size

Directory of Open Access Journals (Sweden)

2017-01-01

This study presents the design, construction, and validation of a thermal conductivity apparatus using steady-state heat-transfer techniques, with the capability of testing a material at high temperatures. The design is an improvement on the ASTM D5470 standard, in which meter-bars of equal cross-sectional area are used to extrapolate surface temperature and measure heat transfer across a sample. There were two meter-bars in the apparatus, each fitted with three thermocouples. The apparatus uses a heater with a power of 1,000 watts and cooling water to reach a stable condition. The applied pressure was 3.4 MPa over the 113.09 mm² cross-sectional area of the meter-bar, and thermal grease was used to minimize interfacial thermal contact resistance. To determine its performance, the apparatus was validated by comparing its results with thermal conductivities obtained with a LINSEIS THB 500. The tests showed thermal conductivities of 15.28 W m⁻¹ K⁻¹ for stainless steel and 38.01 W m⁻¹ K⁻¹ for bronze, differing from the THB 500 by −2.55% and 2.49%, respectively. Furthermore, this apparatus can measure thermal conductivity up to a temperature of 400°C, where the result for stainless steel is 19.21 W m⁻¹ K⁻¹ and the difference was 7.93%.
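The ASTM D5470-style reduction has two steps: extrapolate each meter-bar's thermocouple readings to the sample interface, then apply Fourier's law across the sample. A sketch with invented positions and temperatures (not the study's data):

```python
def fit_line(x, y):
    """Least-squares slope and intercept, used to extrapolate the meter-bar
    temperature profile to the sample interface."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def sample_conductivity(q, area, thickness, t_hot_face, t_cold_face):
    """Fourier's law across the sample: k = q * L / (A * dT)."""
    return q * thickness / (area * (t_hot_face - t_cold_face))

# hypothetical thermocouple positions (m) and readings (deg C) in the hot bar
pos = [0.01, 0.02, 0.03]
hot = [180.0, 170.0, 160.0]           # linear profile cooling toward the sample
slope_h, icpt_h = fit_line(pos, hot)
t_hot = slope_h * 0.04 + icpt_h        # extrapolated hot-face temperature

# hypothetical operating point: 50 W through the 113.09 mm^2 area
k = sample_conductivity(q=50.0, area=113.09e-6, thickness=0.003,
                        t_hot_face=150.0, t_cold_face=90.0)
```

The cold-face temperature is obtained the same way from the second meter-bar; here it is simply assumed.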

14. Extending the molecular size in accurate quantum-chemical calculations: the equilibrium structure and spectroscopic properties of uracil.

Science.gov (United States)

Puzzarini, Cristina; Barone, Vincenzo

2011-04-21

The equilibrium structure of uracil has been investigated using both theoretical and experimental data. With respect to the former, quantum-chemical calculations at the coupled-cluster level in conjunction with a triple-zeta basis set have been carried out. Extrapolation to the basis set limit, performed employing the second-order Møller-Plesset perturbation theory, and inclusion of core-correlation and diffuse-function corrections have also been considered. Based on the available rotational constants for various isotopic species together with corresponding computed vibrational corrections, the semi-experimental equilibrium structure of uracil has been determined for the first time. Theoretical and semi-experimental structures have been found in remarkably good agreement, thus pointing out the limitations of previous experimental determinations. Molecular and spectroscopic properties of uracil have then been studied by means of the composite computational approach introduced for the molecular structure evaluation. Among the results achieved, we mention the revision of the dipole moment. On the whole, it has been proved that the computational procedure presented is able to provide parameters with the proper accuracy to support experimental investigations of large molecules of biological interest.

15. Sample size affects 13C-18O clumping in CO2 derived from phosphoric acid digestion of carbonates

Science.gov (United States)

Wacker, U.; Fiebig, J.

2011-12-01

In the recent past, clumped isotope analysis of carbonates has become an important tool for terrestrial and marine paleoclimate reconstructions. For this purpose, 47/44 ratios of CO2 derived from phosphoric acid digestion of carbonates are measured. These values are compared to the corresponding stochastic 47/44 distribution ratios computed from the determined δ13C and δ18O values, with the deviation being finally expressed as Δ47. For carbonates precipitated in equilibrium with their parental water, the magnitude of Δ47 is a function of temperature only. This technique is based on the fact that the isotopic fractionation associated with phosphoric acid digestion of carbonates is kinetically controlled. In this way, the concentration of 13C-18O bonds in the evolved CO2 remains proportional to the number of corresponding bonds inside the carbonate lattice. A relationship between carbonate growth temperature and Δ47 has recently been determined experimentally by Ghosh et al. (2006), who performed the carbonate digestion with 103% H3PO4 at 25°C after precipitating the carbonates inorganically at temperatures ranging from 1-50°C. In order to investigate the kinetic parameters associated with the phosphoric acid digestion reaction at 25°C, we have analyzed several natural carbonates at varying sample sizes. Amongst these are NBS 19, internal Carrara marble, Arctica islandica and cold seep carbonates. Sample size was varied between 4 and 12 mg. All samples exhibit a systematic trend to increasing Δ47 values with decreasing sample size, with absolute variations being restricted to ≤0.10‰. Additional tests imply that this effect is related to the phosphoric acid digestion reaction. Most presumably, either the kinetic fractionation factor expressing the differences in 47/44 ratios between evolved CO2 and parental carbonate slightly depends on the concentration of the digested carbonate, or traces of water exchange with C-O-bearing species inside the acid.
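In its simplest form, Δ47 is the per-mil deviation of the measured 47/44 ratio from the stochastic ratio expected from the bulk δ13C and δ18O (the full definition also involves the mass-45 and mass-46 terms, omitted here; the ratios below are hypothetical):

```python
def big_delta_47(r47_measured, r47_stochastic):
    """Simplified Delta_47 (per mil): deviation of the measured mass-47/44
    ratio from the stochastic ratio computed from bulk d13C and d18O.
    The full clumped-isotope definition also corrects Delta_45/Delta_46."""
    return (r47_measured / r47_stochastic - 1.0) * 1000.0

# hypothetical ratios: a measured 47/44 ratio slightly above stochastic
d47 = big_delta_47(4.70305e-5, 4.70000e-5)
```

A sample-size-dependent bias of order 0.10‰, as reported above, is large relative to the temperature sensitivity of Δ47, which is why the effect matters for paleothermometry.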

16. STELLAR POPULATIONS FROM SPECTROSCOPY OF A LARGE SAMPLE OF QUIESCENT GALAXIES AT Z > 1: MEASURING THE CONTRIBUTION OF PROGENITOR BIAS TO EARLY SIZE GROWTH

Energy Technology Data Exchange (ETDEWEB)

Belli, Sirio; Ellis, Richard S. [Department of Astronomy, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Newman, Andrew B. [The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101 (United States)

2015-02-01

We analyze the stellar populations of a sample of 62 massive (log M*/M☉ > 10.7) galaxies in the redshift range 1 < z < 1.6, with the main goal of investigating the role of recent quenching in the size growth of quiescent galaxies. We demonstrate that our sample is not biased toward bright, compact, or young galaxies, and thus is representative of the overall quiescent population. Our high signal-to-noise ratio Keck/LRIS spectra probe the rest-frame Balmer break region that contains important absorption line diagnostics of recent star formation activity. We obtain improved measures of the various stellar population parameters, including the star formation timescale τ, age, and dust extinction, by fitting templates jointly to both our spectroscopic and broadband photometric data. We identify which quiescent galaxies were recently quenched and backtrack their individual evolving trajectories on the UVJ color-color plane, finding evidence for two distinct quenching routes. By using sizes measured in the previous paper of this series, we confirm that the largest galaxies are indeed among the youngest at a given redshift. This is consistent with some contribution to the apparent growth from recent arrivals, an effect often called progenitor bias. However, we calculate that recently quenched objects can only be responsible for about half the increase in average size of quiescent galaxies over a 1.5 Gyr period, corresponding to the redshift interval 1.25 < z < 2. The remainder of the observed size evolution arises from a genuine growth of long-standing quiescent galaxies.

17. Assessment of minimum sample sizes required to adequately represent diversity reveals inadequacies in datasets of domestic dog mitochondrial DNA.

Science.gov (United States)

Webb, Kristen; Allard, Marc

2010-02-01

Evolutionary and forensic studies commonly choose the mitochondrial control region as the locus for which to evaluate the domestic dog. However, the number of dogs that need to be sampled in order to represent the control region variation present in the worldwide population is yet to be determined. Following the methods of Pereira et al. (2004), we have demonstrated the importance of surveying the complete control region rather than only the popular left domain. We have also evaluated sample saturation in terms of the haplotype number and the number of polymorphisms within the control region. Of the most commonly cited evolutionary research, only a single study has adequately surveyed the domestic dog population, while all forensic studies have failed to meet the minimum values. We recommend that future studies consider dataset size when designing experiments and ideally sample both domains of the control region in an appropriate number of domestic dogs.

18. The impact of fluctuations and correlations in droplet growth by collision-coalescence revisited - Part 1: Numerical calculation of post-gel droplet size distribution

Science.gov (United States)

Alfonso, Lester; Raga, Graciela B.

2017-06-01

The impact of stochastic fluctuations in cloud droplet growth is a matter of broad interest, since stochastic effects are one of the possible explanations of how cloud droplets cross the size gap and form the raindrop embryos that trigger warm rain development in cumulus clouds. Most theoretical studies on this topic rely on the use of the kinetic collection equation, or the Gillespie stochastic simulation algorithm. However, the kinetic collection equation is a deterministic equation with no stochastic fluctuations. Moreover, the traditional calculations using the kinetic collection equation are not valid when the system undergoes a transition from a continuous distribution to a distribution plus a runaway raindrop embryo (known as the sol-gel transition). On the other hand, the stochastic simulation algorithm, although intrinsically stochastic, fails to adequately reproduce the large end of the droplet size distribution due to the huge number of realizations required. Therefore, the full stochastic description of cloud droplet growth must be obtained from the solution of the master equation for stochastic coalescence. In this study the master equation is used to calculate the evolution of the droplet size distribution after the sol-gel transition. These calculations show that after the formation of the raindrop embryo, the expected droplet mass distribution strongly differs from the results obtained with the kinetic collection equation. Furthermore, the low-mass bins and bins from the gel fraction are strongly anticorrelated in the vicinity of the critical time, this being one of the possible explanations for the differences between the kinetic and stochastic approaches after the sol-gel transition. Calculations performed within the stochastic framework provide insight into the inability of explicit microphysics cloud models to explain the droplet spectral broadening observed in small, warm clouds.
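The contrast drawn above between the deterministic kinetic collection equation and a stochastic treatment can be made concrete with a minimal Gillespie-style simulation of pairwise coalescence. The constant kernel and unit initial masses below are illustrative assumptions of this sketch; the paper's calculations use physically motivated collection kernels and the full master equation.

```python
import random

def gillespie_coalescence(n_droplets, kernel=1.0, t_end=1.0, seed=0):
    """One stochastic realization of droplet coalescence (constant kernel).

    All droplets start with unit mass; each event merges a random pair.
    Returns the list of droplet masses at time t_end.
    """
    rng = random.Random(seed)
    masses = [1] * n_droplets
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        total_rate = kernel * n * (n - 1) / 2  # all pairs equally likely
        t += rng.expovariate(total_rate)       # exponential waiting time
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)         # pick a random pair
        masses[i] += masses[j]                 # merge j into i
        masses.pop(j)
    return masses
```

Unlike a mean-field calculation, repeated realizations of such a simulation expose the run-to-run fluctuations in the largest droplet, which is precisely what the master-equation approach captures analytically.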

19. The impact of fluctuations and correlations in droplet growth by collision–coalescence revisited – Part 1: Numerical calculation of post-gel droplet size distribution

Directory of Open Access Journals (Sweden)

L. Alfonso

2017-06-01

Full Text Available The impact of stochastic fluctuations in cloud droplet growth is a matter of broad interest, since stochastic effects are one of the possible explanations of how cloud droplets cross the size gap and form the raindrop embryos that trigger warm rain development in cumulus clouds. Most theoretical studies on this topic rely on the use of the kinetic collection equation, or the Gillespie stochastic simulation algorithm. However, the kinetic collection equation is a deterministic equation with no stochastic fluctuations. Moreover, the traditional calculations using the kinetic collection equation are not valid when the system undergoes a transition from a continuous distribution to a distribution plus a runaway raindrop embryo (known as the sol–gel transition). On the other hand, the stochastic simulation algorithm, although intrinsically stochastic, fails to adequately reproduce the large end of the droplet size distribution due to the huge number of realizations required. Therefore, the full stochastic description of cloud droplet growth must be obtained from the solution of the master equation for stochastic coalescence. In this study the master equation is used to calculate the evolution of the droplet size distribution after the sol–gel transition. These calculations show that after the formation of the raindrop embryo, the expected droplet mass distribution strongly differs from the results obtained with the kinetic collection equation. Furthermore, the low-mass bins and bins from the gel fraction are strongly anticorrelated in the vicinity of the critical time, this being one of the possible explanations for the differences between the kinetic and stochastic approaches after the sol–gel transition. Calculations performed within the stochastic framework provide insight into the inability of explicit microphysics cloud models to explain the droplet spectral broadening observed in small, warm clouds.

20. How taxonomic diversity, community structure, and sample size determine the reliability of higher taxon surrogates.

Science.gov (United States)

Neeson, Thomas M; Van Rijn, Itai; Mandelik, Yael

2013-07-01

Ecologists and paleontologists often rely on higher taxon surrogates instead of complete inventories of biological diversity. Despite their intrinsic appeal, the performance of these surrogates has been markedly inconsistent across empirical studies, to the extent that there is no consensus on appropriate taxonomic resolution (i.e., whether genus- or family-level categories are more appropriate) or their overall usefulness. A framework linking the reliability of higher taxon surrogates to biogeographic setting would allow for the interpretation of previously published work and provide some needed guidance regarding the actual application of these surrogates in biodiversity assessments, conservation planning, and the interpretation of the fossil record. We developed a mathematical model to show how taxonomic diversity, community structure, and sampling effort together affect three measures of higher taxon performance: the correlation between species and higher taxon richness, the relative shapes and asymptotes of species and higher taxon accumulation curves, and the efficiency of higher taxa in a complementarity-based reserve-selection algorithm. In our model, higher taxon surrogates performed well in communities in which a few common species were most abundant, and less well in communities with many equally abundant species. Furthermore, higher taxon surrogates performed well when there was a small mean and variance in the number of species per higher taxon. We also show that empirically measured species-higher-taxon correlations can be partly spurious (i.e., a mathematical artifact), except when the species accumulation curve has reached an asymptote. This particular result is of considerable practical interest given the widespread use of rapid survey methods in biodiversity assessment and the application of higher taxon methods to taxa in which species accumulation curves rarely reach an asymptote, e.g., insects.
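The interplay of community evenness, taxonomic structure, and sampling effort described above can be explored with a small simulation. The lognormal abundances, 60 species, and 15 genera below are arbitrary illustrative choices of this sketch, not the paper's model.

```python
import numpy as np

def richness_at_efforts(abundances, genus_of, efforts, rng):
    """Species and genus richness observed at increasing sampling effort."""
    p = abundances / abundances.sum()
    species_rich, genus_rich = [], []
    for n in efforts:
        draw = rng.choice(len(p), size=n, p=p)      # sample n individuals
        seen = np.unique(draw)                      # species detected
        species_rich.append(len(seen))
        genus_rich.append(len(np.unique(genus_of[seen])))
    return species_rich, genus_rich

rng = np.random.default_rng(0)
n_species = 60
genus_of = rng.integers(0, 15, size=n_species)           # random genus labels
abundances = rng.lognormal(sigma=1.5, size=n_species)    # few dominant species
sp, gen = richness_at_efforts(abundances, genus_of, [10, 100, 1000], rng)
```

In such runs the genus curve typically saturates earlier than the species curve, echoing the abstract's caveat that species-higher-taxon correlations measured before the species curve plateaus can be partly artifactual.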

1. Conducting EQ-5D Valuation Studies in Resource-Constrained Countries: The Potential Use of Shrinkage Estimators to Reduce Sample Size.

Science.gov (United States)

Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M

2018-01-01

Resource-constrained countries have difficulty conducting large EQ-5D valuation studies, which limits their ability to conduct cost-utility analyses using a value set specific to their own population. When estimates of similar but related parameters are available, shrinkage estimators reduce uncertainty and yield estimators with smaller mean square error (MSE). We hypothesized that health utilities based on shrinkage estimators can reduce MSE and mean absolute error (MAE) when compared to country-specific health utilities. We conducted a simulation study (1,000 iterations) based on the observed means and standard deviations (or standard errors) of the EQ-5D-3L valuation studies from 14 countries. In each iteration, the simulated data were fitted with the model based on the country-specific functional form of the scoring algorithm to create country-specific health utilities ("naïve" estimators). Shrinkage estimators were calculated based on the empirical Bayes estimation methods. The performance of shrinkage estimators was compared with those of the naïve estimators over a range of different sample sizes based on MSE, MAE, mean bias, standard errors and the width of confidence intervals. The MSE of the shrinkage estimators was smaller than the MSE of the naïve estimators on average, as theoretically predicted. Importantly, the MAE of the shrinkage estimators was also smaller than the MAE of the naïve estimators on average. In addition, the reduction in MSE with the use of shrinkage estimators did not substantially increase bias. The degree of reduction in uncertainty by shrinkage estimators is most apparent in valuation studies with small sample size. Health utilities derived from shrinkage estimation allow valuation studies with small sample size to "borrow strength" from other valuation studies to reduce uncertainty.
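The "borrowing strength" idea can be sketched with a method-of-moments empirical Bayes estimator that pulls each country's estimate toward the pooled mean in proportion to its sampling uncertainty. This is a generic illustration under simplified assumptions, not the scoring-algorithm-specific estimator used in the study.

```python
import numpy as np

def shrink(means, ses):
    """Empirical Bayes (method-of-moments) shrinkage of study-level means.

    Each estimate is pulled toward the pooled mean by a factor that grows
    with its standard error -- a sketch of the idea, not the paper's exact
    estimator.
    """
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    pooled = means.mean()
    # between-study variance, truncated at zero (DerSimonian-Laird flavour)
    tau2 = max(0.0, means.var(ddof=1) - np.mean(ses ** 2))
    weight = tau2 / (tau2 + ses ** 2)   # 1 = trust own data, 0 = pool fully
    return weight * means + (1 - weight) * pooled
```

A small study (large standard error) gets a small weight and is shrunk strongly toward the pooled mean, which is exactly the regime where the abstract reports the largest uncertainty reduction.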

2. Prediction errors in learning drug response from gene expression data - influence of labeling, sample size, and machine learning algorithm.

Directory of Open Access Journals (Sweden)

Immanuel Bayer

Full Text Available Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting the sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e., response could be predicted for some but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment.
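The null-model comparison described here can be sketched with a label-permutation test around a plain least-squares predictor. The two-fold split and least-squares fit are simplifications of our own; the study compared several machine learning algorithms against identically generated null models.

```python
import numpy as np

def null_model_gap(X, y, n_perm=200, seed=0):
    """Compare a least-squares predictor's error with a label-permuted null.

    Returns (real_mse, null_mses).  A sketch of the null-model comparison
    idea, not the study's pipeline.
    """
    rng = np.random.default_rng(seed)

    def cv_mse(y_vec):
        half = len(y_vec) // 2                  # simple 2-fold split
        mses = []
        for tr, te in [(slice(0, half), slice(half, None)),
                       (slice(half, None), slice(0, half))]:
            coef, *_ = np.linalg.lstsq(X[tr], y_vec[tr], rcond=None)
            mses.append(np.mean((X[te] @ coef - y_vec[te]) ** 2))
        return np.mean(mses)

    real = cv_mse(y)
    null = np.array([cv_mse(rng.permutation(y)) for _ in range(n_perm)])
    return real, null
```

A compound counts as "predictable" only when the real-label error falls clearly below the distribution of permuted-label errors; for a compound with no learnable signal, the two are indistinguishable.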

3. Influence of pH, Temperature and Sample Size on Natural and Enforced Syneresis of Precipitated Silica

Directory of Open Access Journals (Sweden)

Sebastian Wilhelm

2015-12-01

Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Two key open questions are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH, temperature and sample-size dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30% in comparison to measured values of 20% for natural syneresis.

4. Spatial Distribution and Minimum Sample Size for Overwintering Larvae of the Rice Stem Borer Chilo suppressalis (Walker) in Paddy Fields.

Science.gov (United States)

Arbab, A

2014-10-01

The rice stem borer, Chilo suppressalis (Walker), feeds almost exclusively in paddy fields in most regions of the world. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling procedures, and adopting precise agricultural techniques. Field experiments were conducted during 2011 and 2012 to estimate the spatial distribution pattern of the overwintering larvae. Data were analyzed using five distribution indices and two regression models (Taylor and Iwao). All of the indices and Taylor's model indicated a random spatial distribution pattern of the rice stem borer overwintering larvae. Iwao's patchiness regression was inappropriate for our data, as shown by the non-homogeneity of variance, whereas Taylor's power law fitted the data well. The coefficients of Taylor's power law for the combined 2 years of data were a = -0.1118, b = 0.9202 ± 0.02, and r² = 96.81. Taylor's power law parameters were used to compute the minimum sample size needed to estimate populations at three fixed precision levels (5, 10, and 25%) at the 0.05 probability level. Based on these parameters, the minimum sample sizes needed for a precision level of 0.25 were 74 and 20 rice stubbles when the average density is near 0.10 and 0.20 larvae per rice stubble, respectively.
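The minimum-sample-size computation from Taylor's power law can be sketched as follows: with variance s² = a·m^b and precision D defined as the ratio of standard error to mean, n = a·m^(b−2)/D². Treating the reported coefficient −0.1118 as the log10 intercept of the fitted regression (so a = 10^−0.1118) is our interpretation, and the function below does not attempt to reproduce the paper's exact values.

```python
def min_sample_size(mean, log10_a=-0.1118, b=0.9202, precision=0.25):
    """Minimum sample size from Taylor's power law, s^2 = a * m**b.

    With precision D = SE/mean and SE = sqrt(s^2 / n), solving for n gives
    n = a * m**(b - 2) / D**2.  Interpreting the reported intercept
    (-0.1118) as log10(a) is an assumption of this sketch.
    """
    a = 10 ** log10_a
    return a * mean ** (b - 2) / precision ** 2
```

Because b < 2 here, the required n falls as density rises, consistent with the smaller sample needed at 0.20 than at 0.10 larvae per stubble.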

5. Calculation of upper confidence bounds on proportion of area containing not-sampled vegetation types: An application to map unit definition for existing vegetation maps

Science.gov (United States)

Paul L. Patterson; Mark Finco

2011-01-01

This paper explores the information that forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977)....
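The Bernoulli reduction leads to a closed-form bound: if a vegetation type is observed in none of n sampled plots, the exact one-sided upper confidence bound on the proportion p of area it occupies solves (1 − p)^n = α. This is a minimal sketch of that reduction, ignoring the finite-population and survey-design details handled via Cochran.

```python
def upper_bound_unseen(n, alpha=0.05):
    """Exact one-sided upper confidence bound on the areal proportion of a
    vegetation type observed in 0 of n plots: solve (1 - p)**n = alpha."""
    return 1 - alpha ** (1.0 / n)
```

For n = 60 plots at α = 0.05 this gives roughly 0.049, close to the familiar "rule of three" approximation 3/n.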

6. Endocranial volume of Australopithecus africanus: new CT-based estimates and the effects of missing data and small sample size.

Science.gov (United States)

Neubauer, Simon; Gunz, Philipp; Weber, Gerhard W; Hublin, Jean-Jacques

2012-04-01

Estimation of endocranial volume in Australopithecus africanus is important in interpreting early hominin brain evolution. However, the number of individuals available for investigation is limited and most of these fossils are, to some degree, incomplete and/or distorted. Uncertainties of the required reconstruction ('missing data uncertainty') and the small sample size ('small sample uncertainty') both potentially bias estimates of the average and within-group variation of endocranial volume in A. africanus. We used CT scans, electronic preparation (segmentation), mirror-imaging and semilandmark-based geometric morphometrics to generate and reconstruct complete endocasts for Sts 5, Sts 60, Sts 71, StW 505, MLD 37/38, and Taung, and measured their endocranial volumes (EV). To get a sense of the reliability of these new EV estimates, we then used simulations based on samples of chimpanzees and humans to: (a) test the accuracy of our approach, (b) assess missing data uncertainty, and (c) appraise small sample uncertainty. Incorporating missing data uncertainty of the five adult individuals, A. africanus was found to have an average adult endocranial volume of 454-461 ml with a standard deviation of 66-75 ml. EV estimates for the juvenile Taung individual range from 402 to 407 ml. Our simulations show that missing data uncertainty is small given the missing portions of the investigated fossils, but that small sample sizes are problematic for estimating species average EV. It is important to take these uncertainties into account when different fossil groups are being compared. Copyright © 2012 Elsevier Ltd. All rights reserved.
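The effect of small sample size on a species-average estimate can be illustrated with a percentile bootstrap. The study's actual simulations resampled chimpanzee and human reference samples; the five volumes below are hypothetical values near the reported mean, not the measured fossils.

```python
import random
import statistics

def bootstrap_mean_interval(sample, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a sample mean."""
    rng = random.Random(seed)
    boots = sorted(
        statistics.mean(rng.choices(sample, k=len(sample)))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical endocranial volumes (ml) for five adults -- illustrative only
evs = [485, 428, 370, 575, 435]
lo, hi = bootstrap_mean_interval(evs)
```

With only five observations the interval is wide, mirroring the "small sample uncertainty" the authors emphasize when estimating the species-average endocranial volume.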

7. Arecibo Radar Observation of Near-Earth Asteroids: Expanded Sample Size, Determination of Radar Albedos, and Measurements of Polarization Ratios

Science.gov (United States)

Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.

2017-10-01

8. A method to separate conservative and magnetically-induced electric fields in calculations for MRI and MRS in electrically-small samples.

Science.gov (United States)

Park, BuSik; Webb, Andrew G; Collins, Christopher M

2009-08-01

This work presents a method to separately analyze the conservative electric fields (E(c), primarily originating from the scalar electric potential in the coil winding), and the magnetically-induced electric fields (E(i), caused by the time-varying magnetic field B1) within samples that are much smaller than one wavelength at the frequency of interest. The method consists of first using a numerical simulation method to calculate the total electric field (E(t)) and conduction currents (J), then calculating E(i) based on J, and finally calculating E(c) by subtracting E(i) from E(t). The method was applied to calculate electric fields for a small cylindrical sample in a solenoid at 600 MHz. When a non-conductive sample was modeled, calculated values of E(i) and E(c) were at least in rough agreement with very simple analytical approximations. When the sample was given dielectric and/or conductive properties, E(c) was seen to decrease, but still remained much larger than E(i). When a recently-published approach to reduce heating by placing a passive conductor in the shape of a slotted cylinder between the coil and sample was modeled, reduced E(c) and improved B1 homogeneity within the sample resulted, in agreement with the published results.

9. Capture efficiency and size selectivity of sampling gears targeting red-swamp crayfish in several freshwater habitats

Directory of Open Access Journals (Sweden)

Paillisson J.-M.

2011-05-01

Full Text Available The ecological importance of the red-swamp crayfish (Procambarus clarkii) in the functioning of freshwater aquatic ecosystems is becoming more evident. It is important to know the limitations of sampling methods targeting this species, because accurate determination of population characteristics is required for predicting the ecological success of P. clarkii and its potential impacts on invaded ecosystems. In the current study, we addressed the question of trap efficiency by comparing the population structure provided by eight trap devices (varying in number and position of entrances, mesh size, trap size and construction materials) in three habitats (a pond, a reed bed and a grassland) in a French marsh in spring 2010. Based on a large collection of P. clarkii (n = 2091, 272 and 213 respectively in the pond, reed bed and grassland habitats), we found that semi-cylindrical traps made from 5.5 mm mesh galvanized steel wire (SCG) were the most efficient in terms of catch probability (96.7–100%, compared to 15.7–82.8% depending on trap type and habitat) and catch-per-unit effort (CPUE: 15.3, 6.0 and 5.1 crayfish·trap⁻¹·24 h⁻¹, compared to 0.2–4.4, 2.9 and 1.7 crayfish·trap⁻¹·24 h⁻¹ for the other types of fishing gear in the pond, reed bed and grassland, respectively). The SCG trap was also the most effective for sampling all size classes, especially small individuals (carapace length ≤ 30 mm). Sex ratio was balanced in all cases. SCG could be considered appropriate trapping gear, likely giving more realistic information about P. clarkii population characteristics than many other trap types. Further investigation is needed to assess the catching effort required for ultimately proposing a standardised sampling method in a large range of habitats.