WorldWideScience

Sample records for sample size target

  1. Sample size calculations for clinical trials targeting tauopathies: A new potential disease target

    Science.gov (United States)

    Whitwell, Jennifer L.; Duffy, Joseph R.; Strand, Edythe A.; Machulda, Mary M.; Tosakulwong, Nirubol; Weigand, Stephen D.; Senjem, Matthew L.; Spychalla, Anthony J.; Gunter, Jeffrey L.; Petersen, Ronald C.; Jack, Clifford R.; Josephs, Keith A.

    2015-01-01

    Disease-modifying therapies are being developed to target tau pathology, and should, therefore, be tested in primary tauopathies. We propose that progressive apraxia of speech should be considered one such target group. In this study, we investigate potential neuroimaging and clinical outcome measures for progressive apraxia of speech and determine sample size estimates for clinical trials. We prospectively recruited 24 patients with progressive apraxia of speech who underwent two serial MRI scans with an interval of approximately two years. Detailed speech and language assessments included the Apraxia of Speech Rating Scale (ASRS) and Motor Speech Disorders (MSD) severity scale. Rates of ventricular expansion and rates of whole brain, striatal and midbrain atrophy were calculated. Atrophy rates across 38 cortical regions were also calculated and the regions that best differentiated patients from controls were selected. Sample size estimates required to power placebo-controlled treatment trials were calculated. The smallest sample size estimates were obtained with rates of atrophy of the precentral gyrus and supplementary motor area, with both measures requiring fewer than 50 subjects per arm to detect a 25% treatment effect with 80% power. These measures outperformed the other regional and global MRI measures and the clinical scales. Regional rates of cortical atrophy therefore provide the best outcome measures in progressive apraxia of speech. The small sample size estimates demonstrate feasibility for including progressive apraxia of speech in future clinical treatment trials targeting tau. PMID:26076744
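
    As an illustration of the kind of calculation behind the "fewer than 50 subjects per arm to detect a 25% treatment effect with 80% power" statement, here is a minimal Python sketch of the standard two-arm sample size formula. The atrophy-rate mean and SD used below are hypothetical placeholders, not values reported in the study.

    # Minimal sketch of the standard two-arm sample size calculation used for
    # "25% treatment effect, 80% power" statements; the atrophy-rate mean and SD
    # below are illustrative placeholders, not values from the study.
    from scipy.stats import norm

    def n_per_arm(mean_rate, sd_rate, effect=0.25, alpha=0.05, power=0.80):
        """Subjects per arm to detect a fractional slowing `effect` of the mean decline rate."""
        z_a = norm.ppf(1 - alpha / 2)   # two-sided significance
        z_b = norm.ppf(power)
        delta = effect * mean_rate      # absolute difference in annual rate
        return 2 * (z_a + z_b) ** 2 * sd_rate ** 2 / delta ** 2

    # e.g. a hypothetical annualized atrophy rate of 2.5 %/yr with SD 0.9 %/yr
    print(round(n_per_arm(2.5, 0.9)))   # -> 33 subjects per arm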

  2. Sample size for beginners.

    OpenAIRE

    Florey, C D

    1993-01-01

    The common failure to include an estimation of sample size in grant proposals imposes a major handicap on applicants, particularly for those proposing work in any aspect of research in the health services. Members of research committees need evidence that a study is of adequate size for there to be a reasonable chance of a clear answer at the end. A simple illustrated explanation of the concepts in determining sample size should encourage the faint hearted to pay more attention to this increasingly important aspect of grantsmanship.

  3. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination, and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria...

  4. Determination of Sample Size

    OpenAIRE

    Naing, Nyi Nyi

    2003-01-01

    It is particularly important to determine the basic minimum required sample size ('n') needed to estimate a particular measurement of a particular population. This article highlights the determination of an appropriate sample size to estimate population parameters.

  5. Sample size for beginners.

    Science.gov (United States)

    Florey, C D

    1993-05-01

    The common failure to include an estimation of sample size in grant proposals imposes a major handicap on applicants, particularly for those proposing work in any aspect of research in the health services. Members of research committees need evidence that a study is of adequate size for there to be a reasonable chance of a clear answer at the end. A simple illustrated explanation of the concepts in determining sample size should encourage the faint hearted to pay more attention to this increasingly important aspect of grantsmanship.

  6. Ethics and sample size.

    Science.gov (United States)

    Bacchetti, Peter; Wolf, Leslie E; Segal, Mark R; McCulloch, Charles E

    2005-01-15

    The belief is widespread that studies are unethical if their sample size is not large enough to ensure adequate power. The authors examine how sample size influences the balance that determines the ethical acceptability of a study: the balance between the burdens that participants accept and the clinical or scientific value that a study can be expected to produce. The average projected burden per participant remains constant as the sample size increases, but the projected study value does not increase as rapidly as the sample size if it is assumed to be proportional to power or inversely proportional to confidence interval width. This implies that the value per participant declines as the sample size increases and that smaller studies therefore have more favorable ratios of projected value to participant burden. The ethical treatment of study participants therefore does not require consideration of whether study power is less than the conventional goal of 80% or 90%. Lower power does not make a study unethical. The analysis addresses only ethical acceptability, not optimality; large studies may be desirable for other than ethical reasons.
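
    The quantitative point in this argument can be seen in a small sketch: if projected study value is taken to be proportional to statistical power, the value contributed per participant (power divided by n) falls as the sample size grows. The two-sample setting and the standardized effect size of 0.5 below are illustrative assumptions, not values from the paper.

    # Sketch of the paper's argument: if projected study value is proportional to
    # statistical power, the value contributed per participant (power / n) falls
    # as the sample size grows. Effect size 0.5 is an arbitrary illustration.
    from scipy.stats import norm

    def power_two_sample(n_per_arm, effect_size=0.5, alpha=0.05):
        # normal-approximation power for a two-sided two-sample comparison
        z_a = norm.ppf(1 - alpha / 2)
        return norm.cdf(effect_size * (n_per_arm / 2) ** 0.5 - z_a)

    for n in (20, 40, 80, 160):
        p = power_two_sample(n)
        print(n, round(p, 2), round(p / n, 4))   # power rises, power/n falls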

  7. Capture efficiency and size selectivity of sampling gears targeting red-swamp crayfish in several freshwater habitats

    Directory of Open Access Journals (Sweden)

    Paillisson J.-M.

    2011-05-01

    The ecological importance of the red-swamp crayfish (Procambarus clarkii) in the functioning of freshwater aquatic ecosystems is becoming more evident. It is important to know the limitations of sampling methods targeting this species, because accurate determination of population characteristics is required for predicting the ecological success of P. clarkii and its potential impacts on invaded ecosystems. In the current study, we addressed the question of trap efficiency by comparing population structure provided by eight trap devices (varying in number and position of entrances, mesh size, trap size and construction materials) in three habitats (a pond, a reed bed and a grassland) in a French marsh in spring 2010. Based on a large collection of P. clarkii (n = 2091, 272 and 213 respectively in the pond, reed bed and grassland habitats), we found that semi-cylindrical traps made from 5.5 mm mesh galvanized steel wire (SCG) were the most efficient in terms of catch probability (96.7–100%, compared to 15.7–82.8% depending on trap types and habitats) and catch-per-unit effort (CPUE: 15.3, 6.0 and 5.1 crayfish·trap⁻¹·24 h⁻¹, compared to 0.2–4.4, 2.9 and 1.7 crayfish·trap⁻¹·24 h⁻¹ for the other types of fishing gear in the pond, reed bed and grassland, respectively). The SCG trap was also the most effective for sampling all size classes, especially small individuals (carapace length ≤ 30 mm). Sex ratio was balanced in all cases. SCG could be considered appropriate trapping gear, likely to give more realistic information about P. clarkii population characteristics than many other trap types. Further investigation is needed to assess the catching effort required for ultimately proposing a standardised sampling method in a large range of habitats.

  8. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  9. Biostatistics Series Module 5: Determining Sample Size.

    Science.gov (United States)

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Although the principles are long known, historically, sample size determination has been difficult because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many packages can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful.
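
    As one example of the kind of software this module refers to, the sketch below uses the power routines in the Python statsmodels package (assuming that package is available; the module itself does not prescribe any particular tool) to solve for the per-group sample size of a two-sample t-test at an illustrative standardized effect size of 0.5.

    # One example of power/sample size software: statsmodels' power routines
    # (assuming statsmodels is installed). Solves for the per-group sample size
    # of a two-sample t-test at a standardized effect size of 0.5.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                             ratio=1.0, alternative='two-sided')
    print(round(n))   # -> 64 per group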

  10. Sample size estimation and sampling techniques for selecting a representative sample

    OpenAIRE

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study...

  11. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…

  12. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables)/standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
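
    The factors listed above map onto the standard formula for the sample size needed to estimate a proportion with a given precision. The sketch below is a minimal illustration with hypothetical inputs; the finite population correction corresponds to the "size of the study population" factor mentioned in the abstract.

    # Sketch of the standard sample size formula for estimating a proportion with
    # a given precision, plus the finite population correction alluded to via the
    # "size of the study population". All inputs are hypothetical.
    from math import ceil
    from scipy.stats import norm

    def n_for_proportion(p, margin, confidence=0.95, population=None):
        z = norm.ppf(1 - (1 - confidence) / 2)
        n0 = z ** 2 * p * (1 - p) / margin ** 2      # infinite-population size
        if population is not None:                   # finite population correction
            n0 = n0 / (1 + (n0 - 1) / population)
        return ceil(n0)

    print(n_for_proportion(0.30, 0.05))                    # -> 323
    print(n_for_proportion(0.30, 0.05, population=2000))   # -> 278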

  13. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
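
    The general idea (not the authors' actual implementation) can be sketched with SciPy: fit an inverse power law, accuracy(x) = a - b*x^(-c), to the early points of a learning curve by weighted nonlinear least squares and extrapolate to larger annotated-sample sizes. The data points, starting values and weighting scheme below are all made-up illustrations.

    # Sketch of the general idea (not the authors' exact code): fit an inverse
    # power law to the early part of a learning curve by weighted nonlinear
    # least squares, then extrapolate to larger annotated-sample sizes.
    import numpy as np
    from scipy.optimize import curve_fit

    def inv_power(x, a, b, c):
        return a - b * np.power(x, -c)

    sizes = np.array([50, 100, 200, 400, 800])          # annotated examples used
    acc = np.array([0.62, 0.70, 0.76, 0.80, 0.83])      # observed accuracies (made up)
    point_sigma = 1.0 / np.sqrt(sizes)                  # rough uncertainty proxy:
                                                        # larger samples get more weight
    params, cov = curve_fit(inv_power, sizes, acc, p0=[0.9, 1.0, 0.5],
                            sigma=point_sigma, absolute_sigma=False)

    for target in (2000, 5000):
        print(target, round(inv_power(target, *params), 3))  # predicted accuracy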

  14. Medium size polarised deuteron target

    Energy Technology Data Exchange (ETDEWEB)

    Kiselev, Yu.F.; Polyakov, V.V.; Kovalev, A.I.; Bunyatova, E.I.; Borisov, N.S.; Trautman, V.Yu.; Werner, K.; Kozlenko, N.G.

    1984-03-01

    A frozen polarised deuteron target based on ethanediol with a high percentage of deuterium is described. Analytical expressions for the NMR spectrum correction for non-linearity of the Q-meter are obtained and a method for the determination of the asymmetry is developed. Experimental results confirm the thermal mixing theory for deuteron and proton spin systems with a dipole-dipole reservoir of electron spins.

  15. How to calculate sample size and why.

    Science.gov (United States)

    Kim, Jeehyoung; Seo, Bong Soo

    2013-09-01

    Calculating the sample size is essential to reduce the cost of a study and to prove the hypothesis effectively. Referring to pilot studies and previous research studies, we can choose a proper hypothesis and simplify the studies by using a website or Microsoft Excel sheet that contains formulas for calculating sample size in the beginning stage of the study. There are numerous formulas for calculating the sample size for complicated statistics and studies, but most studies can use basic calculating methods for sample size calculation.

  16. Sample size determination for the fluctuation experiment.

    Science.gov (United States)

    Zheng, Qi

    2017-01-01

    The Luria-Delbrück fluctuation experiment protocol is increasingly employed to determine microbial mutation rates in the laboratory. An important question raised at the planning stage is "How many cultures are needed?" For over 70 years sample sizes have been determined either by intuition or by following published examples where sample sizes were chosen intuitively. This paper proposes a practical method for determining the sample size. The proposed method relies on existing algorithms for computing the expected Fisher information under two commonly used mutant distributions. The role of partial plating in reducing sample size is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Additional Considerations in Determining Sample Size.

    Science.gov (United States)

    Levin, Joel R.; Subkoviak, Michael J.

    Levin's (1975) sample-size determination procedure for completely randomized analysis of variance designs is extended to designs in which information on antecedent or blocking variables is considered. In particular, a researcher's choice of designs is framed in terms of determining the respective sample sizes necessary to detect specified contrasts…

  18. Determining Sample Size for Research Activities

    Science.gov (United States)

    Krejcie, Robert V.; Morgan, Daryle W.

    1970-01-01

    A formula for determining sample size, which originally appeared in 1960, has lacked a table for easy reference. This article supplies a graph of the function and a table of values which permits easy determination of the size of sample needed to be representative of a given population. (DG)

  19. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning...

  20. Basic Statistical Concepts for Sample Size Estimation

    Directory of Open Access Journals (Sweden)

    Vithal K Dhulkhed

    2008-01-01

    For grant proposals the investigator has to include an estimation of sample size. The size of the sample should be adequate so that there is sufficient data to reliably answer the research question being addressed by the study. At the very planning stage of the study the investigator has to involve the statistician. To have a meaningful dialogue with the statistician, every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations to help the reader assimilate the fundamentals.

  1. Particle size distribution in ground biological samples.

    Science.gov (United States)

    Koglin, D; Backhaus, F; Schladot, J D

    1997-05-01

    Modern trace and retrospective analysis of Environmental Specimen Bank (ESB) samples requires surplus material prepared and characterized as reference materials. Before the biological samples can be analyzed and stored for long periods at cryogenic temperatures, the materials have to be pre-crushed. As a second step, a milling and homogenization procedure has to follow. For this preparation, a grinding device is cooled with liquid nitrogen to a temperature of -190 degrees C. A key condition for homogeneous samples is that at least 90% of the particles should be smaller than 200 microns. In the German ESB the particle size distribution of the processed material is determined by means of a laser particle sizer. The decrease of particle sizes of deer liver and bream muscles after different grinding procedures, as well as the consequences of ultrasonic treatment of the sample before particle size measurements, have been investigated.

  2. Determining sample size for tree utilization surveys

    Science.gov (United States)

    Stanley J. Zarnoch; James W. Bentley; Tony G. Johnson

    2004-01-01

    The U.S. Department of Agriculture Forest Service has conducted many studies to determine what proportion of the timber harvested in the South is actually utilized. This paper describes the statistical methods used to determine required sample sizes for estimating utilization ratios for a required level of precision. The data used are those for 515 hardwood and 1,557...

  3. Improving your Hypothesis Testing: Determining Sample Sizes.

    Science.gov (United States)

    Luftig, Jeffrey T.; Norton, Willis P.

    1982-01-01

    This article builds on an earlier discussion of the importance of the Type II error (beta) and power to the hypothesis testing process (CE 511 484), and illustrates the methods by which sample size calculations should be employed so as to improve the research process. (Author/CT)

  4. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m², and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.

  5. Determining sample size when assessing mean equivalence.

    Science.gov (United States)

    Asberg, Arne; Solem, Kristine B; Mikkelsen, Gustav

    2014-11-01

    When we want to assess whether two analytical methods are equivalent, we could test if the difference between the mean results is within the specification limits of 0 ± an acceptance criterion. Testing the null hypothesis of zero difference is less interesting, and so is the sample size estimation based on testing that hypothesis. Power function curves for equivalence testing experiments are not widely available. In this paper we present power function curves to help decide on the number of measurements when testing equivalence between the means of two analytical methods. Computer simulation was used to calculate the probability that the 90% confidence interval for the difference between the means of two analytical methods would exceed the specification limits of 0 ± 1, 0 ± 2 or 0 ± 3 analytical standard deviations (SDa), respectively. The probability of getting a nonequivalence alarm increases with increasing difference between the means when the difference is well within the specification limits. The probability increases with decreasing sample size and with smaller acceptance criteria. We may need at least 40-50 measurements with each analytical method when the specification limits are 0 ± 1 SDa, and 10-15 and 5-10 when the specification limits are 0 ± 2 and 0 ± 3 SDa, respectively. The power function curves provide information of the probability of false alarm, so that we can decide on the sample size under less uncertainty.
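
    A minimal sketch of the kind of simulation described here: for a given number of measurements per method, estimate the probability that the 90% confidence interval for the difference between the two method means extends beyond specification limits of 0 ± L analytical SDs. The true bias of zero, the limit of 1 SDa and the simulation size are illustrative choices, not the paper's exact settings.

    # Sketch of the kind of simulation the paper describes: probability that the
    # 90% CI for the difference of two method means falls outside specification
    # limits of 0 +/- L analytical SDs. True bias and L are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def p_nonequivalence_alarm(n, true_bias=0.0, limit=1.0, sims=20000):
        alarms = 0
        for _ in range(sims):
            a = rng.normal(0.0, 1.0, n)            # method A, SDa = 1
            b = rng.normal(true_bias, 1.0, n)      # method B with a true bias
            diff = b.mean() - a.mean()
            se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
            t = stats.t.ppf(0.95, 2 * n - 2)       # 90% two-sided CI
            lo, hi = diff - t * se, diff + t * se
            alarms += (lo < -limit) or (hi > limit)
        return alarms / sims

    for n in (10, 20, 40):
        print(n, p_nonequivalence_alarm(n))   # alarm probability shrinks as n grows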

  6. Sample size calculations for skewed distributions.

    Science.gov (United States)

    Cundill, Bonnie; Alexander, Neal D E

    2015-04-02

    Sample size calculations should correspond to the intended method of analysis. Nevertheless, for non-normal distributions, they are often done on the basis of normal approximations, even when the data are to be analysed using generalized linear models (GLMs). For the case of comparison of two means, we use GLM theory to derive sample size formulae, with particular cases being the negative binomial, Poisson, binomial, and gamma families. By simulation we estimate the performance of normal approximations, which, via the identity link, are special cases of our approach, and for common link functions such as the log. The negative binomial and gamma scenarios are motivated by examples in hookworm vaccine trials and insecticide-treated materials, respectively. Calculations on the link function (log) scale work well for the negative binomial and gamma scenarios examined and are often superior to the normal approximations. However, they have little advantage for the Poisson and binomial distributions. The proposed method is suitable for sample size calculations for comparisons of means of highly skewed outcome variables.
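
    As a hedged illustration of a log-link, delta-method calculation of the kind this paper derives (the exact formulae are in the paper), the sketch below computes a per-group sample size for comparing two negative binomial means on the log scale; the means, dispersion and effect are made-up values loosely inspired by the hookworm example.

    # Hedged sketch of a log-link, delta-method sample size formula for comparing
    # two negative binomial means (the general kind of result the paper derives;
    # see the paper for the exact formulae). mu0/mu1 are group means, k is the
    # common dispersion; all values here are illustrative.
    from math import log, ceil
    from scipy.stats import norm

    def n_per_group_negbin(mu0, mu1, k, alpha=0.05, power=0.8):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        var_log = (1 / mu0 + 1 / k) + (1 / mu1 + 1 / k)   # n * Var of the log-mean contrast
        return ceil(z ** 2 * var_log / log(mu0 / mu1) ** 2)

    # e.g. a 50% reduction in mean count from 10 to 5 with dispersion k = 0.5
    print(n_per_group_negbin(10, 5, 0.5))   # -> 71 per group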

  7. Defining sample size and sampling strategy for dendrogeomorphic rockfall reconstructions

    Science.gov (United States)

    Morel, Pauline; Trappmann, Daniel; Corona, Christophe; Stoffel, Markus

    2015-05-01

    Optimized sampling strategies have been recently proposed for dendrogeomorphic reconstructions of mass movements with a large spatial footprint, such as landslides, snow avalanches, and debris flows. Such guidelines have, by contrast, been largely missing for rockfalls and cannot be transposed owing to the sporadic nature of this process and the occurrence of individual rocks and boulders. Based on a data set of 314 European larch (Larix decidua Mill.) trees (i.e., 64 trees/ha), growing on an active rockfall slope, this study bridges this gap and proposes an optimized sampling strategy for the spatial and temporal reconstruction of rockfall activity. Using random extractions of trees, iterative mapping, and a stratified sampling strategy based on an arbitrary selection of trees, we investigate subsets of the full tree-ring data set to define optimal sample size and sampling design for the development of frequency maps of rockfall activity. Spatially, our results demonstrate that the sampling of only 6 representative trees per ha can be sufficient to yield a reasonable mapping of the spatial distribution of rockfall frequencies on a slope, especially if the oldest and most heavily affected individuals are included in the analysis. At the same time, however, sampling such a low number of trees risks causing significant errors especially if nonrepresentative trees are chosen for analysis. An increased number of samples therefore improves the quality of the frequency maps in this case. Temporally, we demonstrate that at least 40 trees/ha are needed to obtain reliable rockfall chronologies. These results will facilitate the design of future studies, decrease the cost-benefit ratio of dendrogeomorphic studies and thus will permit production of reliable reconstructions with reasonable temporal efforts.

  8. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    Science.gov (United States)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size

  9. Sample Size Growth with an Increasing Number of Comparisons

    Directory of Open Access Journals (Sweden)

    Chi-Hong Tseng

    2012-01-01

    An appropriate sample size is crucial for the success of many studies that involve a large number of comparisons. Sample size formulas for testing multiple hypotheses are provided in this paper. They can be used to determine the sample sizes required to provide adequate power while controlling familywise error rate or false discovery rate, to derive the growth rate of sample size with respect to an increasing number of comparisons or decrease in effect size, and to assess reliability of study designs. It is demonstrated that practical sample sizes can often be achieved even when adjustments for a large number of comparisons are made as in many genomewide studies.
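
    A small sketch of the growth being described: with Bonferroni control of the familywise error rate, the per-comparison sample size for a two-group comparison grows only slowly (roughly linearly in log m) as the number of comparisons m increases. The standardized effect size of 0.5 is an illustrative assumption, and Bonferroni is used here only as the simplest familywise adjustment, not as the paper's specific method.

    # Sketch of sample size growth with the number of comparisons m, using a
    # Bonferroni-adjusted two-sided two-group comparison. Effect size 0.5 is
    # illustrative; the paper's own formulas are more general.
    from math import ceil
    from scipy.stats import norm

    def n_per_group(m, effect=0.5, alpha=0.05, power=0.8):
        z_a = norm.ppf(1 - alpha / (2 * m))    # Bonferroni-adjusted two-sided test
        z_b = norm.ppf(power)
        return ceil(2 * (z_a + z_b) ** 2 / effect ** 2)

    for m in (1, 10, 100, 10000, 1000000):
        print(m, n_per_group(m))   # grows roughly linearly in log(m)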

  10. A medium size polarised deuteron target

    Science.gov (United States)

    Kiselev, Yu. F.; Polyakov, V. V.; Kovalev, A. I.; Bunyatova, E. I.; Borisov, N. S.; Trautman, V. Yu.; Werner, K.; Kozlenko, N. G.

    1984-03-01

    A frozen polarised deuteron target based on ethanediol with a high percentage of deuterium is described. Analytical expressions for the NMR spectrum correction for non-linearity of the Q-meter are obtained and a method for the determination of the asymmetry is developed. Experimental results confirm the thermal mixing theory for deuteron and proton spin systems with a dipole-dipole reservoir of electron spins.

  11. An expert system for the calculation of sample size.

    Science.gov (United States)

    Ebell, M H; Neale, A V; Hodgkins, B J

    1994-06-01

    Calculation of sample size is a useful technique for researchers who are designing a study, and for clinicians who wish to interpret research findings. The elements that must be specified to calculate the sample size include alpha, beta, Type I and Type II errors, 1- and 2-tail tests, confidence intervals, and confidence levels. A computer software program written by one of the authors (MHE), Sample Size Expert, facilitates sample size calculations. The program uses an expert system to help inexperienced users calculate sample sizes for analytic and descriptive studies. The software is available at no cost from the author or electronically via several on-line information services.

  12. Optimal flexible sample size design with robust power.

    Science.gov (United States)

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Planning Educational Research: Determining the Necessary Sample Size.

    Science.gov (United States)

    Olejnik, Stephen F.

    1984-01-01

    This paper discusses the sample size problem and four factors affecting its solution: significance level, statistical power, analysis procedure, and effect size. The interrelationship between these factors is discussed and demonstrated by calculating minimal sample size requirements for a variety of research conditions. (Author)

  14. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  15. Sample size determination in medical and surgical research.

    Science.gov (United States)

    Flikkema, Robert M; Toledo-Pereyra, Luis H

    2012-02-01

    One of the most critical yet frequently misunderstood principles of research is sample size determination. Obtaining an inadequate sample is a serious problem that can invalidate an entire study. Without an extensive background in statistics, the seemingly simple question of selecting a sample size can become quite a daunting task. This article aims to give a researcher with no background in statistics the basic tools needed for sample size determination. After reading this article, the researcher will be aware of all the factors involved in a power analysis and will be able to work more effectively with the statistician when determining sample size. This work also reviews the power of a statistical hypothesis, as well as how to estimate the effect size of a research study. These are the two key components of sample size determination. Several examples will be considered throughout the text.

  16. A review of software for sample size determination.

    Science.gov (United States)

    Dattalo, Patrick

    2009-09-01

    The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities.

  17. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
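
    For the proportion (qualitative) outcome case the article mentions, the usual per-arm formula compares two event proportions. The sketch below is a generic illustration with hypothetical proportions, not the article's own worked example.

    # Sketch of the proportion-outcome case: per-arm sample size for a two-arm
    # RCT comparing event proportions p1 and p2 (values are illustrative).
    from math import ceil
    from scipy.stats import norm

    def n_per_arm_proportions(p1, p2, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        pbar = (p1 + p2) / 2
        num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
               + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return ceil(num / (p1 - p2) ** 2)

    print(n_per_arm_proportions(0.40, 0.25))   # -> 152 per arm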

  18. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    Science.gov (United States)

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation might not be easy for them. This review paper provides sample size tables with regard to sensitivity and specificity analysis. These tables were derived from formulation of sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
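
    The paper's tables were generated with PASS, but the commonly cited closed-form approximation (often attributed to Buderer, 1996) can be sketched as below: the number of diseased subjects needed to estimate sensitivity within a given margin of error, scaled up by the disease prevalence to a total sample size. All inputs are illustrative, and this is an approximation rather than a reproduction of the paper's tables.

    # Hedged sketch of the commonly cited formula (often attributed to Buderer,
    # 1996) for the total sample size needed to estimate sensitivity within a
    # given margin of error, allowing for disease prevalence. Values illustrative.
    from math import ceil
    from scipy.stats import norm

    def n_total_for_sensitivity(sens, margin, prevalence, confidence=0.95):
        z = norm.ppf(1 - (1 - confidence) / 2)
        n_diseased = z ** 2 * sens * (1 - sens) / margin ** 2
        return ceil(n_diseased / prevalence)    # scale up for non-diseased subjects

    print(n_total_for_sensitivity(sens=0.90, margin=0.05, prevalence=0.20))  # -> 692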

  19. Determination of the optimal sample size for a clinical trial accounting for the population size

    Science.gov (United States)

    Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2016-01-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision‐theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two‐arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. PMID:27184938

  20. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  1. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.

  2. Estimating population size with correlated sampling unit estimates

    Science.gov (United States)

    David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

    2003-01-01

    Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...

  3. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    sample size for case–control association studies is discussed. Materials and methods: Parameter settings. We consider a candidate locus with two alleles A and a, where A is putatively associated with the disease status (increasing…). Keywords: sample size; association tests; genotype relative risk; power; autism. Journal of ...

  4. Understanding Power and Rules of Thumb for Determining Sample Sizes

    OpenAIRE

    Betsy L. Morgan; Carmen R. Wilson Van Voorhis

    2007-01-01

    This article addresses the definition of power and its relationship to Type I and Type II errors. We discuss the relationship of sample size and power. Finally, we offer statistical rules of thumb guiding the selection of sample sizes large enough for sufficient power to detect differences, associations, chi-square, and factor analyses.

  5. Understanding Power and Rules of Thumb for Determining Sample Sizes

    Directory of Open Access Journals (Sweden)

    Betsy L. Morgan

    2007-09-01

    This article addresses the definition of power and its relationship to Type I and Type II errors. We discuss the relationship of sample size and power. Finally, we offer statistical rules of thumb guiding the selection of sample sizes large enough for sufficient power to detect differences, associations, chi-square, and factor analyses.

  6. Sample Size and Statistical Power Calculation in Genetic Association Studies

    Directory of Open Access Journals (Sweden)

    Eun Pyo Hong

    2012-06-01

    A sample size with sufficient statistical power is critical to the success of genetic association studies to detect causal genes of human complex diseases. Genome-wide association studies require much larger sample sizes to achieve an adequate statistical power. We estimated the statistical power with increasing numbers of markers analyzed and compared the sample sizes that were required in case-control studies and case-parent studies. We computed the effective sample size and statistical power using Genetic Power Calculator. An analysis using a larger number of markers requires a larger sample size. Testing a single-nucleotide polymorphism (SNP) marker requires 248 cases, while testing 500,000 SNPs and 1 million markers requires 1,206 cases and 1,255 cases, respectively, under the assumption of an odds ratio of 2, 5% disease prevalence, 5% minor allele frequency, complete linkage disequilibrium (LD), 1:1 case/control ratio, and a 5% error rate in an allelic test. Under a dominant model, a smaller sample size is required to achieve 80% power than under other genetic models. We found that a much lower sample size was required with a strong effect size, common SNP, and increased LD. In addition, studying a common disease in a case-control study with a 1:4 case-control ratio is one way to achieve higher statistical power. We also found that case-parent studies require more samples than case-control studies. Although we have not covered all plausible cases in study design, the estimates of sample size and statistical power computed under various assumptions in this study may be useful to determine the sample size in designing a population-based genetic association study.

  7. Considerations in determining sample size for pilot studies.

    Science.gov (United States)

    Hertzog, Melody A

    2008-04-01

    There is little published guidance concerning how large a pilot study should be. General guidelines, for example using 10% of the sample required for a full study, may be inadequate for aims such as assessment of the adequacy of instrumentation or providing statistical estimates for a larger study. This article illustrates how confidence intervals constructed around a desired or anticipated value can help determine the sample size needed. Samples ranging in size from 10 to 40 per group are evaluated for their adequacy in providing estimates precise enough to meet a variety of possible aims. General sample size guidelines by type of aim are offered.

  8. Determining the sample size required for a community radon survey.

    Science.gov (United States)

    Chen, Jing; Tracy, Bliss L; Zielinski, Jan M; Moir, Deborah

    2008-04-01

    Radon measurements in homes and other buildings have been included in various community health surveys, often dealing with only a few hundred randomly sampled households. It would be interesting to know whether such a small sample size can adequately represent the radon distribution in a large community. An analysis of radon measurement data obtained from the Winnipeg case-control study with randomly sampled subsets of different sizes has shown that a sample size of one to several hundred can serve the survey purpose well.

  9. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey.
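
    A minimal sketch of the estimator this abstract describes, N = M / P, together with a delta-method confidence interval whose variance for P is inflated by an assumed respondent-driven-sampling design effect. The counts, proportion and design effect below are hypothetical, not the Harare figures.

    # Sketch of the multiplier estimator described in the abstract, N_hat = M / P,
    # with a delta-method confidence interval that inflates the variance of P by
    # an assumed RDS design effect. All numbers are hypothetical.
    from math import sqrt
    from scipy.stats import norm

    def multiplier_estimate(M, p_hat, n_survey, design_effect=2.0, confidence=0.95):
        N_hat = M / p_hat
        var_p = design_effect * p_hat * (1 - p_hat) / n_survey
        se_N = M * sqrt(var_p) / p_hat ** 2          # delta method for M / P
        z = norm.ppf(1 - (1 - confidence) / 2)
        return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

    # e.g. 600 unique objects distributed, 30% of 400 RDS respondents report receipt
    print(multiplier_estimate(M=600, p_hat=0.30, n_survey=400))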

  10. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  11. Targeted ocean sampling guidance for tropical cyclones

    Science.gov (United States)

    Chen, Sue; Cummings, James A.; Schmidt, Jerome M.; Sanabia, Elizabeth R.; Jayne, Steven R.

    2017-05-01

    A 3-D variational ocean data assimilation adjoint approach is used to examine the impact of ocean observations on coupled tropical cyclone (TC) model forecast error for three recent hurricanes: Isaac (2012), Hilda (2015), and Matthew (2016). In addition, this methodology is applied to develop an innovative ocean observation targeting tool validated using TC model simulations that assimilate ocean temperature observed by Airborne eXpendable Bathy Thermographs and Air-Launched Autonomous Micro-Observer floats. Comparison between the simulated targeted and real observation data assimilation impacts reveals a positive maximum mean linear correlation of 0.53 at 400-500 m, which implies some skill in the targeting application. Targeted ocean observation regions from these three hurricanes, however, show that the largest positive impacts in reducing the TC model forecast errors are sensitive to the initial prestorm ocean conditions such as the location and magnitude of preexisting ocean eddies, storm-induced ocean cold wake, and model track errors.

  12. Methods for sample size determination in cluster randomized trials.

    Science.gov (United States)

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
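
    The "simplest approach" referred to above can be sketched directly: compute the sample size assuming individual randomization and inflate it by the design effect 1 + (m - 1) * ICC, where m is the average cluster size and ICC the intracluster correlation. The effect size, cluster size and ICC below are illustrative assumptions.

    # The "simplest approach" mentioned in the abstract: take the sample size for
    # individual randomization and inflate it by the design effect
    # 1 + (m - 1) * ICC, where m is the average cluster size. Values illustrative.
    from math import ceil
    from scipy.stats import norm

    def crt_n_per_arm(effect=0.3, alpha=0.05, power=0.8, cluster_size=20, icc=0.05):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_individual = 2 * z ** 2 / effect ** 2                 # per arm, individual randomization
        deff = 1 + (cluster_size - 1) * icc                     # design effect
        n_inflated = n_individual * deff
        return ceil(n_inflated), ceil(n_inflated / cluster_size)  # individuals, clusters per arm

    print(crt_n_per_arm())   # -> (341, 18) individuals and clusters per arm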

  13. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.

  14. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Determining the effective sample size of a parametric prior.

    Science.gov (United States)

    Morita, Satoshi; Thall, Peter F; Müller, Peter

    2008-06-01

    We present a definition for the effective sample size of a parametric prior distribution in a Bayesian model, and propose methods for computing the effective sample size in a variety of settings. Our approach first constructs a prior chosen to be vague in a suitable sense, and updates this prior to obtain a sequence of posteriors corresponding to each of a range of sample sizes. We then compute a distance between each posterior and the parametric prior, defined in terms of the curvature of the logarithm of each distribution, and the posterior minimizing the distance defines the effective sample size of the prior. For cases where the distance cannot be computed analytically, we provide a numerical approximation based on Monte Carlo simulation. We provide general guidelines for application, illustrate the method in several standard cases where the answer seems obvious, and then apply it to some nonstandard settings.
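    A rough numerical illustration of the idea, not the authors' exact construction: match the curvature of the log prior density to that of a vague prior updated with m idealized observations, and take the minimizing m as the effective sample size. For a Beta(a, b) prior on a binomial proportion this recovers the conventional answer a + b. All names and the choice of vague prior below are our assumptions.

    ```python
    import numpy as np

    def log_density_curvature(a, b, p):
        # Negative second derivative of the log Beta(a, b) density at p
        return (a - 1.0) / p ** 2 + (b - 1.0) / (1.0 - p) ** 2

    def effective_sample_size(a, b, c=0.01, m_max=200):
        # Crude numerical ESS: find the number of (expected) observations m that, added to a
        # vague Beta(c, c) prior, best matches the curvature of the Beta(a, b) prior
        p_bar = a / (a + b)                      # prior mean, used as the evaluation point
        target = log_density_curvature(a, b, p_bar)
        best_m, best_dist = None, np.inf
        for m in range(0, m_max + 1):
            y = m * p_bar                        # expected successes among m observations
            dist = abs(log_density_curvature(c + y, c + m - y, p_bar) - target)
            if dist < best_dist:
                best_m, best_dist = m, dist
        return best_m

    print(effective_sample_size(3, 7))   # approximately 10 (= a + b), the conventional answer
    ```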

  16. Effects of Mesh Size on Sieved Samples of Corophium volutator

    Science.gov (United States)

    Crewe, Tara L.; Hamilton, Diana J.; Diamond, Antony W.

    2001-08-01

    Corophium volutator (Pallas), gammaridean amphipods found on intertidal mudflats, are frequently collected in mud samples sieved on mesh screens. However, mesh sizes used vary greatly among studies, raising the possibility that sampling methods bias results. The effect of using different mesh sizes on the resulting size-frequency distributions of Corophium was tested by collecting Corophium from mud samples with 0·5 and 0·25 mm sieves. More than 90% of Corophium less than 2 mm long passed through the larger sieve. A significantly smaller, but still substantial, proportion of 2-2·9 mm Corophium (30%) was also lost. Larger size classes were unaffected by mesh size. Mesh size significantly changed the observed size-frequency distribution of Corophium, and effects varied with sampling date. It is concluded that a 0·5 mm sieve is suitable for studies concentrating on adults, but to accurately estimate Corophium density and size-frequency distributions, a 0·25 mm sieve must be used.

  17. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    Effects of sample size on the second magnetization peak (SMP) in Bi2Sr2CaCu2O8+δ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  18. Planning Longitudinal Field Studies: Considerations in Determining Sample Size.

    Science.gov (United States)

    St.Pierre, Robert G.

    1980-01-01

    Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)

  19. Investigating the impact of sample size on cognate detection

    OpenAIRE

    List, Johann-Mattis

    2013-01-01

    In historical linguistics, the problem of cognate detection is traditionally approached within the framework of the comparative method. Since the method is usually carried out manually, it is very flexible regarding its input parameters. However, while the number of languages and the selection of comparanda are not important for the successful application of the method, the sample size of the comparanda is. In order to shed light on the impact of sample size on cognat...

  20. Sample size requirements for training high-dimensional risk predictors.

    Science.gov (United States)

    Dobbin, Kevin K; Song, Xiao

    2013-09-01

    A common objective of biomarker studies is to develop a predictor of patient survival outcome. Determining the number of samples required to train a predictor from survival data is important for designing such studies. Existing sample size methods for training studies use parametric models for the high-dimensional data and cannot handle a right-censored dependent variable. We present a new training sample size method that is non-parametric with respect to the high-dimensional vectors, and is developed for a right-censored response. The method can be applied to any prediction algorithm that satisfies a set of conditions. The sample size is chosen so that the expected performance of the predictor is within a user-defined tolerance of optimal. The central method is based on a pilot dataset. To quantify uncertainty, a method to construct a confidence interval for the tolerance is developed. Adequacy of the size of the pilot dataset is discussed. An alternative model-based version of our method for estimating the tolerance when no adequate pilot dataset is available is presented. The model-based method requires a covariance matrix be specified, but we show that the identity covariance matrix provides adequate sample size when the user specifies three key quantities. Application of the sample size method to two microarray datasets is discussed.

  1. Sample size matters: investigating the effect of sample size on a logistic regression debris flow susceptibility model

    Science.gov (United States)

    Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

    2013-06-01

    Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial datasets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In view of these results, we
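    A stripped-down sketch of the resampling experiment described above, using synthetic data and greedy forward selection by AIC in place of the authors' full stepwise procedure; the diversity of selected predictor sets is summarized with Shannon entropy. Data, sample sizes, and replicate counts are illustrative assumptions.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from collections import Counter

    rng = np.random.default_rng(0)

    # Synthetic "geofactors": six candidate predictors, only the first two actually matter
    N, p = 20000, 6
    X_all = rng.normal(size=(N, p))
    logit = -1.0 + 1.2 * X_all[:, 0] - 0.8 * X_all[:, 1]
    y_all = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    def forward_aic(X, y):
        # Greedy forward selection of predictors by AIC for a logistic regression
        selected, remaining = [], list(range(X.shape[1]))
        best_aic = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0).aic
        improved = True
        while improved and remaining:
            improved = False
            aics = {j: sm.Logit(y, sm.add_constant(X[:, selected + [j]])).fit(disp=0).aic
                    for j in remaining}
            j_best = min(aics, key=aics.get)
            if aics[j_best] < best_aic:
                best_aic, improved = aics[j_best], True
                selected.append(j_best)
                remaining.remove(j_best)
        return tuple(sorted(selected))

    def model_diversity(n, replicates=50):
        # Shannon entropy of the distribution of selected predictor sets across
        # `replicates` random samples of size n (lower entropy = more stable models)
        models = Counter(forward_aic(X_all[idx], y_all[idx])
                         for idx in (rng.choice(N, size=n, replace=False) for _ in range(replicates)))
        freqs = np.array(list(models.values())) / replicates
        return float(-(freqs * np.log(freqs)).sum())

    for n in (100, 500, 2000):
        print(n, round(model_diversity(n), 2))   # diversity typically shrinks as n grows
    ```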

  2. Sample size matters: investigating the effect of sample size on a logistic regression susceptibility model for debris flows

    Science.gov (United States)

    Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

    2014-02-01

    Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable and reproducible results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and they approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial data sets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In

  3. Sample Size Requirements for Traditional and Regression-Based Norms.

    Science.gov (United States)

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-04-01

    Test norms enable determining the position of an individual test taker in the group. The most frequently used approach to obtain test norms is traditional norming. Regression-based norming may be more efficient than traditional norming and is rapidly growing in popularity, but little is known about its technical properties. A simulation study was conducted to compare the sample size requirements for traditional and regression-based norming by examining the 95% interpercentile ranges for percentile estimates as a function of sample size, norming method, size of covariate effects on the test score, test length, and number of answer categories in an item. Provided the assumptions of the linear regression model hold in the data, for a subdivision of the total group into eight equal-size subgroups, we found that regression-based norming requires samples 2.5 to 5.5 times smaller than traditional norming. Sample size requirements are presented for each norming method, test length, and number of answer categories. We emphasize that additional research is needed to establish sample size requirements when the assumptions of the linear regression model are violated. © The Author(s) 2015.

  4. Size-controlled synthesis of biodegradable nanocarriers for targeted ...

    Indian Academy of Sciences (India)

    February 2016, pp. 69–77. © Indian Academy of Sciences. Size-controlled synthesis of biodegradable nanocarriers for targeted and controlled cancer drug delivery using salting out cation. Madasamy Hari Balakrishanan and Mariappan Rajan, Department of Natural Products Chemistry, School of Chemistry, ...

  5. Mini-batch stochastic gradient descent with dynamic sample sizes

    OpenAIRE

    Metel, Michael R.

    2017-01-01

    We focus on solving constrained convex optimization problems using mini-batch stochastic gradient descent. Dynamic sample size rules are presented which ensure a descent direction with high probability. Empirical results from two applications show superior convergence compared to fixed sample implementations.
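    The abstract does not spell out its dynamic sample size rules; the sketch below uses a common heuristic in the same spirit (a "norm test" that grows the mini-batch whenever the sampled gradient looks too noisy to be a descent direction with high probability), applied to a toy box-constrained least-squares problem with projected SGD. All problem data, thresholds, and step sizes are illustrative assumptions, not the paper's method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy constrained problem: min ||Ax - b||^2 / (2n) subject to x in the box [-1, 1]^d
    n, d = 5000, 10
    A = rng.normal(size=(n, d))
    x_true = np.clip(rng.normal(size=d), -1, 1)
    b = A @ x_true + 0.1 * rng.normal(size=n)

    def sample_gradients(x, batch_idx):
        # Per-example gradients of 0.5 * (a_i^T x - b_i)^2 for the mini-batch
        resid = A[batch_idx] @ x - b[batch_idx]
        return A[batch_idx] * resid[:, None]

    x = np.zeros(d)
    batch_size, step = 10, 0.01
    for it in range(200):
        idx = rng.choice(n, size=batch_size, replace=False)
        grads = sample_gradients(x, idx)
        g = grads.mean(axis=0)
        # Norm test (threshold 1): if the estimated variance of the averaged gradient exceeds
        # its squared norm, the sampled direction may not be a descent direction -- grow the batch
        var_of_mean = grads.var(axis=0, ddof=1).sum() / batch_size
        if var_of_mean > np.dot(g, g):
            batch_size = min(n, int(1.5 * batch_size) + 1)
        x = np.clip(x - step * g, -1.0, 1.0)   # projected SGD step onto the box constraint

    print("final batch size:", batch_size)
    print("distance to solution:", np.linalg.norm(x - x_true))
    ```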

  6. [Atypical agents of wound infection and targeted samples].

    Science.gov (United States)

    Kucisec-Tepes, Nastja

    2012-10-01

    All open wounds are primarily contaminated and subsequently colonized by microorganisms, predominantly bacteria. Only about 30% of chronic wounds are also infected. Factors which favor the development of infection are the following: large quantity of bacteria, presence of virulence factors, their quantity and number, predominantly the synergy of aerobic and anaerobic bacteria, and formation of biofilm. Common agents of infection of acute and chronic wounds are Staphylococcus aureus, MRSA, Streptococcus beta-haemolyticus, Pseudomonas aeruginosa, Bacteroides spp., and Candida albicans. The difference between acute and chronic wounds is in the predominance of individual agents, with an observation that Staphylococcus aureus is predominant in both cases. Atypical agents of chronic wound infection are rare, unusual, not found in the area in which we live, and not proven by standard microbiological methods; molecular methods are needed instead. They are predominantly opportunists, varying in the expression of virulence factors, or they have changed their phenotype characteristics and are not the agents of primary wound infections. They are the agents of secondary infections. Atypical agents of chronic wound infection are diverse, from the anaerobe group, Peptoniphilus spp., Anaerococcus spp., Bacteroides ureolyticus, Finegoldia magna, the group of gram-positive rods of the Corynebacterium genus, the group of bacteria from the aquatic environment, Mycobacterium fortuitum complex, and Vibrio alginolyticus. The targeted samples are the biopsy sample as the "gold standard" and/or aspirate, when a significant quantity of exudate is present. Targeted samples are obligatory when there is a progression and decomposition of the base of the wound, increase in the size or depth of the wound, isolation of multiresistant microbes, or absence of clinical response to empirical antimicrobial therapy. In the diagnosis of opportunistic pathogens or atypical agents of chronic wound infection, it is

  7. Sample size formulae for the Bayesian continual reassessment method.

    Science.gov (United States)

    Cheung, Ying Kuen

    2013-01-01

    In the planning of a dose finding study, a primary design objective is to maintain high accuracy in terms of the probability of selecting the maximum tolerated dose. While numerous dose finding methods have been proposed in the literature, concrete guidance on sample size determination is lacking. With a motivation to provide quick and easy calculations during trial planning, we present closed form formulae for sample size determination associated with the use of the Bayesian continual reassessment method (CRM). We examine the sampling distribution of a nonparametric optimal design and exploit it as a proxy to empirically derive an accuracy index of the CRM using linear regression. We apply the formulae to determine the sample size of a phase I trial of PTEN-long in pancreatic cancer patients and demonstrate that the formulae give results very similar to simulation. The formulae are implemented by an R function 'getn' in the package 'dfcrm'. The results are developed for the Bayesian CRM and should be validated by simulation when used for other dose finding methods. The analytical formulae we propose give quick and accurate approximation of the required sample size for the CRM. The approach used to derive the formulae can be applied to obtain sample size formulae for other dose finding methods.

  8. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
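    The quoted reductions can be reproduced under one common set of assumptions: compound symmetry with correlation ρ, the baseline measurement used as a covariate, and the mean of k follow-up measures as the outcome, in which case the variance relative to a single unadjusted measurement is (1 + (k − 1)ρ)/k − ρ²; maximizing over ρ gives the conservative value. This formula is our reading of the setting, not taken verbatim from the paper.

    ```python
    import numpy as np

    def variance_factor(rho, k):
        # Assumed formula: variance of the baseline-adjusted mean of k follow-up measures,
        # relative to a single measurement, under compound symmetry with correlation rho
        return (1 + (k - 1) * rho) / k - rho ** 2

    rhos = np.linspace(0, 1, 10001)
    for k in (2, 3, 4):
        worst = variance_factor(rhos, k).max()            # most conservative correlation value
        print(k, f"{100 * (1 - worst):.0f}%")             # prints 44%, 56%, 61%, as in the abstract
    ```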

  9. Uncertainty of the sample size reduction step in pesticide residue analysis of large-sized crops.

    Science.gov (United States)

    Omeroglu, P Yolci; Ambrus, Á; Boyacioglu, D; Majzik, E Solymosne

    2013-01-01

    To estimate the uncertainty of the sample size reduction step, each unit in laboratory samples of papaya and cucumber was cut into four segments in longitudinal directions and two opposite segments were selected for further homogenisation while the other two were discarded. Jackfruit was cut into six segments in longitudinal directions, and all segments were kept for further analysis. To determine the pesticide residue concentrations in each segment, they were individually homogenised and analysed by chromatographic methods. One segment from each unit of the laboratory sample was drawn randomly to obtain 50 theoretical sub-samples with an MS Office Excel macro. The residue concentrations in a sub-sample were calculated from the weight of segments and the corresponding residue concentration. The coefficient of variation calculated from the residue concentrations of 50 sub-samples gave the relative uncertainty resulting from the sample size reduction step. The sample size reduction step, which is performed by selecting one longitudinal segment from each unit of the laboratory sample, resulted in relative uncertainties of 17% and 21% for field-treated jackfruits and cucumber, respectively, and 7% for post-harvest treated papaya. The results demonstrated that sample size reduction is an inevitable source of uncertainty in pesticide residue analysis of large-sized crops. The post-harvest treatment resulted in a lower variability because the dipping process leads to a more uniform residue concentration on the surface of the crops than does the foliar application of pesticides.
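    A schematic re-implementation of the resampling described above: draw one segment per unit, form 50 theoretical sub-samples, and take the CV across sub-samples as the relative uncertainty of the size reduction step. The residue data and segment weights below are simulated placeholders; the real study used measured values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative data: 10 fruit units, 4 longitudinal segments each,
    # with segment weights (kg) and residue concentrations (mg/kg)
    units = 10
    weights = rng.uniform(0.4, 0.6, size=(units, 4))
    residues = rng.lognormal(mean=0.0, sigma=0.4, size=(units, 4))

    def subsample_cv(weights, residues, n_subsamples=50):
        # CV of residue concentrations across theoretical sub-samples, each formed by
        # randomly keeping one segment per unit (weighted mean concentration)
        concs = []
        for _ in range(n_subsamples):
            pick = rng.integers(0, weights.shape[1], size=units)
            w = weights[np.arange(units), pick]
            r = residues[np.arange(units), pick]
            concs.append(np.sum(w * r) / np.sum(w))
        concs = np.array(concs)
        return concs.std(ddof=1) / concs.mean()

    print(f"relative uncertainty of the size reduction step: {100 * subsample_cv(weights, residues):.0f}%")
    ```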

  10. Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty.

    Science.gov (United States)

    Anderson, Samantha F; Kelley, Ken; Maxwell, Scott E

    2017-11-01

    The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.

  11. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  12. Max control chart with adaptive sample sizes for jointly monitoring process mean and standard deviation

    OpenAIRE

    Ching Chun Huang

    2014-01-01

    This paper develops the two-state and three-state adaptive sample size control schemes based on the Max chart to simultaneously monitor the process mean and standard deviation. Since the Max chart is a single-variable control chart where only one plotting statistic is needed, the design and operation of adaptive sample size schemes for this chart will be simpler than those for the joint X̄ and S charts. Three types of processes including on-target initial, off-target initial and steady...

  13. Radiation Target Area Sample Environmental Chamber (RTASEC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Payload Systems Inc. proposes the Radiation Target Area Sample Environmental Chamber (RTASEC) as an innovative approach enabling radiobiologists to investigate the...

  14. Clinical trials with nested subgroups: Analysis, sample size determination and internal pilot studies.

    Science.gov (United States)

    Placzek, Marius; Friede, Tim

    2017-01-01

    The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for the analysis and sample size determination. First, we consider the joint distribution of standardized test statistics that correspond to each (sub)population. We derive multivariate exact distributions where possible, providing approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainties about nuisance parameters which are needed for sample size calculations make the study prone to misspecifications. We discuss how a sample size review can be performed in order to make the study more robust. To this end, we implement an internal pilot study design where the variances and prevalences of the subgroups are reestimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not inflate the type I error significantly and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and attain a lower boundary for the size of the internal pilot study.

  15. Sample size considerations for clinical research studies in nuclear cardiology.

    Science.gov (United States)

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
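    For the simplest of the methods listed (a two-sample t-test on a continuous outcome), the calculation can be done directly with statsmodels; the effect size, alpha, and power below are illustrative choices, not values from the article.

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Per-group n to detect a standardized difference (Cohen's d) of 0.5
    # with two-sided alpha = 0.05 and 80% power
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                       ratio=1.0, alternative='two-sided')
    print(round(n_per_group))   # roughly 64 per group
    ```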

  16. Sample size for collecting germplasms–a polyploid model with ...

    Indian Academy of Sciences (India)

    Numerous expressions/results developed for germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to ...

  17. Sample size for collecting germplasms – a polyploid model with ...

    Indian Academy of Sciences (India)

    Unknown

    germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate.

  18. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  19. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  20. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance

    OpenAIRE

    Timothy M Morgan; Case, L. Douglas

    2013-01-01

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time.

  1. Sample Size Determinations for the Two Rater Kappa Statistic.

    Science.gov (United States)

    Flack, Virginia F.; And Others

    1988-01-01

    A method is presented for determining sample size that will achieve a pre-specified bound on confidence interval width for the interrater agreement measure "kappa." The same results can be used when a pre-specified power is desired for testing hypotheses about the value of kappa. (Author/SLD)

  2. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    Science.gov (United States)

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-11-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation.

  3. Mongoloid-Caucasoid Differences in Brain Size from Military Samples.

    Science.gov (United States)

    Rushton, J. Philippe; And Others

    1991-01-01

    Calculation of cranial capacities for the means from 4 Mongoloid and 20 Caucasoid samples (raw data from 57,378 individuals in 1978) found larger brain size for Mongoloids, a finding discussed in evolutionary terms. The conclusion is disputed by L. Willerman but supported by J. P. Rushton. (SLD)

  4. Sample size and power calculation for molecular biology studies.

    Science.gov (United States)

    Jung, Sin-Ho

    2010-01-01

    Sample size calculation is a critical procedure when designing a new biological study. In this chapter, we consider molecular biology studies generating huge dimensional data. Microarray studies are typical examples, so that we state this chapter in terms of gene microarray data, but the discussed methods can be used for design and analysis of any molecular biology studies involving high-dimensional data. In this chapter, we discuss sample size calculation methods for molecular biology studies when the discovery of prognostic molecular markers is performed by accurately controlling false discovery rate (FDR) or family-wise error rate (FWER) in the final data analysis. We limit our discussion to the two-sample case.

  5. Aerosol Sampling Bias from Differential Electrostatic Charge and Particle Size

    Science.gov (United States)

    Jayjock, Michael Anthony

    Lack of reliable epidemiological data on long term health effects of aerosols is due in part to inadequacy of sampling procedures and the attendant doubt regarding the validity of the concentrations measured. Differential particle size has been widely accepted and studied as a major potential biasing effect in the sampling of such aerosols. However, relatively little has been done to study the effect of electrostatic particle charge on aerosol sampling. The objective of this research was to investigate the possible biasing effects of differential electrostatic charge, particle size and their interaction on the sampling accuracy of standard aerosol measuring methodologies. Field studies were first conducted to determine the levels and variability of aerosol particle size and charge at two manufacturing facilities making acrylic powder. The field work showed that the particle mass median aerodynamic diameter (MMAD) varied by almost an order of magnitude (4-34 microns) while the aerosol surface charge was relatively stable (0.6-0.9 µC/m²). The second part of this work was a series of laboratory experiments in which aerosol charge and MMAD were manipulated in a 2^n factorial design with the percentage of sampling bias for various standard methodologies as the dependent variable. The experiments used the same friable acrylic powder studied in the field work plus two size populations of ground quartz as a nonfriable control. Despite some ill conditioning of the independent variables due to experimental difficulties, statistical analysis has shown aerosol charge (at levels comparable to those measured in workroom air) is capable of having a significant biasing effect. Physical models consistent with the sampling data indicate that the level and bipolarity of the aerosol charge are determining factors in the extent and direction of the bias.

  6. Effects of sample size on KERNEL home range estimates

    Science.gov (United States)

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.

  7. A Simulated Experiment for Sampling Soil Microarthropods to Reduce Sample Size

    OpenAIRE

    Tamura, Hiroshi

    1987-01-01

    An experiment was conducted to examine a possibility of reducing the necessary sample size in a quantitative survey on soil microarthropods, using soybeans instead of animals. An artificially provided, intensely aggregated distribution pattern of soybeans was easily transformed to the random pattern by stirring the substrate, which is soil in a large cardboard box. This enabled the necessary sample size to be greatly reduced without sacrificing the statistical reliability. A new practical met...

  8. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological variation...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...

  9. Sample size determination for longitudinal designs with binary response.

    Science.gov (United States)

    Kapur, Kush; Bhaumik, Runa; Tang, X Charlene; Hur, Kwan; Reda, Domenic J; Bhaumik, Dulal K

    2014-09-28

    In this article, we develop appropriate statistical methods for determining the required sample size while comparing the efficacy of an intervention to a control with repeated binary response outcomes. Our proposed methodology incorporates the complexity of the hierarchical nature of underlying designs and provides solutions when varying attrition rates are present over time. We explore how the between-subject variability and attrition rates jointly influence the computation of sample size formula. Our procedure also shows how efficient estimation methods play a crucial role in power analysis. A practical guideline is provided when information regarding individual variance component is unavailable. The validity of our methods is established by extensive simulation studies. Results are illustrated with the help of two randomized clinical trials in the areas of contraception and insomnia. Copyright © 2014 John Wiley & Sons, Ltd.

  10. A power analysis for fidelity measurement sample size determination.

    Science.gov (United States)

    Stokes, Lynne; Allor, Jill H

    2016-03-01

    The importance of assessing fidelity has been emphasized recently with increasingly sophisticated definitions, assessment procedures, and integration of fidelity data into analyses of outcomes. Fidelity is often measured through observation and coding of instructional sessions either live or by video. However, little guidance has been provided about how to determine the number of observations needed to precisely measure fidelity. We propose a practical method for determining a reasonable sample size for fidelity data collection when fidelity assessment requires observation. The proposed methodology is based on consideration of the power of tests of the treatment effect on the outcome itself, as well as of the relationship between fidelity and outcome. It makes use of the methodology of probability sampling from a finite population, because the fidelity parameters of interest are estimated over a specific, limited time frame using a sample. For example, consider a fidelity measure defined as the number of minutes of exposure to a treatment curriculum during the 36 weeks of the study. In this case, the finite population is the 36 sessions, the parameter (number of minutes over the entire 36 sessions) is a total, and the sample is the observed sessions. Software for the sample size calculation is provided. (c) 2016 APA, all rights reserved.
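    One concrete version of the finite-population reasoning described above: treat the 36 sessions as the population and choose the number of observed sessions so that the estimated total minutes of exposure has a desired margin of error. The function name, per-session SD, and margin below are illustrative assumptions; only the standard finite population correction is used.

    ```python
    import math

    def sessions_to_observe(N, session_sd, margin, z=1.96):
        # Number of sessions to sample (SRS without replacement) so that the estimated total
        # exposure over N sessions has half-width <= margin, using the finite population
        # correction: Var(total) = N^2 * (1 - n/N) * S^2 / n
        n0 = (z * N * session_sd / margin) ** 2   # sample size ignoring the correction
        return math.ceil(n0 / (1 + n0 / N))       # apply the finite population correction

    # 36 sessions, SD of 8 minutes of exposure per session, +/- 60 minutes margin on the total
    print(sessions_to_observe(N=36, session_sd=8.0, margin=60.0))   # about 26 sessions
    ```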

  11. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    Effects of sample size on the second magnetization peak (SMP) in Bi2Sr2CaCu2O8+δ crystals are ... a termination of the measured transition line at Tl, typically 17–20 K (see figure 1). The obscuring and eventual disappearance of the SMP with decreasing temperatures has been ...

  12. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    OpenAIRE

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height a...

  13. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  15. MetSizeR: selecting the optimal sample size for metabolomic studies using an analysis based approach

    OpenAIRE

    Nyamundanda, Gift; Gormley, Isobel Claire; Fan, Yue; Gallagher, William M.; Brennan, Lorraine

    2013-01-01

    Background: Determining sample sizes for metabolomic experiments is important but due to the complexity of these experiments, there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, currently existing sample size estimation approaches which rely on pilot data can not be applied. Results: In this article, an analysis based approach called MetSizeR is developed to estimate sample size for metabolomic experime...

  16. Solution-based targeted genomic enrichment for precious DNA samples

    Directory of Open Access Journals (Sweden)

    Shearer Aiden

    2012-05-01

    Background: Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results: We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions: This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

  17. It's in the Sample: The Effects of Sample Size and Sample Diversity on the Breadth of Inductive Generalization

    Science.gov (United States)

    Lawson, Chris A.; Fisher, Anna V.

    2011-01-01

    Developmental studies have provided mixed evidence with regard to the question of whether children consider sample size and sample diversity in their inductive generalizations. Results from four experiments with 105 undergraduates, 105 school-age children (M = 7.2 years), and 105 preschoolers (M = 4.9 years) showed that preschoolers made a higher…

  18. Radiation inactivation target size of rat adipocyte glucose transporter

    Energy Technology Data Exchange (ETDEWEB)

    Jung, C.Y.; Jacobs, D.B.; Berenski, C.J.; Spangler, R.A.

    1987-05-01

    In situ assembly states of rat adipocyte glucose transport protein in plasma membrane (PM) and in microsomal pool (MM) were assessed by measuring the target size (TS) of D-glucose-sensitive, cytochalasin B binding activity. High energy radiation inactivated the binding in both PM and MM by reducing the total capacity of the binding (B_T) without affecting the dissociation constant (K_D). The reduction in B_T as a function of radiation dose was analyzed based on classical target theory, from which TS was calculated. TS in the PM of insulin-treated adipocytes was 58 kDa. TS in the MM of noninsulin-treated and insulin-treated adipocytes were 112 and 109 kDa, respectively. With MM, however, inactivation data showed anomalously low radiation sensitivities at low radiation doses, showing a shoulder in the semilog plots, which may be due to an interaction with a radiation-sensitive inhibitor. With these results, they propose the following model: the adipocyte glucose transporter, while it exists as a monomer (T) in PM, occurs in MM either as a homodimer (T_2) or as a heterodimer (TX) with a protein X of a similar size. These dimers (T_2 or TX) in MM, furthermore, may form a multi-molecular assembly with another, large (300-400 kDa) protein Y, and insulin increases this assembly formation. These putative, transporter-associated proteins X and Y may play an important role in control of transporter distribution between PM and MM, particularly in response to insulin.

  19. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    Science.gov (United States)

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as it is expected because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.

    Science.gov (United States)

    Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald

    2017-04-01

    The Mercury Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury in thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (exposure to space weathering processes), and proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and five size fractions: 60%) causes a 'flattening' of the spectrum, with reduced reflectance in the Reststrahlen Band region (RB) as much as 30% in comparison to samples that are dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift in the Christiansen Feature (CF - the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra mostly closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each

  1. Variance estimation, design effects, and sample size calculations for respondent-driven sampling.

    Science.gov (United States)

    Salganik, Matthew J

    2006-11-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling.
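    A minimal sketch of the closing recommendation (a design effect of about 2 relative to simple random sampling) for estimating a prevalence with a given margin of error; the prevalence and margin values are illustrative, and the function name is ours.

    ```python
    import math
    from scipy.stats import norm

    def rds_sample_size(p, margin, deff=2.0, alpha=0.05):
        # Sample size for estimating a prevalence p to within +/- margin, inflating the
        # simple-random-sampling size by a design effect (about 2 for RDS, per the paper)
        z = norm.ppf(1 - alpha / 2)
        n_srs = z ** 2 * p * (1 - p) / margin ** 2
        return math.ceil(deff * n_srs)

    print(rds_sample_size(p=0.20, margin=0.05))   # about 492 respondents
    ```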

  2. Sample size requirement in analytical studies for similarity assessment.

    Science.gov (United States)

    Chow, Shein-Chung; Song, Fuyu; Bai, He

    2017-01-01

    For the assessment of biosimilar products, the FDA recommends a stepwise approach for obtaining the totality-of-the-evidence for assessing biosimilarity between a proposed biosimilar product and its corresponding innovative biologic product. The stepwise approach starts with analytical studies for assessing similarity in critical quality attributes (CQAs), which are relevant to clinical outcomes at various stages of the manufacturing process. For CQAs that are the most relevant to clinical outcomes, the FDA requires an equivalence test be performed for similarity assessment based on an equivalence acceptance criterion (EAC) that is obtained using a single test value of some selected reference lots. In practice, we often have extremely imbalanced numbers of reference and test lots available for the establishment of EAC. In this case, to assist the sponsors, the FDA proposed an idea for determining the number of reference lots and the number of test lots required in order not to have imbalanced sample sizes when establishing EAC for the equivalence test based on extensive simulation studies. Along this line, this article not only provides statistical justification of Dong, Tsong, and Weng's proposal, but also proposes an alternative method for sample size requirement for the Tier 1 equivalence test.

  3. Polarimetric LIDAR with FRI sampling for target characterization

    Science.gov (United States)

    Wijerathna, Erandi; Creusere, Charles D.; Voelz, David; Castorena, Juan

    2017-09-01

    Polarimetric LIDAR is a significant tool for current remote sensing applications. In addition, measurement of the full waveform of the LIDAR echo provides improved ranging and target discrimination, although, data storage volume in this approach can be problematic. In the work presented here, we investigated the practical issues related to the implementation of a full waveform LIDAR system to identify polarization characteristics of multiple targets within the footprint of the illumination beam. This work was carried out on a laboratory LIDAR testbed that features a flexible arrangement of targets and the ability to change the target polarization characteristics. Targets with different retardance characteristics were illuminated with a linearly polarized laser beam and the return pulse intensities were analyzed by rotating a linear analyzer polarizer in front of a high-speed detector. Additionally, we explored the applicability and the limitations of applying a sparse sampling approach based on Finite Rate of Innovations (FRI) to compress and recover the characteristic parameters of the pulses reflected from the targets. The pulse parameter values extracted by the FRI analysis were accurate and we successfully distinguished the polarimetric characteristics and the range of multiple targets at different depths within the same beam footprint. We also demonstrated the recovery of an unknown target retardance value from the echoes by applying a Mueller matrix system model.

  4. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    Science.gov (United States)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by

  5. Relativistic effects on galaxy redshift samples due to target selection

    Science.gov (United States)

    Alam, Shadab; Croft, Rupert A. C.; Ho, Shirley; Zhu, Hongyu; Giusarma, Elena

    2017-10-01

    In a galaxy redshift survey, the objects to be targeted for spectra are selected from a photometrically observed sample. The observed magnitudes and colours of galaxies in this parent sample will be affected by their peculiar velocities, through relativistic Doppler and relativistic beaming effects. In this paper, we compute the resulting expected changes in galaxy photometry. The magnitudes of the relativistic effects are a function of redshift, stellar mass, galaxy velocity and velocity direction. We focus on the CMASS sample from the Sloan Digital Sky Survey (SDSS) and Baryon Oscillation Spectroscopic Survey (BOSS), which is selected on the basis of colour and magnitude. We find that 0.10 per cent of the sample (∼585 galaxies) has been scattered into the targeted region of colour-magnitude space by relativistic effects, and conversely 0.09 per cent of the sample (∼532 galaxies) has been scattered out. Observational consequences of these effects include an asymmetry in clustering statistics, which we explore in a companion paper. Here, we compute a set of weights that can be used to remove the effect of modulations introduced into the density field inferred from a galaxy sample. We conclude by investigating the possible effects of these relativistic modulations on large-scale clustering of the galaxy sample.

  6. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Given the form of the CI formulas, precision continued to improve beyond that point, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
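
    The distinction drawn above between sample fluctuation and measurement fluctuation can be made concrete with a two-level variance sketch: the variance of the overall mean is sd_between²/n + sd_within²/(n·m) for n GPs and m measurements per GP. The variance components, the z-value, and the target half-width below are placeholder assumptions, not the study's estimates.

      import math

      def ci_halfwidth_hours(n_gps, m_per_gp, sd_between=5.0, sd_within=25.0, z=1.96):
          """Approximate CI half-width (hours/week) for the mean of a time-sampling
          estimate, combining between-GP and within-GP (measurement) fluctuation.
          Variance components are illustrative; use z = 1.645 for a one-sided CI."""
          var_mean = sd_between**2 / n_gps + sd_within**2 / (n_gps * m_per_gp)
          return z * math.sqrt(var_mean)

      def gps_needed(target_halfwidth, m_per_gp, **kw):
          """Smallest number of GPs giving a CI half-width <= target."""
          n = 2
          while ci_halfwidth_hours(n, m_per_gp, **kw) > target_halfwidth:
              n += 1
          return n

      # More measurements per GP shrink the within-person term, so fewer GPs are needed.
      for m in (5, 15, 45):   # e.g., sparse, per-3-h, and hourly measurements over a week
          print(m, "measurements/GP ->", gps_needed(1.0, m), "GPs for a 1-hour half-width")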

  7. Sample Size of One: Operational Qualitative Analysis in the Classroom

    Directory of Open Access Journals (Sweden)

    John Hoven

    2015-10-01

    Full Text Available Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one. These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.

  8. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    /CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study...... of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced...

  9. Laser remote sensing of backscattered light from a target sample

    Science.gov (United States)

    Sweatt, William C [Albuquerque, NM; Williams, John D [Albuquerque, NM

    2008-02-26

    A laser remote sensing apparatus comprises a laser to provide collimated excitation light at a wavelength; a sensing optic, comprising at least one optical element having a front receiving surface to focus the received excitation light onto a back surface comprising a target sample and wherein the target sample emits a return light signal that is recollimated by the front receiving surface; a telescope for collecting the recollimated return light signal from the sensing optic; and a detector for detecting and spectrally resolving the return light signal. The back surface further can comprise a substrate that absorbs the target sample from an environment. For example, the substrate can be a SERS substrate comprising a roughened metal surface. The return light signal can be a surface-enhanced Raman signal or laser-induced fluorescence signal. For fluorescence applications, the return signal can be enhanced by about 10⁵, solely due to recollimation of the fluorescence return signal. For SERS applications, the return signal can be enhanced by 10⁹ or more, due both to recollimation and to structuring of the SERS substrate so that the incident laser and Raman scattered fields are in resonance with the surface plasmons of the SERS substrate.

  10. Using scenario tree modelling for targeted herd sampling to substantiate freedom from disease.

    Science.gov (United States)

    Blickenstorfer, Sarah; Schwermer, Heinzpeter; Engels, Monika; Reist, Martin; Doherr, Marcus G; Hadorn, Daniela C

    2011-08-16

    In order to optimise the cost-effectiveness of active surveillance to substantiate freedom from disease, a new approach using targeted sampling of farms was developed and applied on the example of infectious bovine rhinotracheitis (IBR) and enzootic bovine leucosis (EBL) in Switzerland. Relevant risk factors (RF) for the introduction of IBR and EBL into Swiss cattle farms were identified and their relative risks defined based on literature review and expert opinions. A quantitative model based on the scenario tree method was subsequently used to calculate the required sample size of a targeted sampling approach (TS) for a given sensitivity. We compared the sample size with that of a stratified random sample (sRS) with regard to efficiency. The required sample sizes to substantiate disease freedom were 1,241 farms for IBR and 1,750 farms for EBL to detect 0.2% herd prevalence with 99% sensitivity. Using conventional sRS, the required sample sizes were 2,259 farms for IBR and 2,243 for EBL. Considering the additional administrative expenses required for the planning of TS, the risk-based approach was still more cost-effective than a sRS (40% reduction on the full survey costs for IBR and 8% for EBL) due to the considerable reduction in sample size. As the model depends on RF selected through literature review and was parameterised with values estimated by experts, it is subject to some degree of uncertainty. Nevertheless, this approach provides the veterinary authorities with a promising tool for future cost-effective sampling designs.
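
    For orientation, the sketch below gives the textbook freedom-from-disease sample size for simple random sampling of herds, which is the baseline the scenario-tree approach improves on; it produces figures of the same order as the sRS numbers quoted above. The function name and the assumption of a perfect herd-level test are illustrative; the paper's model additionally weights herds by risk factors and accounts for test characteristics, which is what yields the smaller targeted sample sizes.

      import math

      def herds_to_sample(design_prevalence, target_sensitivity, herd_test_se=1.0):
          """Herds to sample so that, if herd prevalence were at the design level,
          at least one infected herd would be detected with the target probability
          (simple random sampling, large-population approximation)."""
          p_positive = design_prevalence * herd_test_se
          return math.ceil(math.log(1.0 - target_sensitivity) / math.log(1.0 - p_positive))

      # 0.2% design prevalence, 99% surveillance sensitivity, perfect herd test:
      print(herds_to_sample(0.002, 0.99))   # about 2,300, the same order as the sRS figures above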

  11. Using scenario tree modelling for targeted herd sampling to substantiate freedom from disease

    Directory of Open Access Journals (Sweden)

    Reist Martin

    2011-08-01

    Full Text Available Abstract Background In order to optimise the cost-effectiveness of active surveillance to substantiate freedom from disease, a new approach using targeted sampling of farms was developed and applied on the example of infectious bovine rhinotracheitis (IBR) and enzootic bovine leucosis (EBL) in Switzerland. Relevant risk factors (RF) for the introduction of IBR and EBL into Swiss cattle farms were identified and their relative risks defined based on literature review and expert opinions. A quantitative model based on the scenario tree method was subsequently used to calculate the required sample size of a targeted sampling approach (TS) for a given sensitivity. We compared the sample size with that of a stratified random sample (sRS) with regard to efficiency. Results The required sample sizes to substantiate disease freedom were 1,241 farms for IBR and 1,750 farms for EBL to detect 0.2% herd prevalence with 99% sensitivity. Using conventional sRS, the required sample sizes were 2,259 farms for IBR and 2,243 for EBL. Considering the additional administrative expenses required for the planning of TS, the risk-based approach was still more cost-effective than a sRS (40% reduction on the full survey costs for IBR and 8% for EBL) due to the considerable reduction in sample size. Conclusions As the model depends on RF selected through literature review and was parameterised with values estimated by experts, it is subject to some degree of uncertainty. Nevertheless, this approach provides the veterinary authorities with a promising tool for future cost-effective sampling designs.

  12. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (interquartile range, IQR) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided targeted sample size on trial registries and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  13. 40 CFR 761.243 - Standard wipe sample method and size.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (2010-07-01 edition). Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples. § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas...

  14. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
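
    A rough sketch of the classical cost-optimal allocation that underlies formulas of this kind is shown below; it uses the normal-approximation test of a mean difference, whereas the paper works with Yuen's trimmed-mean test, so the function, its arguments, and the example numbers are illustrative assumptions rather than the authors' formulas.

      import math
      from scipy.stats import norm

      def optimal_allocation(sd1, sd2, cost1, cost2, delta, alpha=0.05, power=0.80):
          """Cost-minimizing two-group allocation (normal approximation).
          Classic result: n1/n2 = (sd1/sd2) * sqrt(cost2/cost1); sizes are then
          scaled so the test of a mean difference `delta` has the requested power."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          ratio = (sd1 / sd2) * math.sqrt(cost2 / cost1)     # n1 / n2
          var_target = (delta / z) ** 2                      # required variance of the difference
          n2 = (sd1 ** 2 / ratio + sd2 ** 2) / var_target
          n1 = ratio * n2
          return math.ceil(n1), math.ceil(n2)

      # Sampling from group 2 is four times as expensive, so it gets fewer subjects.
      print(optimal_allocation(sd1=10, sd2=20, cost1=1, cost2=4, delta=8))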

  15. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a

  16. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
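
    As a point of reference, the familiar normal-approximation formula that such power analyses start from is sketched below, with a pilot estimate of the common standard deviation plugged in; the values are illustrative, and a t-based or noncentral-t calculation of the kind the paper discusses gives slightly larger, more accurate numbers.

      import math
      from scipy.stats import norm

      def n_per_group(pilot_sd, min_difference, alpha=0.05, power=0.80):
          """Normal-approximation sample size per group for a two-sample comparison
          of means, using a pilot estimate of the common SD (illustrative only)."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * (z * pilot_sd / min_difference) ** 2)

      print(n_per_group(pilot_sd=12.0, min_difference=5.0))   # placeholder values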

  17. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  18. (Sample) size matters! An examination of sample size from the SPRINT trial study to prospectively evaluate reamed intramedullary nails in patients with tibial fractures

    NARCIS (Netherlands)

    Bhandari, Mohit; Tornetta, Paul; Rampersad, Shelly-Ann; Sprague, Sheila; Heels-Ansdell, Diane; Sanders, David W.; Schemitsch, Emil H.; Swiontkowski, Marc; Walter, Stephen; Guyatt, Gordon; Buckingham, Lisa; Leece, Pamela; Viveiros, Helena; Mignott, Tashay; Ansell, Natalie; Sidorkewicz, Natalie; Agel, Julie; Bombardier, Claire; Berlin, Jesse A.; Bosse, Michael; Browner, Bruce; Gillespie, Brenda; O'Brien, Peter; Poolman, Rudolf; Macleod, Mark D.; Carey, Timothy; Leitch, Kellie; Bailey, Stuart; Gurr, Kevin; Konito, Ken; Bartha, Charlene; Low, Isolina; MacBean, Leila V.; Ramu, Mala; Reiber, Susan; Strapp, Ruth; Tieszer, Christina; Kreder, Hans; Stephen, David J. G.; Axelrod, Terry S.; Yee, Albert J. M.; Richards, Robin R.; Finkelstein, Joel; Holtby, Richard M.; Cameron, Hugh; Cameron, John; Gofton, Wade; Murnaghan, John; Schatztker, Joseph; Bulmer, Beverly; Conlan, Lisa; Laflamme, Yves; Berry, Gregory; Beaumont, Pierre; Ranger, Pierre; Laflamme, Georges-Henri; Jodoin, Alain; Renaud, Eric; Gagnon, Sylvain; Maurais, Gilles; Malo, Michel; Fernandes, Julio; Latendresse, Kim; Poirier, Marie-France; Daigneault, Gina; McKee, Michael M.; Waddell, James P.; Bogoch, Earl R.; Daniels, Timothy R.; McBroom, Robert R.; Vicente, Milena R.; Storey, Wendy; Wild, Lisa M.; McCormack, Robert; Perey, Bertrand; Goetz, Thomas J.; Pate, Graham; Penner, Murray J.; Panagiotopoulos, Kostas; Pirani, Shafique; Dommisse, Ian G.; Loomer, Richard L.; Stone, Trevor; Moon, Karyn; Zomar, Mauri; Webb, Lawrence X.; Teasdall, Robert D.; Birkedal, John Peter; Martin, David Franklin; Ruch, David S.; Kilgus, Douglas J.; Pollock, David C.; Harris, Mitchel Brion; Wiesler, Ethan Ron; Ward, William G.; Shilt, Jeffrey Scott; Koman, Andrew L.; Poehling, Gary G.; Kulp, Brenda; Creevy, William R.; Stein, Andrew B.; Bono, Christopher T.; Einhorn, Thomas A.; Brown, T. Desmond; Pacicca, Donna; Sledge, John B.; Foster, Timothy E.; Voloshin, Ilva; Bolton, Jill; Carlisle, Hope; Shaughnessy, Lisa; Ombremsky, William T.; LeCroy, C. Michael; Meinberg, Eric G.; Messer, Terry M.; Craig, William L.; Dirschl, Douglas R.; Caudle, Robert; Harris, Tim; Elhert, Kurt; Hage, William; Jones, Robert; Piedrahita, Luis; Schricker, Paul O.; Driver, Robin; Godwin, Jean; Hansley, Gloria; Obremskey, William Todd; Kregor, Philip James; Tennent, Gregory; Truchan, Lisa M.; Sciadini, Marcus; Shuler, Franklin D.; Driver, Robin E.; Nading, Mary Alice; Neiderstadt, Jacky; Vap, Alexander R.; Vallier, Heather A.; Patterson, Brendan M.; Wilber, John H.; Wilber, Roger G.; Sontich, John K.; Moore, Timothy Alan; Brady, Drew; Cooperman, Daniel R.; Davis, John A.; Cureton, Beth Ann; Mandel, Scott; Orr, R. Douglas; Sadler, John T. S.; Hussain, Tousief; Rajaratnam, Krishan; Petrisor, Bradley; Drew, Brian; Bednar, Drew A.; Kwok, Desmond C. 
H.; Pettit, Shirley; Hancock, Jill; Cole, Peter A.; Smith, Joel J.; Brown, Gregory A.; Lange, Thomas A.; Stark, John G.; Levy, Bruce; Swiontkowski, Marc F.; Garaghty, Mary J.; Salzman, Joshua G.; Schutte, Carol A.; Tastad, Linda Toddie; Vang, Sandy; Seligson, David; Roberts, Craig S.; Malkani, Arthur L.; Sanders, Laura; Gregory, Sharon Allen; Dyer, Carmen; Heinsen, Jessica; Smith, Langan; Madanagopal, Sudhakar; Coupe, Kevin J.; Tucker, Jeffrey J.; Criswell, Allen R.; Buckle, Rosemary; Rechter, Alan Jeffrey; Sheth, Dhiren Shaskikant; Urquart, Brad; Trotscher, Thea; Anders, Mark J.; Kowalski, Joseph M.; Fineberg, Marc S.; Bone, Lawrence B.; Phillips, Matthew J.; Rohrbacher, Bernard; Stegemann, Philip; Mihalko, William M.; Buyea, Cathy; Augustine, Stephen J.; Jackson, William Thomas; Solis, Gregory; Ero, Sunday U.; Segina, Daniel N.; Berrey, Hudson B.; Agnew, Samuel G.; Fitzpatrick, Michael; Campbell, Lakina C.; Derting, Lynn; McAdams, June; Goslings, J. Carel; Ponsen, Kees Jan; Luitse, Jan; Kloen, Peter; Joosse, Pieter; Winkelhagen, Jasper; Duivenvoorden, Raphaël; Teague, David C.; Davey, Joseph; Sullivan, J. Andy; Ertl, William J. J.; Puckett, Timothy A.; Pasque, Charles B.; Tompkins, John F.; Gruel, Curtis R.; Kammerlocher, Paul; Lehman, Thomas P.; Puffinbarger, William R.; Carl, Kathy L.; Weber, Donald W.; Jomha, Nadr M.; Goplen, Gordon R.; Masson, Edward; Beaupre, Lauren A.; Greaves, Karen E.; Schaump, Lori N.; Jeray, Kyle J.; Goetz, David R.; Westberry, Davd E.; Broderick, J. Scott; Moon, Bryan S.; Tanner, Stephanie L.; Powell, James N.; Buckley, Richard E.; Elves, Leslie; Connolly, Stephen; Abraham, Edward P.; Eastwood, Donna; Steele, Trudy; Ellis, Thomas; Herzberg, Alex; Brown, George A.; Crawford, Dennis E.; Hart, Robert; Hayden, James; Orfaly, Robert M.; Vigland, Theodore; Vivekaraj, Maharani; Bundy, Gina L.; Miclau, Theodore; Matityahu, Amir; Coughlin, R. Richard; Kandemir, Utku; McClellan, R. Trigg; Lin, Cindy Hsin-Hua; Karges, David; Cramer, Kathryn; Watson, J. Tracy; Moed, Berton; Scott, Barbara; Beck, Dennis J.; Orth, Carolyn; Puskas, David; Clark, Russell; Jones, Jennifer; Egol, Kenneth A.; Paksima, Nader; France, Monet; Wai, Eugene K.; Johnson, Garth; Wilkinson, Ross; Gruszczynski, Adam T.; Vexler, Liisa

    2013-01-01

    Inadequate sample size and power in randomized trials can result in misleading findings. This study demonstrates the effect of sample size in a large clinical trial by evaluating the results of the Study to Prospectively evaluate Reamed Intramedullary Nails in Patients with Tibial fractures (SPRINT)

  19. Size variation in samples of fossil and recent murid teeth

    NARCIS (Netherlands)

    Freudenthal, M.; Martín Suárez, E.

    1990-01-01

    The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed and compared with those obtained for the Cricetidae.

  20. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    Science.gov (United States)

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

  1. Clipped speckle autocorrelation metric for spot size characterization of focused beam on a diffuse target.

    Science.gov (United States)

    Li, Yuanyang; Guo, Jin; Liu, Lisheng; Wang, Tingfeng; Tang, Wei; Jiang, Zhenhua

    2015-03-23

    The clipped speckle autocorrelation (CSA) metric is proposed for estimating the laser beam energy concentration on a remote diffuse target in a laser beam projection system with feedback information. Using the second order statistics of the intensity distribution of the fully developed speckle and the relation of the autocorrelation functions for the clipped and unclipped speckles, we present the theoretical expression of this metric as a function of the normalized CSA function. The simulation technique based on the equivalence of the spatial average and the ensemble time average is provided. Based on this simulation technique, we analyze the influence of the surface roughness of the target on this metric and then show the influencing factors of the metric performance, for example the finite sample effect and aperture size of the observation system. Experimental results are illustrated to examine the capability of this metric and the correctness of the discussion about the metric performance.

  2. The effects of fixation target size and luminance on microsaccades and square-wave jerks

    Directory of Open Access Journals (Sweden)

    Michael B. McCamy

    2013-02-01

    Full Text Available A large number of classic and contemporary vision studies require subjects to fixate a target. Target fixation serves as a normalizing factor across studies, promoting the field’s ability to compare and contrast experiments. Yet, fixation target parameters, including luminance, contrast, size, shape and color, vary across studies, potentially affecting the interpretation of results. Previous research on the effects of fixation target size and luminance on the control of fixation position rendered conflicting results, and no study has examined the effects of fixation target characteristics on square-wave jerks, the most common type of saccadic intrusion. Here we set out to determine the effects of fixation target size and luminance on the characteristics of microsaccades and square-wave jerks, over a large range of stimulus parameters. Human subjects fixated a circular target with varying luminance and size while we recorded their eye movements with an infrared video tracker (EyeLink 1000, SR Research). We detected microsaccades and SWJs automatically with objective algorithms developed previously. Microsaccade rates decreased linearly and microsaccade magnitudes increased linearly with target size. The percent of microsaccades forming part of SWJs decreased, and the time from the end of the initial SWJ saccade to the beginning of the second SWJ saccade (SWJ inter-saccadic interval; ISI) increased with target size. The microsaccadic preference for horizontal direction also decreased moderately with target size. Target luminance did not significantly affect microsaccades or SWJs, however. In the absence of a fixation target, microsaccades became scarcer and larger, while SWJ prevalence decreased and SWJ ISIs increased. Thus, the choice of fixation target can affect experimental outcomes, especially in human factors and in visual and oculomotor studies. These results have implications for previous and future research conducted under fixation

  3. The effects of fixation target size and luminance on microsaccades and square-wave jerks.

    Science.gov (United States)

    McCamy, Michael B; Najafian Jazi, Ali; Otero-Millan, Jorge; Macknik, Stephen L; Martinez-Conde, Susana

    2013-01-01

    A large number of classic and contemporary vision studies require subjects to fixate a target. Target fixation serves as a normalizing factor across studies, promoting the field's ability to compare and contrast experiments. Yet, fixation target parameters, including luminance, contrast, size, shape and color, vary across studies, potentially affecting the interpretation of results. Previous research on the effects of fixation target size and luminance on the control of fixation position rendered conflicting results, and no study has examined the effects of fixation target characteristics on square-wave jerks, the most common type of saccadic intrusion. Here we set out to determine the effects of fixation target size and luminance on the characteristics of microsaccades and square-wave jerks, over a large range of stimulus parameters. Human subjects fixated a circular target with varying luminance and size while we recorded their eye movements with an infrared video tracker (EyeLink 1000, SR Research). We detected microsaccades and SWJs automatically with objective algorithms developed previously. Microsaccade rates decreased linearly and microsaccade magnitudes increased linearly with target size. The percent of microsaccades forming part of SWJs decreased, and the time from the end of the initial SWJ saccade to the beginning of the second SWJ saccade (SWJ inter-saccadic interval; ISI) increased with target size. The microsaccadic preference for horizontal direction also decreased moderately with target size. Target luminance did not significantly affect microsaccades or SWJs, however. In the absence of a fixation target, microsaccades became scarcer and larger, while SWJ prevalence decreased and SWJ ISIs increased. Thus, the choice of fixation target can affect experimental outcomes, especially in human factors and in visual and oculomotor studies. These results have implications for previous and future research conducted under fixation conditions, and should

  4. Sample size reduction in groundwater surveys via sparse data assimilation

    KAUST Repository

    Hussain, Z.

    2013-04-01

    In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.

  5. Practical Approaches For Determination Of Sample Size In Paired Case-Control Studies

    OpenAIRE

    Demirel, Neslihan; Ozlem EGE ORUC; Gurler, Selma

    2016-01-01

    Objective: Cross-over designs and paired case-control studies used in clinical research are experimental designs that require dependent samples. Sample size determination is generally a difficult step in planning the statistical design. The aim of this study is to provide researchers with a practical approach for determining the sample size in paired case-control studies. Material and Methods: In this study, determination of sample size is mentioned in detail i...
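
    A minimal sketch of a dependent-samples calculation of this kind is given below for a continuous paired outcome; matched binary outcomes would instead use a McNemar-type calculation. The function, the default values, and the relation sd_diff = sd*sqrt(2*(1 - rho)) are standard textbook assumptions, not the article's specific approach.

      import math
      from scipy.stats import norm

      def paired_n(sd_diff, min_difference, alpha=0.05, power=0.80):
          """Number of pairs for a paired comparison of means, normal approximation.
          sd_diff is the SD of the within-pair differences; with correlation rho
          between the paired measurements, sd_diff = sd * sqrt(2 * (1 - rho))."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil((z * sd_diff / min_difference) ** 2)

      # Stronger within-pair correlation -> smaller sd_diff -> fewer pairs needed.
      sd, rho = 10.0, 0.6
      print(paired_n(sd * math.sqrt(2 * (1 - rho)), min_difference=4.0))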

  6. Impact of metric and sample size on determining malaria hotspot boundaries.

    Science.gov (United States)

    Stresman, Gillian H; Giorgi, Emanuele; Baidjoe, Amrish; Knight, Phil; Odongo, Wycliffe; Owaga, Chrispin; Shagari, Shehu; Makori, Euniah; Stevenson, Jennifer; Drakeley, Chris; Cox, Jonathan; Bousema, Teun; Diggle, Peter J

    2017-04-12

    The spatial heterogeneity of malaria suggests that interventions may be targeted for maximum impact. It is unclear to what extent different metrics lead to consistent delineation of hotspot boundaries. Using data from a large community-based malaria survey in the western Kenyan highlands, we assessed the agreement between a model-based geostatistical (MBG) approach to detect hotspots using Plasmodium falciparum parasite prevalence and serological evidence for exposure. Malaria transmission was widespread and highly heterogeneous, with one third of the total population living in hotspots regardless of metric tested. Moderate agreement (Kappa = 0.424) was observed between hotspots defined based on parasite prevalence by polymerase chain reaction (PCR) and the prevalence of antibodies to two P. falciparum antigens (MSP-1, AMA-1). While numerous biologically plausible hotspots were identified, their detection strongly relied on the proportion of the population sampled. When only 3% of the population was sampled, no PCR-derived hotspots were reliably detected and at least 21% of the population was needed for reliable results. Similar results were observed for hotspots of seroprevalence. Hotspot boundaries are driven by the malaria diagnostic and sample size used to inform the model. These findings warn against the simplistic use of spatial analysis on available data to target malaria interventions in areas where hotspot boundaries are uncertain.

  7. Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !

    NARCIS (Netherlands)

    van Breukelen, Gerard J.P.; Candel, Math J.J.M.

    2012-01-01

    Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
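
    The usual starting point for such guidelines is the design-effect inflation sketched below; the numbers and the function are illustrative and do not reproduce the paper's optimal-design equation, which also handles unknown intraclass correlation and varying cluster sizes.

      import math

      def cluster_trial_size(n_individual, cluster_size, icc):
          """Inflate an individually randomized per-arm sample size by the design
          effect 1 + (m - 1) * ICC and convert it to a number of clusters per arm."""
          design_effect = 1 + (cluster_size - 1) * icc
          n_total = n_individual * design_effect             # individuals per arm
          k_clusters = math.ceil(n_total / cluster_size)     # clusters per arm
          return math.ceil(n_total), k_clusters

      print(cluster_trial_size(n_individual=128, cluster_size=20, icc=0.05))
      # -> roughly 250 individuals per arm, i.e., about 13 clusters of 20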

  8. Final work plan for targeted sampling at Webber, Kansas.

    Energy Technology Data Exchange (ETDEWEB)

    LaFreniere, L. M.; Environmental Science Division

    2006-05-01

    This Work Plan outlines the scope of work for targeted sampling at Webber, Kansas (Figure 1.1). This activity is being conducted at the request of the Kansas Department of Health and Environment (KDHE), in accordance with Section V of the Intergovernmental Agreement between the KDHE and the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA). Data obtained in this sampling event will be used to (1) evaluate the current status of previously detected contamination at Webber and (2) determine whether the site requires further action. This work is being performed on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory. Argonne is a nonprofit, multidisciplinary research center operated by the University of Chicago for the U.S. Department of Energy (DOE). The CCC/USDA has entered into an interagency agreement with DOE, under which Argonne provides technical assistance to the CCC/USDA with environmental site characterization and remediation at its former grain storage facilities. Argonne has issued a Master Work Plan (Argonne 2002) that describes the general scope of and guidance for all investigations at former CCC/USDA facilities in Kansas. The Master Work Plan, approved by the KDHE, contains the materials common to investigations at all locations in Kansas. This document should be consulted for complete details of the technical activities proposed at the former CCC/USDA facility in Webber.

  9. How many is enough? Determining optimal sample sizes for normative studies in pediatric neuropsychology.

    Science.gov (United States)

    Bridges, Ana J; Holler, Karen A

    2007-11-01

    The purpose of this investigation was to determine how confidence intervals (CIs) for pediatric neuropsychological norms vary as a function of sample size, and to determine optimal sample sizes for normative studies. First, the authors calculated 95% CIs for a set of published pediatric norms for four commonly used neuropsychological instruments. Second, 95% CIs were calculated for varying sample size (from n = 5 to n = 500). Results suggest that some pediatric norms have unacceptably wide CIs, and normative studies ought optimally to use 50 to 75 participants per cell. Smaller sample sizes may lead to overpathologizing results, while the cost of obtaining larger samples may not be justifiable.
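
    The diminishing-returns argument can be seen from a one-line calculation of the CI half-width for a normative mean; the sketch below uses a normal approximation with the SD fixed at 1, which is an illustrative simplification of the exact intervals used in the paper.

      import math
      from scipy.stats import norm

      def ci_halfwidth(sd, n, conf=0.95):
          """Half-width of the CI for a normative mean with n participants per cell
          (normal approximation)."""
          return norm.ppf(0.5 + conf / 2) * sd / math.sqrt(n)

      # The interval narrows quickly up to roughly 50-75 per cell, then only slowly
      # thereafter (values are in SD units, i.e., sd = 1).
      for n in (5, 10, 25, 50, 75, 200, 500):
          print(n, round(ci_halfwidth(1.0, n), 3))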

  10. AUV mapping and targeted ROV sampling on the Alarcon Rise

    Science.gov (United States)

    Clague, D. A.; Caress, D. W.; Dreyer, B. M.; Paduan, J. B.; Lundsten, L.; Bowles, J.; Castillo, P.; France, R. G.; Portner, R. A.; Spelz, R. M.; Zierenberg, R. A.

    2015-12-01

    Alarcon Rise, the northernmost bare-rock East Pacific Rise segment, and its intersections with the adjacent Pescadero and Tamayo Transforms were mapped at 10-cm vertical and 1-m lateral resolution using an AUV. The ~50 km long ridge segment is the first completely mapped at such high-resolution. Using the AUV base maps, targeted sampling during 15 ROV dives and 29 wax-tip cores in 2012 and 2015 recovered 322 precisely located glassy lavas. Melts are most primitive (MgO>8.5%) near the shallowest part of the segment where a chemically heterogeneous (7.96-8.73% MgO) sheet flow erupted from a 9 km-long fissure, ~1/3 from the southwest end. Four active black-smoker hydrothermal fields were discovered here using the maps. Inactive fields occur to the north. Lavas are N- to T-MORB with an off-axis E-MORB near mid-segment. A cross-axis transect to the northwest shows similar lava chemistry to 3.4 km off-axis. Extensive flows with glass MgO of 5.3-7.3% and 6.7-7.5% occur on the Tamayo and Pescadero Transforms, respectively. The flows have variable sediment cover indicating a wide age range. The transforms have sediment domes likely uplifted and deformed by sills. Most domes are surrounded by younger lava flows. Magmatic activity on the transforms indicates they are transtensional. An unusually rough faulted terrain on the northeastern end of the segment occurs near the northwestern edge of the neovolcanic zone. Sampling during 5 ROV dives recovered 110 glassy samples of rhyolite, dacite, andesite, and basaltic andesite from the 8 km-long fault-bounded region with basalt recovered along the southeastern part of the zone. The samples form an almost continuous sequence with glass compositions from 50-77% SiO2 and 8-0.05% MgO and include the first rhyolitic lavas from the global submarine ridge system. Isotopic data preclude significant continental crustal contamination, indicating the evolved rocks formed by fractional crystallization and magma mixing of mantle derived melts.

  11. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    OpenAIRE

    Wolf, Erika J.; Harrington, Kelly M.; Shaunna L Clark; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb. This study used Monte Carlo data simulation techniques to evaluate sample size requirements for common applied SEMs. Across a series of simulations, we...

  12. Bayesian sample size determination for a clinical trial with correlated continuous and binary outcomes.

    Science.gov (United States)

    Stamey, James D; Natanegara, Fanni; Seaman, John W

    2013-01-01

    In clinical trials, multiple outcomes are often collected in order to simultaneously assess effectiveness and safety. We develop a Bayesian procedure for determining the required sample size in a regression model where a continuous efficacy variable and a binary safety variable are observed. The sample size determination procedure is simulation based. The model accounts for correlation between the two variables. Through examples we demonstrate that savings in total sample size are possible when the correlation between these two variables is sufficiently high.

  13. Power and sample size determination for measures of environmental impact in aquatic systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammann, L.P. [Univ. of Texas, Richardson, TX (United States); Dickson, K.L.; Waller, W.T.; Kennedy, J.H. [Univ. of North Texas, Denton, TX (United States); Mayer, F.L.; Lewis, M. [Environmental Protection Agency, Gulf Breeze, FL (United States)

    1994-12-31

    To effectively monitor the status of various freshwater and estuarine ecological systems, it is necessary to understand the statistical power associated with the measures of ecological health that are appropriate for each system. These power functions can then be used to determine sample sizes that are required to attain targeted change detection likelihoods. A number of different measures have been proposed and are used for such monitoring. These include diversity and evenness indices, richness, and organism counts. Power functions can be estimated when preliminary or historical data are available for the region and system of interest. Unfortunately, there are a number of problems associated with the computation of power functions and sample sizes for these measures. These problems include the presence of outliers, collinearity among the variables, and non-normality of count data. The problems, and appropriate methods to compute the power functions, for each of the commonly employed measures of ecological health will be discussed. In addition, the relationship between power and the level of taxonomic classification used to compute the measures of diversity, evenness, richness, and organism counts will be discussed. Methods for computation of the power functions will be illustrated using data sets from previous EPA studies.
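
    When preliminary or historical data are available, power functions of this kind are often estimated by Monte Carlo simulation; the sketch below imposes a hypothetical proportional change on resampled historical values and uses a nonparametric test, since outliers and non-normal counts are the stated complications. The lognormal "historical" data, the 20% effect, and all names are placeholder assumptions, not values from the EPA studies.

      import numpy as np
      from scipy.stats import mannwhitneyu

      def simulated_power(baseline, effect_frac=0.20, n=30, alpha=0.05, n_sim=2000, seed=1):
          """Monte Carlo power for detecting a proportional shift in an ecological
          metric (e.g., a diversity index or organism count) by resampling
          historical/preliminary data and applying a nonparametric test."""
          rng = np.random.default_rng(seed)
          baseline = np.asarray(baseline, dtype=float)
          hits = 0
          for _ in range(n_sim):
              ref = rng.choice(baseline, size=n, replace=True)
              imp = rng.choice(baseline, size=n, replace=True) * (1 - effect_frac)
              if mannwhitneyu(ref, imp, alternative="two-sided").pvalue < alpha:
                  hits += 1
          return hits / n_sim

      historical = np.random.default_rng(0).lognormal(mean=3.0, sigma=0.6, size=120)
      for n in (10, 20, 40, 80):     # power curve over candidate sample sizes
          print(n, simulated_power(historical, n=n))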

  14. Issues of sample size in sensitivity and specificity analysis with special reference to oncology

    Directory of Open Access Journals (Sweden)

    Atul Juneja

    2015-01-01

    Full Text Available Sample size is one of the basic issues that medical researchers, including oncologists, face in any research program. The current communication discusses the computation of sample size when sensitivity and specificity are being evaluated. The article presents situations that the researcher can easily visualize, to support the appropriate use of sample size techniques for sensitivity and specificity when a screening method for early detection of cancer is in question. Moreover, the researcher would be in a position to communicate efficiently with a statistician about sample size computation and, most importantly, about the applicability of the results under the conditions of the negotiated precision.

  15. Issues of sample size in sensitivity and specificity analysis with special reference to oncology.

    Science.gov (United States)

    Juneja, Atul; Sharma, Shashi

    2015-01-01

    Sample size is one of the basic issues that medical researchers, including oncologists, face in any research program. The current communication discusses the computation of sample size when sensitivity and specificity are being evaluated. The article presents situations that the researcher can easily visualize, to support the appropriate use of sample size techniques for sensitivity and specificity when a screening method for early detection of cancer is in question. Moreover, the researcher would be in a position to communicate efficiently with a statistician about sample size computation and, most importantly, about the applicability of the results under the conditions of the negotiated precision.
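
    A commonly used precision-based calculation of the sort the article describes is sketched below: size the diseased (or disease-free) subgroup so that the sensitivity (or specificity) estimate has the negotiated precision, then scale up by the expected prevalence among those screened. The functions and the example figures are illustrative assumptions, not the article's worked numbers.

      import math
      from scipy.stats import norm

      def n_for_sensitivity(se, precision, prevalence, conf=0.95):
          """Total screening sample size so that the sensitivity estimate has the
          requested absolute precision: cases needed, scaled up by prevalence."""
          z = norm.ppf(0.5 + conf / 2)
          n_diseased = (z ** 2) * se * (1 - se) / precision ** 2
          return math.ceil(n_diseased / prevalence)

      def n_for_specificity(sp, precision, prevalence, conf=0.95):
          z = norm.ppf(0.5 + conf / 2)
          n_healthy = (z ** 2) * sp * (1 - sp) / precision ** 2
          return math.ceil(n_healthy / (1 - prevalence))

      # e.g., expected Se = 0.90, Sp = 0.85, desired precision +/- 5%, prevalence 10%:
      print(n_for_sensitivity(0.90, 0.05, 0.10), n_for_specificity(0.85, 0.05, 0.10))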

  16. Sample Size for Measuring Grammaticality in Preschool Children from Picture-Elicited Language Samples

    Science.gov (United States)

    Eisenberg, Sarita L.; Guo, Ling-Yu

    2015-01-01

    Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…

  17. Randomized controlled trials 5: Determining the sample size and power for clinical trials and cohort studies.

    Science.gov (United States)

    Greene, Tom

    2015-01-01

    Performing well-powered randomized controlled trials is of fundamental importance in clinical research. The goal of sample size calculations is to assure that statistical power is acceptable while maintaining a small probability of a type I error. This chapter overviews the fundamentals of sample size calculation for standard types of outcomes for two-group studies. It considers (1) the problems of determining the size of the treatment effect that the studies will be designed to detect, (2) the modifications to sample size calculations to account for loss to follow-up and nonadherence, (3) the options when initial calculations indicate that the feasible sample size is insufficient to provide adequate power, and (4) the implication of using multiple primary endpoints. Sample size estimates for longitudinal cohort studies must take account of confounding by baseline factors.
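
    The standard adjustments mentioned in point (2) can be sketched in a few lines: dilution of the intention-to-treat effect by nonadherence inflates the sample size by the square of the dilution factor, and loss to follow-up inflates it further. The rates and the function below are illustrative defaults, not the chapter's examples.

      import math

      def adjusted_sample_size(n_ideal, loss_to_followup=0.10, drop_out=0.05, drop_in=0.05):
          """Inflate an ideal per-group sample size for a two-group trial analysed by
          intention-to-treat: nonadherence (treatment drop-out plus control drop-in)
          dilutes the effect by (1 - d_out - d_in), so n is divided by that factor
          squared; loss to follow-up removes outcomes, so n is divided by (1 - loss)."""
          dilution = 1.0 - drop_out - drop_in
          n = n_ideal / dilution ** 2
          n /= (1.0 - loss_to_followup)
          return math.ceil(n)

      print(adjusted_sample_size(200))   # 200 ideal -> about 275 per group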

  18. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., 10-20 patients), mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times their actual values extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
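
    The CI95/med idea can be reproduced with a small simulation: repeatedly draw surveys of a given size from a pool of dose values, and express the spread of the survey mean as a percentage of the pool median. The lognormal pool below is a stand-in for real CTDIvol/DLP data, and taking the interval around the survey mean is a simplification of the paper's exact procedure.

      import numpy as np

      def ci95_over_median(dose_values, sample_size, n_rep=5000, seed=0):
          """Width of the 95% interval of the survey mean, as a percentage of the
          pool median, for repeated surveys of `sample_size` exams."""
          rng = np.random.default_rng(seed)
          dose_values = np.asarray(dose_values, dtype=float)
          means = [rng.choice(dose_values, sample_size, replace=True).mean()
                   for _ in range(n_rep)]
          lo, hi = np.percentile(means, [2.5, 97.5])
          return 100.0 * (hi - lo) / np.median(dose_values)

      pool = np.random.default_rng(1).lognormal(mean=6.2, sigma=0.5, size=20000)  # fake DLPs
      for n in (10, 20, 100, 400):
          print(n, round(ci95_over_median(pool, n), 1), "%")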

  19. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Science.gov (United States)

    2010-10-01

    Title 45, Public Welfare (2010-10-01 edition), Appendix C to Part 1356: Calculating Sample Size for NYTD Follow-Up... Using the Finite Population Correction: the FPC is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...; the FPC is not applied when the sample is drawn from a population of...

  20. Small, medium, large or supersize? The development and evaluation of interventions targeted at portion size

    NARCIS (Netherlands)

    Vermeer, W.M.; Steenhuis, I.H.M.; Poelman, M.P.

    2014-01-01

    In the past decades, portion sizes of high-caloric foods and drinks have increased and can be considered an important environmental obesogenic factor. This paper describes a research project in which the feasibility and effectiveness of environmental interventions targeted at portion size was

  1. Implications of sampling design and sample size for national carbon accounting systems.

    Science.gov (United States)

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

    Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error was calculated in relation to total survey cost. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV-systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.

  2. Post-stratified estimation: with-in strata and total sample size recommendations

    Science.gov (United States)

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  3. Sample size calculations for pilot randomized trials: a confidence interval approach.

    Science.gov (United States)

    Cocks, Kim; Torgerson, David J

    2013-02-01

    To describe a method using confidence intervals (CIs) to estimate the sample size for a pilot randomized trial. Using one-sided CIs and the estimated effect size that would be sought in a large trial, we calculated the sample size needed for pilot trials. Using an 80% one-sided CI, we estimated that a pilot trial should have at least 9% of the sample size of the main planned trial. Using the estimated effect size difference for the main trial and using a one-sided CI, this allows us to calculate a sample size for a pilot trial, which will make its results more useful than at present. Copyright © 2013 Elsevier Inc. All rights reserved.
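
    The order of magnitude of the 9% figure can be recovered from the z-values alone, as sketched below: the pilot is sized so that a one-sided CI of the chosen level has the same half-width as the effect the main trial is powered to detect. This is a reconstruction of the reasoning under standard assumptions (5% two-sided alpha, 80% power), not the authors' exact derivation.

      from scipy.stats import norm

      def pilot_fraction(alpha=0.05, power=0.80, ci_level=0.80):
          """Ratio of pilot to main-trial sample size when the pilot is sized so a
          one-sided CI of the chosen level matches the main trial's target effect."""
          z_main = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # drives the main-trial n
          z_pilot = norm.ppf(ci_level)                         # one-sided CI for the pilot
          return (z_pilot / z_main) ** 2

      print(round(pilot_fraction(), 3))   # ~0.09, i.e., about 9% of the main trial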

  4. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters

    Energy Technology Data Exchange (ETDEWEB)

    Xu Huijun; Gordon, J. James; Siebers, Jeffrey V. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2011-02-15

    Purpose: A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Methods: Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or

  5. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    Interestingly, the dislocation plasticity of the single-crystal AlN strongly depends on specimen size. As shown in Fig. 5a and b, the large plastic... US Army Research Laboratory, report ARL-RP-0528, AUG 2015.

  6. Not too big, not too small: a goldilocks approach to sample size selection.

    Science.gov (United States)

    Broglio, Kristine R; Connor, Jason T; Berry, Scott M

    2014-01-01

    We present a Bayesian adaptive design for a confirmatory trial to select a trial's sample size based on accumulating data. During accrual, frequent sample size selection analyses are made and predictive probabilities are used to determine whether the current sample size is sufficient or whether continuing accrual would be futile. The algorithm explicitly accounts for complete follow-up of all patients before the primary analysis is conducted. We refer to this as a Goldilocks trial design, as it is constantly asking the question, "Is the sample size too big, too small, or just right?" We describe the adaptive sample size algorithm, describe how the design parameters should be chosen, and show examples for dichotomous and time-to-event endpoints.

  7. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    Science.gov (United States)

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

  8. Mechanistic Study on Size Exclusion of NOM onto Porous TiO₂ for Target Contaminants Decomposition.

    Science.gov (United States)

    Zakersalehi, Abolfazl; Zamankhan, Hesam; Nadagouda, Mallikarjuna; Choi, Hyeok

    2017-11-01

    Nonselective oxidation of organic chemicals during TiO2 photocatalytic water treatment significantly inhibits the decomposition of toxic target contaminants, particularly in the presence of abundant but less toxic natural organic matter (NOM). To minimize the adverse effect of NOM, the authors investigated physical size exclusion of large NOM by mesoporous TiO2 photocatalysts, which allows small target contaminants to selectively access the porous structure for subsequent chemical reaction. Various treatment scenarios, tested with different targets (ibuprofen, microcystin-LR), competitors (humic acid, polyethylene glycol), and a series of mesoporous TiO2 materials, confirmed the size exclusion mechanism. The impact of the porous structure of TiO2 on selectivity and reactivity is discussed, considering the size difference between the targets and NOM.

  9. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, starting from an initial sample size of 10 plants and increasing in increments of five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap resampling with replacement. The sample size for each correlation coefficient was determined as that for which the amplitude of the 95% confidence interval was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation; accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant have less precision. To estimate the correlation coefficient between cherry tomato productivity variables with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse and 200 plants in a 200 m² greenhouse.
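
    Assuming the plant measurements are available as paired arrays x and y, a bare-bones version of the bootstrap procedure described above could look like the following; the variable names, starting size and step are illustrative only.

        import numpy as np

        def ci_amplitude(x, y, n, n_boot=3000, rng=None):
            """Width of the 95% percentile bootstrap CI of Pearson r at planned size n."""
            rng = rng or np.random.default_rng(42)
            r = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, len(x), size=n)   # resampling with replacement
                r[b] = np.corrcoef(x[idx], y[idx])[0, 1]
            lo, hi = np.percentile(r, [2.5, 97.5])
            return hi - lo

        def required_sample_size(x, y, target_width=0.4, start=10, step=5):
            """Smallest planned sample size whose bootstrap 95% CI width <= target."""
            n = start
            while n <= len(x) and ci_amplitude(x, y, n) > target_width:
                n += step
            return n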

  10. Automated Gel Size Selection to Improve the Quality of Next-generation Sequencing Libraries Prepared from Environmental Water Samples.

    Science.gov (United States)

    Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick

    2015-04-17

    Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.

  11. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous

  12. Determining sample size and a passing criterion for respirator fit-test panels.

    Science.gov (United States)

    Landsittel, D; Zhuang, Z; Newcomb, W; Berry Ann, R

    2014-01-01

    Few studies have proposed methods for sample size determination and specification of the passing criterion (e.g., number needed to pass from a given size panel) for respirator fit-tests. One approach is to account for between- and within-subject variability, and thus take full advantage of the multiple donning measurements within subject, using a random effects model. The corresponding sample size calculation, however, may be difficult to implement in practice, as it depends on the model-specific and test panel-specific variance estimates, and thus does not yield a single sample size or specific cutoff for the number needed to pass. A simple binomial approach is therefore proposed to simultaneously determine both the required sample size and the optimal cutoff for the number of subjects needed to achieve a passing result. The method essentially conducts a global search of the type I and type II errors under different null and alternative hypotheses, across the range of possible sample sizes, to find the lowest sample size which yields at least one cutoff satisfying, or approximately satisfying, all pre-determined limits for the different error rates. Benchmark testing of 98 respirators (conducted by the National Institute for Occupational Safety and Health) is used to illustrate the binomial approach and show how sample size estimates from the random effects model can vary substantially depending on estimated variance components. For the binomial approach, probability calculations show that a sample size of 35 to 40 yields acceptable error rates under different null and alternative hypotheses. For the random effects model, the required sample sizes are generally smaller, but can vary substantially based on the estimated variance components. Overall, despite some limitations, the binomial approach represents a highly practical approach with reasonable statistical properties.
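
    One way to implement the global search described above is sketched here (Python/scipy); the pass probabilities under the null and alternative hypotheses are illustrative placeholders, not the values used in the NIOSH benchmark study.

        from scipy.stats import binom

        def find_panel_design(p0, p1, alpha_max=0.05, beta_max=0.20, n_range=range(5, 101)):
            """Smallest panel size n and pass cutoff c such that
            P(pass | p0) <= alpha_max and P(fail | p1) <= beta_max."""
            for n in n_range:
                for c in range(1, n + 1):
                    type1 = 1.0 - binom.cdf(c - 1, n, p0)  # passing an inadequate respirator
                    type2 = binom.cdf(c - 1, n, p1)        # failing an adequate respirator
                    if type1 <= alpha_max and type2 <= beta_max:
                        return n, c, type1, type2
            return None

        # Hypothetical hypotheses: inadequate respirators fit 60% of subjects,
        # adequate respirators fit 90%.
        print(find_panel_design(p0=0.60, p1=0.90))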

  13. Sample size and power calculations based on generalized linear mixed models with correlated binary outcomes.

    Science.gov (United States)

    Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R

    2008-08-01

    The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.

  14. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    Science.gov (United States)

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
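
    The sampling-variance argument can be made concrete with a toy two-stage matrix model (hypothetical vital rates, not the authors' demographic data): vital rates are re-estimated from samples of n individuals, λ is recomputed, and the bias relative to the population value is reported.

        import numpy as np

        rng = np.random.default_rng(7)

        def lambda_from_rates(survival, growth, fecundity):
            """Dominant eigenvalue of a 2-stage (juvenile/adult) projection matrix."""
            A = np.array([[survival[0] * (1 - growth), fecundity],
                          [survival[0] * growth,       survival[1]]])
            return np.max(np.abs(np.linalg.eigvals(A)))

        true = dict(survival=(0.5, 0.9), growth=0.3, fecundity=1.2)  # "population" rates
        lam_true = lambda_from_rates(**true)

        def lambda_bias(n, n_rep=1000):
            """Mean bias of lambda when vital rates come from samples of n individuals."""
            lams = []
            for _ in range(n_rep):
                s_juv = rng.binomial(n, true["survival"][0]) / n
                s_ad = rng.binomial(n, true["survival"][1]) / n
                grow = rng.binomial(n, true["growth"]) / n
                fec = rng.poisson(true["fecundity"] * n) / n
                lams.append(lambda_from_rates((s_juv, s_ad), grow, fec))
            return np.mean(lams) - lam_true

        for n in (10, 25, 50, 100, 500):
            print(n, round(lambda_bias(n), 4))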

  15. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  16. New shooting algorithms for transition path sampling: centering moves and varied-perturbation sizes for improved sampling.

    Science.gov (United States)

    Rowley, Christopher N; Woo, Tom K

    2009-12-21

    Transition path sampling has been established as a powerful tool for studying the dynamics of rare events. The trajectory generation moves of this Monte Carlo procedure, shooting moves and shifting moves, were developed primarily for rate constant calculations, although this method has been more extensively used to study the dynamics of reactive processes. We have devised and implemented three alternative trajectory generation moves for use with transition path sampling. The centering-shooting move incorporates a shifting move into a shooting move, which centers the transition period in the middle of the trajectory, eliminating the need for shifting moves and generating an ensemble where the transition event consistently occurs near the middle of the trajectory. We have also developed varied-perturbation size shooting moves, wherein smaller perturbations are made if the shooting point is far from the transition event. The trajectories generated using these moves decorrelate significantly faster than with conventional, constant-sized perturbations. This results in an increase in the statistical efficiency by a factor of 2.5-5 when compared to the conventional shooting algorithm. On the other hand, the new algorithm breaks detailed balance and introduces a small bias in the transition time distribution. We have developed a modification of this varied-perturbation size shooting algorithm that preserves detailed balance, albeit at the cost of decreased sampling efficiency. Both varied-perturbation size shooting algorithms are found to have improved sampling efficiency when compared to the original constant perturbation size shooting algorithm.

  17. Empirically determining the sample size for large-scale gene network inference algorithms.

    Science.gov (United States)

    Altay, G

    2012-04-01

    The performance of genome-wide gene regulatory network inference algorithms depends on the sample size. It is generally considered that the larger the sample size, the better the gene network inference performance. Nevertheless, there is inadequate information on determining the sample size for optimal performance. In this study, the author systematically demonstrates the effect of sample size on information-theory-based gene network inference algorithms with an ensemble approach. The empirical results showed that the inference performances of the considered algorithms tend to converge after a particular sample size region. As a specific example, a sample size region around ≃64 is sufficient to obtain most of the inference performance with respect to precision using the representative algorithm C3NET on synthetic steady-state data sets of Escherichia coli and also on a time-series data set of Homo sapiens subnetworks. The author verified the convergence result on a large, real data set of E. coli as well. The results give biologists evidence for better designing experiments to infer gene networks. Further, the effect of the cutoff on inference performance over various sample sizes is considered. [Includes supplementary material].

  18. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  19. Reduced Sampling Size with Nanopipette for Tapping-Mode Scanning Probe Electrospray Ionization Mass Spectrometry Imaging.

    Science.gov (United States)

    Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi

    2016-01-01

    Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing sampling size is an important goal for high-resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. On the other hand, ionization efficiency increased with decreasing solvent flow rate. Our results indicate the compatibility between a reduced sampling area and the ionization efficiency using a nanopipette. MSI of micropatterns of ink on a glass and a polymer substrate was also demonstrated.

  20. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly approximated by a binomial function without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. If the estimated accuracy diverges from the a priori accuracy, the sampling error will deviate from the expected error. Determining the size from a pilot sample (the theoretically correct procedure) is justified when no accuracy estimate is available for the study area, with reference to the intended use of the remote sensing product.
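
    The binomial approximation referred to above is the standard one for sizing an accuracy-assessment sample; a short sketch (with an assumed a priori accuracy and half-width, not values taken from the paper) is:

        from math import ceil
        from scipy.stats import norm

        def reference_sample_size(p_expected, half_width, confidence=0.95):
            """Number of reference samples needed to estimate map accuracy
            p_expected to within +/- half_width (binomial/normal approximation)."""
            z = norm.ppf(1 - (1 - confidence) / 2)
            return ceil(z**2 * p_expected * (1 - p_expected) / half_width**2)

        # A priori accuracy of 85% estimated to within +/- 5 percentage points
        print(reference_sample_size(0.85, 0.05))  # -> 196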

  1. A behavioural Bayes approach to the determination of sample size for clinical trials considering efficacy and safety: imbalanced sample size in treatment groups.

    Science.gov (United States)

    Kikuchi, Takashi; Gittins, John

    2011-08-01

    The behavioural Bayes approach to sample size determination for clinical trials assumes that the number of subsequent patients switching to a new drug from the current drug depends on the strength of the evidence for efficacy and safety that was observed in the clinical trials. The optimal sample size is the one which maximises the expected net benefit of the trial. The approach has been developed in a series of papers by Pezeshk and the present authors (Gittins JC, Pezeshk H. A behavioral Bayes method for determining the size of a clinical trial. Drug Information Journal 2000; 34: 355-63; Gittins JC, Pezeshk H. How Large should a clinical trial be? The Statistician 2000; 49(2): 177-87; Gittins JC, Pezeshk H. A decision theoretic approach to sample size determination in clinical trials. Journal of Biopharmaceutical Statistics 2002; 12(4): 535-51; Gittins JC, Pezeshk H. A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 2002; 36: 143-50; Kikuchi T, Pezeshk H, Gittins J. A Bayesian cost-benefit approach to the determination of sample size in clinical trials. Statistics in Medicine 2008; 27(1): 68-82; Kikuchi T, Gittins J. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety. Statistics in Medicine 2009; 28(18): 2293-306; Kikuchi T, Gittins J. A Bayesian procedure for cost-benefit evaluation of a new drug in multi-national clinical trials. Statistics in Medicine 2009 (Submitted)). The purpose of this article is to provide a rationale for experimental designs which allocate more patients to the new treatment than to the control group. The model uses a logistic weight function, including an interaction term linking efficacy and safety, which determines the number of patients choosing the new drug, and hence the resulting benefit. A Monte Carlo simulation is employed for the calculation. Having a larger group of patients on the new drug in general

  2. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2017-10-03

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ÊS estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
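
    A sketch of the post hoc sample-size calculation from a one-sample t-test at 80% power and α = 0.05 is given below; the effect-size values fed to it are placeholders standing in for a point estimate and its CI limits, not the study's numbers.

        from math import sqrt
        from scipy.stats import t, nct

        def n_for_effect_size(es, power=0.80, alpha=0.05, n_max=5000):
            """Smallest n giving a one-sample t-test the requested power at effect size es."""
            for n in range(3, n_max):
                crit = t.ppf(1 - alpha / 2, df=n - 1)
                achieved = 1 - nct.cdf(crit, df=n - 1, nc=es * sqrt(n))
                if achieved >= power:
                    return n
            return None

        # Hypothetical effect-size point estimate and interval limits
        for es in (0.45, 0.25, 0.90):
            print(es, n_for_effect_size(es))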

  3. New method to estimate the sample size for calculation of a proportion assuming binomial distribution.

    Science.gov (United States)

    Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio

    2013-10-01

    Nowadays the formula used to calculate the sample size for estimating a proportion (such as a prevalence) is based on the Normal distribution; however, it could instead be based on the Binomial distribution, whose confidence interval can be calculated using the Wilson Score method. Comparing the two formulae (Normal and Binomial distributions), the variation in the amplitude of the confidence intervals is relevant in the tails and the center of the curves. In order to calculate the needed sample size, we simulated an iterative sampling procedure, which shows an underestimation of the sample size for prevalence values close to 0 or 1, and an overestimation for values close to 0.5. Based on these results, we propose an algorithm based on the Wilson Score method that provides sample size values similar to those obtained empirically by simulation. Copyright © 2013 Elsevier Ltd. All rights reserved.
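
    One plausible reading of the proposed approach, searching for the smallest n whose Wilson score interval reaches the required precision, is sketched here; the expected prevalence and precision are illustrative, and the authors' exact iterative procedure may differ.

        from math import sqrt
        from scipy.stats import norm

        def wilson_halfwidth(p, n, confidence=0.95):
            """Half-width of the Wilson score interval for an observed proportion p."""
            z = norm.ppf(1 - (1 - confidence) / 2)
            return (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))

        def sample_size_wilson(p_expected, precision, confidence=0.95):
            """Smallest n whose Wilson interval half-width is <= the desired precision."""
            n = 1
            while wilson_halfwidth(p_expected, n, confidence) > precision:
                n += 1
            return n

        # Expected prevalence of 5% estimated with an absolute precision of +/- 3%
        print(sample_size_wilson(0.05, 0.03))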

  4. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

    Directory of Open Access Journals (Sweden)

    Wei Lin Teoh

    Full Text Available Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.

  5. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Sample size calculations in clinical research should also be based on ethical principles.

    Science.gov (United States)

    Cesana, Bruno Mario; Antonelli, Paolo

    2016-03-18

    Sample size calculations based on too narrow a width, or with lower and upper confidence limits bounded by fixed cut-off points, not only increase power-based sample sizes to ethically unacceptable levels (thus making research practically unfeasible) but also greatly increase the costs and burdens of clinical trials. We propose an alternative method of combining the power of a statistical test and the probability of obtaining adequate precision (the power of the confidence interval) with an acceptable increase in power-based sample sizes.

  7. An Update on Using the Range to Estimate σ When Determining Sample Sizes.

    Science.gov (United States)

    Rhiel, George Steven; Markowski, Edward

    2017-04-01

    In this research, we develop a strategy for using a range estimator of σ when determining a sample size for estimating a mean. Previous research by Rhiel is extended to provide d_n values for use in calculating a range estimate of σ when working with sampling frames up to size 1,000,000. This allows the use of the range estimator of σ with "big data." A strategy is presented for using the range estimator of σ for determining sample sizes based on the d_n values developed in this study.
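
    The strategy reduces to estimating σ as range/d_n and plugging that estimate into the usual sample-size formula for a mean; a sketch follows, with d_n left as an input because the extended d_n tables of the study are not reproduced here (the numbers in the example are invented).

        from math import ceil
        from scipy.stats import norm

        def n_from_range(data_range, d_n, margin_of_error, confidence=0.95):
            """Sample size for estimating a mean, with sigma estimated as range / d_n.
            d_n must come from published tables (e.g., Rhiel's); none are hard-coded."""
            z = norm.ppf(1 - (1 - confidence) / 2)
            sigma_hat = data_range / d_n
            return ceil((z * sigma_hat / margin_of_error) ** 2)

        # Invented illustration: range of 40 units, d_n = 6, margin of error +/- 1 unit
        print(n_from_range(40, 6, 1))  # ~171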

  8. Contamination on AMS Sample Targets by Modern Carbon is Inevitable

    NARCIS (Netherlands)

    Paul, Dipayan; Been, Henk A.; Aerts-Bijma, Anita Th.; Meijer, Harro A.J.

    Accelerator mass spectrometry (AMS) measurements of the radiocarbon content in very old samples are often challenging and carry large relative uncertainties due to possible contaminations acquired during the preparation and storage steps. In case of such old samples, the natural surrounding levels

  9. Smaller Fixation Target Size Is Associated with More Stable Fixation and Less Variance in Threshold Sensitivity.

    Directory of Open Access Journals (Sweden)

    Kazunori Hirasawa

    Full Text Available The aims of this randomized observational case control study were to quantify fixation behavior during standard automated perimetry (SAP) with different fixation targets and to evaluate the relationship between fixation behavior and threshold variability at each test point in healthy young participants experienced with perimetry. SAP was performed on the right eyes of 29 participants using the Octopus 900 perimeter, program 32, dynamic strategy. The fixation targets of Point, Cross, and Ring were used for SAP. Fixation behavior was recorded using a wearable eye-tracking glass. All participants underwent SAP twice with each fixation target in a random fashion. Fixation behavior was quantified by calculating the bivariate contour ellipse area (BCEA) and the frequency of deviation from the fixation target. The BCEAs (deg²) of the Point, Cross, and Ring targets were 1.11, 1.46, and 2.02, respectively. In all cases, BCEA increased significantly with increasing fixation target size (p < 0.05). The logarithmic value of BCEA demonstrated the same tendency (p < 0.05). A positive correlation was identified between fixation behavior and threshold variability for the Point and Cross targets (ρ = 0.413-0.534, p < 0.05). Fixation behavior increased with increasing fixation target size. Moreover, a larger fixation behavior tended to be associated with a higher threshold variability. A small fixation target is recommended during the visual field test.

  10. Publishing nutrition research: a review of sampling, sample size, statistical analysis, and other key elements of manuscript preparation, Part 2.

    Science.gov (United States)

    Boushey, Carol J; Harris, Jeffrey; Bruemmer, Barbara; Archer, Sujata L

    2008-04-01

    Members of the Board of Editors recognize the importance of providing a resource for researchers to ensure quality and accuracy of reporting in the Journal. This second monograph of a periodic series focuses on study sample selection, sample size, and common statistical procedures using parametric methods, and the presentation of statistical methods and results. Attention to sample selection and sample size is critical to avoid study bias. When outcome variables adhere to a normal distribution, then parametric procedures can be used for statistical inference. Documentation that clearly outlines the steps used in the research process will advance the science of evidence-based practice in nutrition and dietetics. Real examples from problem sets and published literature are provided, as well as reference to books and online resources.

  11. A multi-cyclone sampling array for the collection of size-segregated occupational aerosols.

    Science.gov (United States)

    Mischler, Steven E; Cauda, Emanuele G; Di Giuseppe, Michelangelo; Ortiz, Luis A

    2013-01-01

    In this study a serial multi-cyclone sampling array capable of simultaneously sampling particles of multiple size fractions, from an occupational environment, for use in in vivo and in vitro toxicity studies and physical/chemical characterization, was developed and tested. This method is an improvement over current methods used to size-segregate occupational aerosols for characterization, due to its simplicity and its ability to collect sufficient masses of nano- and ultrafine-sized particles for analysis. This method was evaluated in a chamber providing a uniform atmosphere of dust concentrations using crystalline silica particles. The multi-cyclone sampling array was used to segregate crystalline silica particles into four size fractions, from a chamber concentration of 10 mg/m³. The size distributions of the particles collected at each stage were confirmed, in the air, before and after each cyclone stage. Once collected, the particle size distribution of each size fraction was measured using light scattering techniques to further confirm the size distributions. As a final confirmation, scanning electron microscopy was used to collect images of each size fraction. The results presented here, using multiple measurement techniques, show that this multi-cyclone system was able to successfully collect distinct size-segregated particles at sufficient masses to perform toxicological evaluations and physical/chemical characterization.

  12. Mineralogical, optical, geochemical, and particle size properties of four sediment samples for optical physics research

    Science.gov (United States)

    Bice, K.; Clement, S. C.

    1981-01-01

    X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.

  13. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology.

    Science.gov (United States)

    Brown, Caleb Marshall; Vavrek, Matthew J

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes.
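
    The subsampling experiment can be mimicked on synthetic data (the Alligator skull measurements are not reproduced here): subsample at several sizes, refit the log-log regression, and count how often allometry is detected rather than lost to Type II error.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Synthetic "population" with known positive allometry (slope 1.15)
        N = 500
        log_size = rng.uniform(1.0, 3.0, N)
        log_trait = 1.15 * log_size + rng.normal(scale=0.08, size=N)

        def detection_rate(n, n_rep=2000, alpha=0.05):
            """Fraction of subsamples of size n in which the slope differs from 1."""
            hits = 0
            for _ in range(n_rep):
                idx = rng.choice(N, size=n, replace=False)
                fit = stats.linregress(log_size[idx], log_trait[idx])
                t_stat = (fit.slope - 1.0) / fit.stderr
                p = 2 * stats.t.sf(abs(t_stat), df=n - 2)
                hits += p < alpha
            return hits / n_rep

        for n in (5, 10, 20, 40, 80):
            print(n, detection_rate(n))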

  14. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  15. Size dependence of the disruption threshold: laboratory examination of millimeter-centimeter porous targets

    Science.gov (United States)

    Nakamura, Akiko M.; Yamane, Fumiya; Okamoto, Takaya; Takasawa, Susumu

    2015-03-01

    The outcome of collision between small solid bodies is characterized by the threshold energy density Q*_s, the specific energy to shatter, which is defined as the ratio of projectile kinetic energy to the target mass (or the sum of target and projectile) needed to produce a largest intact fragment that contains one half the target mass. It is indicated theoretically and by numerical simulations that the disruption threshold Q*_s decreases with target size in the strength-dominated regime. The tendency was confirmed by laboratory impact experiments using non-porous rock targets (Housen and Holsapple, 1999; Nagaoka et al., 2014). In this study, we performed low-velocity impact disruption experiments on porous gypsum targets with porosity of 65-69% and of three different sizes to examine the size dependence of the disruption threshold for porous material. The gypsum specimens were shown to have a weaker volume dependence of static tensile strength than do the non-porous rocks. The disruption threshold also had a weaker dependence on size scale, Q*_s ∝ D^(-γ) with γ ≤ 0.25-0.26, while the previous laboratory studies showed γ = 0.40 for the non-porous rocks. The measurements at low velocity lead to a value of about 100 J kg⁻¹ for Q*_s, which is roughly one order of magnitude lower than the value of Q*_s for gypsum targets of 65% porosity impacted by projectiles at higher velocities. Such a clear dependence on the impact velocity was also shown by previous studies of gypsum targets with porosity of 50%.

  16. Sample Size Determination in a Chi-Squared Test Given Information from an Earlier Study.

    Science.gov (United States)

    Gillett, Raphael

    1996-01-01

    A rigorous method is outlined for using information from a previous study and explicitly taking into account the variability of an effect size estimate when determining sample size for a chi-squared test. This approach assures that the average power of all experiments in a discipline attains the desired level. (SLD)

  17. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

    Science.gov (United States)

    Schoeneberger, Jason A.

    2016-01-01

    The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

  18. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    ... in eight plant communities in the Nylsvley Nature Reserve. Illustrates with a table. Keywords: Botanical surveys; Grass density; Grasslands; Mixed Bushveld; Nylsvley Nature Reserve; Quadrat size species density; Small-quadrat method; Species density; Species richness; botany; sample size; method; survey; south africa

  19. Sample size calculations in clinical research should also be based on ethical principles

    OpenAIRE

    Cesana, Bruno Mario; Antonelli, Paolo

    2016-01-01

    Sample size calculations based on too narrow a width, or with lower and upper confidence limits bounded by fixed cut-off points, not only increase power-based sample sizes to ethically unacceptable levels (thus making research practically unfeasible) but also greatly increase the costs and burdens of clinical trials. We propose an alternative method of combining the power of a statistical test and the probability of obtaining adequate precision (the power of the confidence interval) with an a...

  20. OPTIMAL SAMPLE SIZE FOR STATISTICAL ANALYSIS OF WINTER WHEAT QUANTITATIVE TRAITS

    OpenAIRE

    Andrijana Eđed; Dražen Horvat; Zdenko Lončarić

    2009-01-01

    In the planning phase of every research project, particular attention should be dedicated to the estimation of the optimal sample size, aiming to obtain more precise and objective results from statistical analysis. The aim of this paper was to estimate the optimal sample size for wheat yield components (plant height, spike length, number of spikelets per spike, number of grains per spike, weight of grains per spike and 1000-grain weight) for determination of statistically significant differences between two treatme

  1. Evaluation of different sized blood sampling tubes for thromboelastometry, platelet function, and platelet count

    DEFF Research Database (Denmark)

    Andreasen, Jo Bønding; Pistor-Riebold, Thea Unger; Knudsen, Ingrid Hell

    2014-01-01

    Background: To minimise the volume of blood used for diagnostic procedures, especially in children, we investigated whether the size of sample tubes affected whole blood coagulation analyses. Methods: We included 20 healthy individuals for rotational thromboelastometry (RoTEM®) analyses and compa…

  2. Sample size for equivalence trials: a case study from a vaccine lot consistency trial.

    Science.gov (United States)

    Ganju, Jitendra; Izu, Allen; Anemona, Alessandra

    2008-08-30

    For some trials, simple but subtle assumptions can have a profound impact on the size of the trial. A case in point is a vaccine lot consistency (or equivalence) trial. Standard sample size formulas used for designing lot consistency trials rely on only one component of variation, namely, the variation in antibody titers within lots. The other component, the variation in the means of titers between lots, is assumed to be equal to zero. In reality, some amount of variation between lots, however small, will be present even under the best manufacturing practices. Using data from a published lot consistency trial, we demonstrate that when the between-lot variation is only 0.5 per cent of the total variation, the increase in the sample size is nearly 300 per cent when compared with the size assuming that the lots are identical. The increase in the sample size is so pronounced that in order to maintain power one is led to consider a less stringent criterion for demonstration of lot consistency. The appropriate sample size formula that is a function of both components of variation is provided. We also discuss the increase in the sample size due to correlated comparisons arising from three pairs of lots as a function of the between-lot variance.

  3. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  4. A margin based approach to determining sample sizes via tolerance bounds.

    Energy Technology Data Exchange (ETDEWEB)

    Newcomer, Justin T.; Freeland, Katherine Elizabeth

    2013-09-01

    This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.

  5. Sample size calculation for differential expression analysis of RNA-seq data under Poisson distribution.

    Science.gov (United States)

    Li, Chung-I; Su, Pei-Fang; Guo, Yan; Shyr, Yu

    2013-01-01

    Sample size determination is an important issue in the experimental design of biomedical research. Because of the complexity of RNA-seq experiments, however, the field currently lacks a sample size method widely applicable to differential expression studies utilising RNA-seq technology. In this report, we propose several methods for sample size calculation for single-gene differential expression analysis of RNA-seq data under Poisson distribution. These methods are then extended to multiple genes, with consideration for addressing the multiple testing problem by controlling false discovery rate. Moreover, most of the proposed methods allow for closed-form sample size formulas with specification of the desired minimum fold change and minimum average read count, and thus are not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size formulas are presented; the results indicate that our methods work well, with achievement of desired power. Finally, our sample size calculation methods are applied to three real RNA-seq data sets.
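
    The paper's closed-form formulas are not reproduced here; under the same single-gene Poisson assumption, a simulation-based stand-in that searches for the smallest per-group sample size reaching 80% power (with a stringent per-gene α as a crude stand-in for multiple-testing control) might look like this:

        import numpy as np
        from scipy.stats import binomtest

        rng = np.random.default_rng(11)

        def power_single_gene(n_per_group, mean_count, fold_change,
                              alpha=0.001, n_sims=2000):
            """Simulated power of the exact conditional (binomial) test comparing
            Poisson-distributed counts for one gene between two groups."""
            rejections = 0
            for _ in range(n_sims):
                x_ctrl = rng.poisson(mean_count, n_per_group).sum()
                x_trt = rng.poisson(mean_count * fold_change, n_per_group).sum()
                # Conditional on the total, the treated count is Binomial(total, 1/2) under H0
                p = binomtest(x_trt, x_ctrl + x_trt, 0.5).pvalue
                rejections += p < alpha
            return rejections / n_sims

        def smallest_n(mean_count, fold_change, target_power=0.80, **kw):
            n = 2
            while power_single_gene(n, mean_count, fold_change, **kw) < target_power:
                n += 1
            return n

        # Illustrative only: average read count of 5 and a two-fold change
        print(smallest_n(mean_count=5, fold_change=2.0))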

  6. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
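
    The PPQ sample sizes quoted above are reproduced by the usual success-run form of Bayes' theorem, n = ln(1 - C)/ln(R), evaluated at 95% confidence (the 95% confidence level is inferred from the reported numbers):

        from math import ceil, log

        def success_run_sample_size(reliability, confidence=0.95):
            """Bayes success run theorem: consecutive passing units needed to
            demonstrate the given reliability at the given confidence."""
            return ceil(log(1 - confidence) / log(reliability))

        # Reliability levels tied to risk in the abstract: high 99%, medium 95%, low 90%
        for r in (0.99, 0.95, 0.90):
            print(r, success_run_sample_size(r))  # -> 299, 59, 29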

  7. Optimal sample sizes for Welch's test under various allocation and cost considerations.

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2011-12-01

    The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determinations in the context of Welch's test (Biometrika, 29, 350-362, 1938) of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power performance for a fixed cost and (2) to meet a designated power level for the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  8. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    Science.gov (United States)

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  9. SMALL SAMPLE SIZE IN 2X2 CROSS OVER DESIGNS: CONDITIONS OF DETERMINATION

    Directory of Open Access Journals (Sweden)

    B SOLEYMANI

    2001-09-01

    Full Text Available Introduction. Determination of a small sample size is an important matter in some clinical trials, and in cross-over studies, one type of clinical trial, it is even more significant. This article considers the conditions under which a small sample size can be justified in cross-over studies and shows the effect of deviation from normality on the question. Methods. The study concerns 2x2 cross-over designs in which the variable of interest is quantitative and measured on a ratio or interval scale. The approach is based on the distributions of the variable and of the sample mean, the central limit theorem, the method of sample size determination for two groups, and the cumulant or moment generating function. Results. For normal variables, or variables transformable to normality, there are no restricting factors for sample size determination other than the significance level and the power of the test; for non-normal variables, however, the sample size should be large enough to guarantee approximate normality of the distribution of the sample mean. Discussion. In cross-over studies where theory suggests that only a few samples are needed, one should not proceed without considering the practical worth of the results. When determining sample size, it is necessary to consider not only the variance but also the distribution of the variable, particularly its skewness and kurtosis coefficients: the greater the deviation from normality, the larger the required sample. Since most continuous variables in medical studies are close to normally distributed, a small number of samples often seems adequate for convergence of the sample mean to normality.

  10. Classification of video sequences into chosen generalized use classes of target size and lighting level.

    Science.gov (United States)

    Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin

    The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance in influencing the ability to achieve a recognition task. They are: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that would automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% compliance with the end-users' opinion. Lighting levels of the entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a very simple user interface requiring only minimal user interaction.

  11. Radiation inactivation target size of rat adipocyte glucose transporters in the plasma membrane and intracellular pools

    Energy Technology Data Exchange (ETDEWEB)

    Jacobs, D.B.; Berenski, C.J.; Spangler, R.A.; Jung, C.Y.

    1987-06-15

    The in situ assembly states of the glucose transport carrier protein in the plasma membrane and in the intracellular (microsomal) storage pool of rat adipocytes were assessed by studying radiation-induced inactivation of the D-glucose-sensitive cytochalasin B binding activities. High energy radiation inactivated the glucose-sensitive cytochalasin B binding of each of these membrane preparations by reducing the total number of the binding sites without affecting the dissociation constant. The reduction in total number of binding sites was analyzed as a function of radiation dose based on target theory, from which a radiation-sensitive mass (target size) was calculated. When the plasma membranes of insulin-treated adipocytes were used, a target size of approximately 58,000 daltons was obtained. For adipocyte microsomal membranes, we obtained target sizes of approximately 112,000 and 109,000 daltons prior to and after insulin treatment, respectively. In the case of microsomal membranes, however, inactivation data showed anomalously low radiation sensitivities at low radiation doses, which may be interpreted as indicating the presence of a radiation-sensitive inhibitor. These results suggest that the adipocyte glucose transporter occurs as a monomer in the plasma membrane while existing in the intracellular reserve pool either as a homodimer or as a stoichiometric complex with a protein of an approximately equal size.

  12. Radiation inactivation target size of rat adipocyte glucose transporters in the plasma membrane and intracellular pools.

    Science.gov (United States)

    Jacobs, D B; Berenski, C J; Spangler, R A; Jung, C Y

    1987-06-15

    The in situ assembly states of the glucose transport carrier protein in the plasma membrane and in the intracellular (microsomal) storage pool of rat adipocytes were assessed by studying radiation-induced inactivation of the D-glucose-sensitive cytochalasin B binding activities. High energy radiation inactivated the glucose-sensitive cytochalasin B binding of each of these membrane preparations by reducing the total number of the binding sites without affecting the dissociation constant. The reduction in total number of binding sites was analyzed as a function of radiation dose based on target theory, from which a radiation-sensitive mass (target size) was calculated. When the plasma membranes of insulin-treated adipocytes were used, a target size of approximately 58,000 daltons was obtained. For adipocyte microsomal membranes, we obtained target sizes of approximately 112,000 and 109,000 daltons prior to and after insulin treatment, respectively. In the case of microsomal membranes, however, inactivation data showed anomalously low radiation sensitivities at low radiation doses, which may be interpreted as indicating the presence of a radiation-sensitive inhibitor. These results suggest that the adipocyte glucose transporter occurs as a monomer in the plasma membrane while existing in the intracellular reserve pool either as a homodimer or as a stoichiometric complex with a protein of an approximately equal size.

  13. Ideal Particle Sizes for Inhaled Steroids Targeting Vocal Granulomas: Preliminary Study Using Computational Fluid Dynamics.

    Science.gov (United States)

    Perkins, Elizabeth L; Basu, Saikat; Garcia, Guilherme J M; Buckmire, Robert A; Shah, Rupali N; Kimbell, Julia S

    2017-11-01

    Objectives Vocal fold granulomas are benign lesions of the larynx commonly caused by gastroesophageal reflux, intubation, and phonotrauma. Current medical therapy includes inhaled corticosteroids to target inflammation that leads to granuloma formation. Particle sizes of commonly prescribed inhalers range from 1 to 4 µm. The study objective was to use computational fluid dynamics to investigate deposition patterns over a range of particle sizes of inhaled corticosteroids targeting the larynx and vocal fold granulomas. Study Design Retrospective, case-specific computational study. Setting Tertiary academic center. Subjects/Methods A 3-dimensional anatomically realistic computational model of a normal adult airway from mouth to trachea was constructed from 3 computed tomography scans. Virtual granulomas of varying sizes and positions along the vocal fold were incorporated into the base model. Assuming steady-state, inspiratory, turbulent airflow at 30 L/min, computational fluid dynamics was used to simulate respiratory transport and deposition of inhaled corticosteroid particles ranging from 1 to 20 µm. Results Laryngeal deposition in the base model peaked for particle sizes of 8 to 10 µm (2.8%-3.5%). Ideal sizes ranged from 6 to 10, 7 to 13, and 7 to 14 µm for the small, medium, and large granuloma sizes, respectively. Glottic deposition was maximal at 10.8% for 9-µm particles for the large posterior granuloma, 3 times that of the normal model (3.5%). Conclusion As the virtual granuloma size increased and the location became more posterior, glottic deposition and ideal particle size generally increased. This preliminary study suggests that inhalers with larger particle sizes, such as a fluticasone propionate dry-powder inhaler, may improve laryngeal drug deposition. Most commercially available inhalers have smaller particles than suggested here.

  14. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  15. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...

  16. n4Studies: Sample Size Calculation for an Epidemiological Study on a Smart Device

    Directory of Open Access Journals (Sweden)

    Chetta Ngamjarus

    2016-05-01

    Full Text Available Objective: This study aimed to develop a sample size application (called “n4Studies”) for free use on iPhone and Android devices and to compare its sample size functions with other applications and software. Methods: The Objective-C programming language was used to create the application for the iPhone OS (operating system), while JavaScript, jQuery Mobile, PhoneGap and jStat were used to develop it for Android phones. Other sample size applications were searched for in the Apple App and Google Play stores. The applications' characteristics and sample size functions were collected. Spearman's rank correlation was used to investigate the relationship between the number of sample size functions and price. Results: “n4Studies” provides several functions for sample size and power calculations for various epidemiological study designs. It can be downloaded from the Apple App and Google Play stores. Compared with other applications, it covers several more types of epidemiological study designs, and gives similar results for estimation of an infinite/finite population mean and an infinite/finite proportion to GRANMO, for comparing two independent means to BioStats, and for comparing two independent proportions to the EpiCal application. When the same parameters are used, n4Studies gives similar results to STATA, the epicalc package in R, PS, G*Power, and OpenEpi. Conclusion: “n4Studies” can be an alternative tool for calculating sample size. It may be useful to students, lecturers and researchers in conducting their research projects.

  17. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power is gained, so inherently a smaller sample size is needed. This article discusses this reduction in sample size when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
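
    The power gain from order constraints can be illustrated with a toy two-group simulation: testing the directional hypothesis mu1 > mu2 (a one-sided test) reaches a target power with fewer subjects than the unordered two-sided test. This is only a simplified analogue of the article's ANOVA and regression simulations, with made-up effect sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(n, delta=0.5, reps=5000, alpha=0.05, constrained=True):
    """Monte Carlo power of a two-group comparison with/without an order constraint."""
    hits = 0
    for _ in range(reps):
        g1 = rng.normal(delta, 1.0, n)   # group expected to have the larger mean
        g2 = rng.normal(0.0, 1.0, n)
        t, p = stats.ttest_ind(g1, g2)
        if constrained:                  # reject only in the hypothesized direction
            hits += (t > 0) and (p / 2 < alpha)
        else:                            # unconstrained two-sided test
            hits += p < alpha
    return hits / reps

for n in (40, 50, 60, 70):
    print(n, round(power(n, constrained=True), 3), round(power(n, constrained=False), 3))
```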

  18. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of an initial sample, using a whitebark pine population as the example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the underlying population with a probability higher than 95%. Determining the lower limit of the representative sample size that guarantees satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, No. OI-173011, No. TR-37002 and No. III-43007]

  19. miR-11 regulates pupal size of Drosophila melanogaster via directly targeting Ras85D.

    Science.gov (United States)

    Li, Yao; Li, Shengjie; Jin, Ping; Chen, Liming; Ma, Fei

    2017-01-01

    MicroRNAs play diverse roles in various physiological processes during Drosophila development. In the present study, we reported that miR-11 regulates pupal size during Drosophila metamorphosis via targeting Ras85D with the following evidences: pupal size was increased in the miR-11 deletion mutant; restoration of miR-11 in the miR-11 deletion mutant rescued the increased pupal size phenotype observed in the miR-11 deletion mutant; ectopic expression of miR-11 in brain insulin-producing cells (IPCs) and whole body shows consistent alteration of pupal size; Dilps and Ras85D expressions were negatively regulated by miR-11 in vivo; miR-11 targets Ras85D through directly binding to Ras85D 3'-untranslated region in vitro; removal of one copy of Ras85D in the miR-11 deletion mutant rescued the increased pupal size phenotype observed in the miR-11 deletion mutant. Thus, our current work provides a novel mechanism of pupal size determination by microRNAs during Drosophila melanogaster metamorphosis. Copyright © 2017 the American Physiological Society.

  20. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

    Science.gov (United States)

    Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

    2013-08-01

    Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
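
    One way to write an approximation consistent with the proportionalities stated above (assuming a standardized exposure and asymptotic normality; the exact constants in the published formulas may differ) is n ≈ (z_{1-α/2} + z_{1-β})² σ²_resid / (β² ρ²_GX), which is easy to compute directly:

```python
from scipy.stats import norm

def mr_sample_size(beta, rho_gx, resid_var_y, alpha=0.05, power=0.8):
    """Approximate n for a one-instrument Mendelian randomization study.

    beta        : causal effect of the exposure on the outcome to be detected
    rho_gx      : correlation between the genetic instrument and the exposure
    resid_var_y : outcome variance after removing the effect of the exposure
    Assumes a standardized exposure (unit variance); a sketch, not the paper's formula.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 * resid_var_y / (beta ** 2 * rho_gx ** 2)

# Example: weak instrument (rho = 0.1), modest effect, unit residual variance.
print(round(mr_sample_size(beta=0.2, rho_gx=0.1, resid_var_y=1.0)))
```

    Halving rho_gx quadruples the required sample size, which is the inverse-square dependence on the instrument-exposure correlation noted in the abstract.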

  1. Debba China presentation on optimal field sampling for exploration targets and geochemicals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available A presentation was done at the Chinese Academy of Geological Science in October 2008 on optimal field sampling for both exploration targets and sampling for geochemicals in mine tailing areas...

  2. [Explanation of samples sizes in current biomedical journals: an irrational requirement].

    Science.gov (United States)

    Silva Ayçaguer, Luis Carlos; Alonso Galbán, Patricia

    2013-01-01

    To discuss the theoretical relevance of current requirements for explanations of the sample sizes employed in published studies, and to assess the extent to which these requirements are currently met by authors and demanded by referees and editors. A literature review was conducted to gain insight into and critically discuss the possible rationale underlying the requirement of justifying sample sizes. A descriptive bibliometric study was then carried out based on the original studies published in the six journals with the highest impact factor in the field of health in 2009. All the arguments used to support the requirement of an explanation of sample sizes are feeble, and there are several reasons why they should not be endorsed. These instructions are neglected in most of the studies published in the current literature with the highest impact factor. In 56% (95%CI: 52-59) of the articles, the sample size used was not substantiated, and only 27% (95%CI: 23-30) met all the requirements contained in the guidelines adhered to by the journals studied. Based on this study, we conclude that there are no convincing arguments justifying the requirement for an explanation of how the sample size was reached in published articles. There is no sound basis for this requirement, which not only does not promote the transparency of research reports but rather contributes to undermining it. Copyright © 2011 SESPAS. Published by Elsevier Espana. All rights reserved.

  3. Sample Size for Assessing Agreement between Two Methods of Measurement by Bland-Altman Method.

    Science.gov (United States)

    Lu, Meng-Jie; Zhong, Wei-Hua; Liu, Yu-Xiu; Miao, Hua-Zhang; Li, Yong-Chang; Ji, Mu-Huo

    2016-11-01

    The Bland-Altman method has been widely used for assessing agreement between two methods of measurement. However, it remains unsolved about sample size estimation. We propose a new method of sample size estimation for Bland-Altman agreement assessment. According to the Bland-Altman method, the conclusion on agreement is made based on the width of the confidence interval for LOAs (limits of agreement) in comparison to predefined clinical agreement limit. Under the theory of statistical inference, the formulae of sample size estimation are derived, which depended on the pre-determined level of α, β, the mean and the standard deviation of differences between two measurements, and the predefined limits. With this new method, the sample sizes are calculated under different parameter settings which occur frequently in method comparison studies, and Monte-Carlo simulation is used to obtain the corresponding powers. The results of Monte-Carlo simulation showed that the achieved powers could coincide with the pre-determined level of powers, thus validating the correctness of the method. The method of sample size estimation can be applied in the Bland-Altman method to assess agreement between two methods of measurement.
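
    The decision rule described above (agreement concluded when the confidence limits of both limits of agreement fall inside a predefined clinical limit) can be checked by simulation. The sketch below is a Monte Carlo stand-in for the authors' closed-form formulae, using the standard large-sample approximation for the standard error of a limit of agreement and hypothetical inputs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def ba_power(n, mu_d, sd_d, clin_limit, alpha=0.05, reps=5000):
    """Probability that the CIs of both limits of agreement lie within +/- clin_limit."""
    z = stats.norm.ppf(0.975)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    ok = 0
    for _ in range(reps):
        d = rng.normal(mu_d, sd_d, n)            # simulated paired differences
        m, s = d.mean(), d.std(ddof=1)
        se_loa = s * np.sqrt(1 / n + z**2 / (2 * (n - 1)))  # approximate SE of a LOA
        upper = (m + z * s) + t * se_loa          # upper bound of the upper LOA
        lower = (m - z * s) - t * se_loa          # lower bound of the lower LOA
        ok += (upper < clin_limit) and (lower > -clin_limit)
    return ok / reps

# Hypothetical inputs: mean difference 0.1, SD of differences 0.5, clinical limit 1.5.
for n in (30, 50, 80):
    print(n, ba_power(n, mu_d=0.1, sd_d=0.5, clin_limit=1.5))
```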

  4. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at a 95% confidence level. Sample sizes at the 90% and 99% confidence levels can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations. It can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard results are dichotomous.
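
    The formula "already available in the literature" for this setting is commonly written as n = z² · Sn(1 − Sn) / (d² · prevalence) for sensitivity, with (1 − prevalence) in the denominator for specificity; the sketch below assumes that form and is not a reconstruction of the nomogram itself:

```python
import math

def n_for_sensitivity(sn, precision, prevalence, conf=0.95):
    """Total subjects so the sensitivity estimate has the given absolute precision."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[conf]
    return math.ceil(z**2 * sn * (1 - sn) / (precision**2 * prevalence))

def n_for_specificity(sp, precision, prevalence, conf=0.95):
    """Same idea; only the disease-free fraction (1 - prevalence) contributes."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[conf]
    return math.ceil(z**2 * sp * (1 - sp) / (precision**2 * (1 - prevalence)))

# Example: anticipated sensitivity 0.85, specificity 0.90, +/-0.05 precision, 20% prevalence.
print(n_for_sensitivity(0.85, 0.05, 0.20), n_for_specificity(0.90, 0.05, 0.20))
```

    The 0.70 and 1.75 multipliers quoted in the abstract are consistent with this form, since (1.645/1.96)² ≈ 0.70 and (2.576/1.96)² ≈ 1.73.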

  5. Exploratory factor analysis with small sample sizes: a comparison of three approaches.

    Science.gov (United States)

    Jung, Sunho

    2013-07-01

    Exploratory factor analysis (EFA) has emerged in the field of animal behavior as a useful tool for determining and assessing latent behavioral constructs. Because the small sample size problem often occurs in this field, a traditional approach, unweighted least squares, has been considered the most feasible choice for EFA. Two new approaches were recently introduced in the statistical literature as viable alternatives to EFA when sample size is small: regularized exploratory factor analysis and generalized exploratory factor analysis. A simulation study is conducted to evaluate the relative performance of these three approaches in terms of factor recovery under various experimental conditions of sample size, degree of overdetermination, and level of communality. In this study, overdetermination and sample size are the meaningful conditions in differentiating the performance of the three approaches in factor recovery. Specifically, when there are a relatively large number of factors, regularized exploratory factor analysis tends to recover the correct factor structure better than the other two approaches. Conversely, when few factors are retained, unweighted least squares tends to recover the factor structure better. Finally, generalized exploratory factor analysis exhibits very poor performance in factor recovery compared to the other approaches. This tendency is particularly prominent as sample size increases. Thus, generalized exploratory factor analysis may not be a good alternative to EFA. Regularized exploratory factor analysis is recommended over unweighted least squares unless small expected number of factors is ensured. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. A simulation study provided sample size guidance for differential item functioning (DIF) studies using short scales.

    Science.gov (United States)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Gundy, Chad; Koller, Michael; Petersen, Morten A; Sprangers, Mirjam A G

    2009-03-01

    Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors including scale length affect analysis of DIF by ordinal logistic regression. Simulated data, representative of HRQoL scales with four-category items, were generated. The power and type I error rates of the DIF method were then investigated when, respectively, DIF was deliberately introduced and when no DIF was added. The sample size, scale length, floor effects (FEs) and significance level were varied. When there was no DIF, type I error rates were close to 5%. Detecting moderate uniform DIF in a two-item scale required a sample size of 300 per group for adequate (>80%) power. For longer scales, a sample size of 200 was adequate. Considerably larger sample sizes were required to detect nonuniform DIF, when there were extreme FEs or when a reduced type I error rate was required. The impact of the number of items in the scale was relatively small. Ordinal logistic regression successfully detects DIF for HRQoL instruments with short scales. Sample size guidelines are provided.

  7. Target size of calcium pump protein from skeletal muscle sarcoplasmic reticulum.

    Science.gov (United States)

    Hymel, L; Maurer, A; Berenski, C; Jung, C Y; Fleischer, S

    1984-04-25

    The oligomeric size of calcium pump protein (CPP) in fast skeletal muscle sarcoplasmic reticulum membrane was determined using target theory analysis of radiation inactivation data. There was a parallel decrease of Ca2+-ATPase and calcium pumping activities with increasing radiation dose. The loss of staining intensity of the CPP band, observed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, also correlated directly with the loss of activity. The target size molecular weight of the CPP in the normal sarcoplasmic reticulum membrane ranged between 210,000 and 250,000, which is consistent with a dimeric structure. Essentially the same size is obtained for the non-phosphorylated CPP or for the phosphoenzyme form generated from either ATP (E1 state) or inorganic phosphate (E2 state). Hence, the oligomeric state of the pump does not appear to change during the catalytic cycle. Similar results were obtained with reconstituted sarcoplasmic reticulum membrane vesicles with different lipid to protein ratios. We conclude that the CPP is a dimer in both native and reconstituted sarcoplasmic reticulum membranes. The target size of the calcium-binding protein (calsequestrin) was found to be 50,000 daltons, approximating a monomer.

  8. Monte Carlo approaches for determining power and sample size in low-prevalence applications.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D; Wagner, Bruce A

    2007-11-15

    The prevalence of disease in many populations is often low. For example, the prevalence of tuberculosis, brucellosis, and bovine spongiform encephalopathy range from 1 per 100,000 to less than 1 per 1,000,000 in many countries. When an outbreak occurs, epidemiological investigations often require comparing the prevalence in an exposed population with that of an unexposed population. To determine if the level of disease in the two populations is significantly different, the epidemiologist must consider the test to be used, desired power of the test, and determine the appropriate sample size for both the exposed and unexposed populations. Commonly available software packages provide estimates of the required sample sizes for this application. This study shows that these estimated sample sizes can exceed the necessary number of samples by more than 35% when the prevalence is low. We provide a Monte Carlo-based solution and show that in low-prevalence applications this approach can lead to reductions in the total samples size of more than 10,000 samples.
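
    A minimal sketch of the Monte Carlo idea (not the authors' procedure, and with hypothetical prevalences) is to simulate case counts in the exposed and unexposed populations and record how often the chosen test rejects:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def mc_power(n_exp, n_unexp, p_exp, p_unexp, alpha=0.05, reps=2000):
    """Monte Carlo power of Fisher's exact test for comparing two low prevalences."""
    rejections = 0
    for _ in range(reps):
        x_exp = rng.binomial(n_exp, p_exp)
        x_unexp = rng.binomial(n_unexp, p_unexp)
        table = [[x_exp, n_exp - x_exp], [x_unexp, n_unexp - x_unexp]]
        _, p = stats.fisher_exact(table)
        rejections += p < alpha
    return rejections / reps

# Hypothetical low-prevalence scenario: 5 versus 1 cases per 10,000.
for n in (20000, 40000, 60000):
    print(n, mc_power(n, n, 5e-4, 1e-4))
```

    Increasing the simulated sample size until the estimated power reaches the target (e.g., 0.80) gives the sample size directly, without the conservative approximations built into standard formulas.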

  9. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    Full Text Available The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
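
    The underlying simulation can be sketched in a few lines: assume category probabilities, draw the attributes contributed by n participants, and estimate the probability that every category reaches a minimum count. The category probabilities and the number of constructs per grid below are hypothetical, and this is not the gridsampler implementation itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def coverage_prob(n_people, attrs_per_person, category_probs, min_per_cat=5, reps=2000):
    """Probability that every category receives at least min_per_cat attributes."""
    category_probs = np.asarray(category_probs)
    hits = 0
    for _ in range(reps):
        counts = rng.multinomial(n_people * attrs_per_person, category_probs)
        hits += (counts >= min_per_cat).all()
    return hits / reps

# Hypothetical: 8 categories with unequal salience, 7 constructs elicited per grid.
probs = [0.30, 0.20, 0.15, 0.10, 0.10, 0.07, 0.05, 0.03]
for n in (10, 20, 30, 40):
    print(n, coverage_prob(n, 7, probs))
```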

  10. [On the impact of sample size calculation and power in clinical research].

    Science.gov (United States)

    Held, Ulrike

    2014-10-01

    The aim of a clinical trial is to judge the efficacy of a new therapy or drug. In the planning phase of the study, the calculation of the necessary sample size is crucial in order to obtain a meaningful result. The study design, the expected treatment effect in outcome and its variability, power and level of significance are factors which determine the sample size. It is often difficult to fix these parameters prior to the start of the study, but related papers from the literature can be helpful sources for the unknown quantities. For scientific as well as ethical reasons it is necessary to calculate the sample size in advance in order to be able to answer the study question.

  11. Species-genetic diversity correlations in habitat fragmentation can be biased by small sample sizes.

    Science.gov (United States)

    Nazareno, Alison G; Jump, Alistair S

    2012-06-01

    Predicted parallel impacts of habitat fragmentation on genes and species lie at the core of conservation biology, yet tests of this rule are rare. In a recent article in Ecology Letters, Struebig et al. (2011) report that declining genetic diversity accompanies declining species diversity in tropical forest fragments. However, this study estimates diversity in many populations through extrapolation from very small sample sizes. Using the data of this recent work, we show that results estimated from the smallest sample sizes drive the species-genetic diversity correlation (SGDC), owing to a false-positive association between habitat fragmentation and loss of genetic diversity. Small sample sizes are a persistent problem in habitat fragmentation studies, the results of which often do not fit simple theoretical models. It is essential, therefore, that data assessing the proposed SGDC are sufficient in order that conclusions be robust.

  12. Threshold-dependent sample sizes for selenium assessment with stream fish tissue.

    Science.gov (United States)

    Hitt, Nathaniel P; Smith, David R

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α=0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites

  13. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of the composites.
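
    A simplified analogue of the power calculation described in these two records (a one-sided t-test in place of the authors' bootstrap procedure, and a fixed coefficient of variation in place of their fitted mean-to-variance relationship) can be simulated as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def se_power(n_fish, threshold, true_mean, cv=0.4, alpha=0.05, reps=5000):
    """Probability of detecting a true mean Se concentration above the threshold
    when fish tissue Se is drawn from a gamma distribution."""
    var = (cv * true_mean) ** 2
    shape, scale = true_mean**2 / var, var / true_mean
    hits = 0
    for _ in range(reps):
        x = rng.gamma(shape, scale, n_fish)
        _, p = stats.ttest_1samp(x, popmean=threshold, alternative='greater')
        hits += p < alpha
    return hits / reps

# Hypothetical: threshold 4 mg Se/kg, true mean 5 mg Se/kg, 8 fish per site.
print(se_power(n_fish=8, threshold=4.0, true_mean=5.0))
```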

  14. Particle size effect on velocity of gold particle embedded laser driven plastic targets

    Directory of Open Access Journals (Sweden)

    Dhareshwar L.J.

    2013-11-01

    Full Text Available A scheme to enhance the target foil velocity has been investigated for a direct drive inertial fusion target. Polymer PVA (polyvinyl alcohol, or (C2H4O)n) target foils of thickness 15–20 μm were used in plain form and also embedded with gold in nano-particle (Au-np) or micro-particle (Au-mp) form. Nano-particles were 20–50 nm and micro-particles 2–3 μm in size. A 17% higher target velocity was measured for foils embedded with nano-particle gold (Au-np) as compared to targets embedded with micro-particle gold (Au-mp). The weight of gold in both cases was in the range 40–55% of the full target weight (an atomic percentage of about 22%). Experiments were performed with the single beam of the Prague Asterix Laser System (PALS) at 0.43 μm wavelength (3ω of the fundamental wavelength), 120 J energy and 300 ps pulse duration. The laser intensity on the target was about 10^15 W/cm^2. A simple model has been proposed to explain the experimental results.

  15. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    Science.gov (United States)

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit under-estimated when the adjusted sample size function is used. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.

  16. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have previously been linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  17. Optimal sample size determinations from an industry perspective based on the expected value of information.

    Science.gov (United States)

    Willan, Andrew R

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as type I and II errors. As an alternative, taking a societal perspective, and using the expected value of information based on Bayesian decision theory, a number of authors have recently shown how to determine the sample size that maximizes the expected net gain, i.e., the difference between the cost of the trial and the value of the information gained from the results. Other authors have proposed Bayesian methods to determine sample sizes from an industry perspective. The purpose of this article is to propose a Bayesian approach to sample size calculations from an industry perspective that attempts to determine the sample size that maximizes expected profit. A model is proposed for expected total profit that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, discount rate, and the relationship between the results and the probability of regulatory approval. The expected value of information provided by trial data is related to the increase in expected profit from increasing the probability of regulatory approval. The methods are applied to an example, including an examination of robustness. The model is extended to consider market share as a function of observed treatment effect. The use of methods based on the expected value of information can provide, from an industry perspective, robust sample size solutions that maximize the difference between the expected cost of the trial and the expected value of information gained from the results. The method is only as good as the model for expected total profit. Although the model probably has all the right elements, it assumes that market share, per-patient profit, and incidence are insensitive to trial results. The method relies on the central limit theorem which assumes that the sample sizes involved ensure that the relevant test statistics

  18. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  19. Determining optimal sample sizes for multi-stage randomized clinical trials using value of information methods.

    Science.gov (United States)

    Willan, Andrew; Kowgier, Matthew

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the cost of doing the trial and the value of the information gained from the results. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.

  20. A simulation-based sample size calculation method for pre-clinical tumor xenograft experiments.

    Science.gov (United States)

    Wu, Jianrong; Yang, Shengping

    2017-04-07

    Pre-clinical tumor xenograft experiments usually require a small sample size that is rarely greater than 20, and data generated from such experiments very often do not have censored observations. Many statistical tests can be used for analyzing such data, but most of them were developed based on large sample approximation. We demonstrate that the type-I error rates of these tests can substantially deviate from the designated rate, especially when the data to be analyzed has a skewed distribution. Consequently, the sample size calculated based on these tests can be erroneous. We propose a modified signed log-likelihood ratio test (MSLRT) to meet the type-I error rate requirement for analyzing pre-clinical tumor xenograft data. The MSLRT has a consistent and symmetric type-I error rate that is very close to the designated rate for a wide range of sample sizes. By simulation, we generated a series of sample size tables based on scenarios commonly expected in tumor xenograft experiments, and we expect that these tables can be used as guidelines for making decisions on the numbers of mice used in tumor xenograft experiments.

  1. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
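
    The approximations most often quoted from this line of work (reproduced here from memory, so they should be checked against the paper and its accompanying spreadsheet before use) are, for a trial reporting the minimum a, median m, maximum b and size n: mean ≈ (a + 2m + b)/4 and SD ≈ (b − a) / (2·Φ⁻¹((n − 0.375)/(n + 0.25))); and, when the quartiles q1 and q3 are reported instead of the range: mean ≈ (q1 + m + q3)/3 and SD ≈ (q3 − q1) / (2·Φ⁻¹((0.75n − 0.125)/(n + 0.25))). A direct implementation:

```python
from scipy.stats import norm

def mean_sd_from_range(a, m, b, n):
    """Estimate mean and SD when the minimum, median, maximum and n are reported."""
    mean = (a + 2 * m + b) / 4
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_iqr(q1, m, q3, n):
    """Estimate mean and SD when the quartiles, median and n are reported."""
    mean = (q1 + m + q3) / 3
    sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd

# Example trial summary: median 50, range 30-80, quartiles 44 and 58, n = 60.
print(mean_sd_from_range(30, 50, 80, 60))
print(mean_sd_from_iqr(44, 50, 58, 60))
```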

  2. Sample size bounding and context ranking as approaches to the HRA data problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, Bernhard

    2004-02-01

    This paper presents a technique denoted as sub sample size bounding (SSSB) useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications for human reliability analysis (HRA) are emphasized in the presentation of the technique. Exemplified by a sample of 180 abnormal event sequences, it is outlined how SSSB can provide viable input for the quantification of errors of commission (EOCs)

  3. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  4. SAMPLE SIZE DETERMINATION IN NON-RADOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA

    OpenAIRE

    Faghihzadeh, S.; M. Rahgozar

    2003-01-01

    Introduction: In survival analysis, determination of a sufficient sample size to achieve suitable statistical power is important. In both parametric and non-parametric methods of classical statistics, random selection of samples is a basic condition. In practice, however, random allocation is impossible in most clinical trials and health surveys. Fixed-effect multiple linear regression analysis covers this need, and this feature can be extended to survival regression analysis. This paper is the resul...

  5. Concurrent directional adaptation of reactive saccades and hand movements to target displacements of different size.

    Science.gov (United States)

    Borisova, Steliana; Bock, Otmar; Grigorova, Valentina

    2014-01-01

    When eye and hand movements are concurrently aimed at double-step targets that call for equal and opposite changes of response direction (-10° for the eyes, +10° for the hand), adaptive recalibration of both motor systems is strongly attenuated; instead, hand but not eye movements are changed by corrective strategies (V. Grigorova et al., 2013a). The authors introduce a complementary paradigm, where double-step targets call for a -10° change for eye and a -30° change for hand movements. Compared to control subjects adapting only the eyes or only the hand, adaptive improvements were comparable for the eyes but were twice as large for the hand; in contrast, eye and hand aftereffects were comparable to those in control subjects. The authors concluded that concurrent exposure of eyes and hand to steps of the same direction but different size facilitated hand strategies, but did not affect recalibration. This finding, together with the previous one (V. Grigorova et al., 2013a), suggests that concurrent adaptation of eyes and hand reveals different mechanisms of recalibration for step sign and step size, which are shared by reactive saccades and hand movements. However, the hand mostly benefits from strategies provoked by the difference in target step sign and size.

  6. SCF(SAP) controls organ size by targeting PPD proteins for degradation in Arabidopsis thaliana.

    Science.gov (United States)

    Wang, Zhibiao; Li, Na; Jiang, Shan; Gonzalez, Nathalie; Huang, Xiahe; Wang, Yingchun; Inzé, Dirk; Li, Yunhai

    2016-04-06

    Control of organ size by cell proliferation and growth is a fundamental process, but the mechanisms that determine the final size of organs are largely elusive in plants. We have previously revealed that the ubiquitin receptor DA1 regulates organ size by repressing cell proliferation in Arabidopsis. Here we report that a mutant allele of STERILE APETALA (SAP) suppresses the da1-1 mutant phenotype. We show that SAP is an F-box protein that forms part of a SKP1/Cullin/F-box E3 ubiquitin ligase complex and controls organ size by promoting the proliferation of meristemoid cells. Genetic analyses suggest that SAP may act in the same pathway with PEAPOD1 and PEAPOD2, which are negative regulators of meristemoid proliferation, to control organ size, but does so independently of DA1. Further results reveal that SAP physically associates with PEAPOD1 and PEAPOD2, and targets them for degradation. These findings define a molecular mechanism by which SAP and PEAPOD control organ size.

  7. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  8. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  9. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  10. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  11. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  12. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    Science.gov (United States)

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  13. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    Science.gov (United States)

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  14. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    Science.gov (United States)

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  15. Precise confidence intervals of regression-based reference limits: Method comparisons and sample size requirements.

    Science.gov (United States)

    Shieh, Gwowen

    2017-12-01

    Covariate-dependent reference limits have been extensively applied in biology and medicine for determining the substantial magnitude and relative importance of quantitative measurements. Confidence interval and sample size procedures are available for studying regression-based reference limits. However, the existing popular methods employ different technical simplifications and are applicable only in certain limited situations. This paper describes exact confidence intervals of regression-based reference limits and compares the exact approach with the approximate methods under a wide range of model configurations. Using the ratio between the widths of confidence interval and reference interval as the relative precision index, optimal sample size procedures are presented for precise interval estimation under expected ratio and tolerance probability considerations. Simulation results show that the approximate interval methods using normal distribution have inaccurate confidence limits. The exact confidence intervals dominate the approximate procedures in one- and two-sided coverage performance. Unlike the current simplifications, the proposed sample size procedures integrate all key factors including covariate features in the optimization process and are suitable for various regression-based reference limit studies with potentially diverse configurations. The exact interval estimation has theoretical and practical advantages over the approximate methods. The corresponding sample size procedures and computing algorithms are also presented to facilitate the data analysis and research design of regression-based reference limits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

  17. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and public have paid more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed. We hope the paper will be useful to medical researchers and pharmaceutical scientists engaged in Chinese medicine research.
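
    As a rough illustration of the kind of basic calculation such evaluations build on (a generic two-proportion comparison with hypothetical response rates, not a method specific to traditional Chinese medicine):

        from scipy import stats

        def n_per_group(p1, p2, alpha=0.05, power=0.8):
            """Normal-approximation sample size per group for comparing two proportions."""
            z_a = stats.norm.ppf(1 - alpha / 2)
            z_b = stats.norm.ppf(power)
            return (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

        # e.g. detecting an improvement in response rate from 60% to 75%:
        print(round(n_per_group(0.60, 0.75)))   # roughly 150 per group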

  18. Influence of tree spatial pattern and sample plot type and size on inventory

    Science.gov (United States)

    John-Pascall Berrill; Kevin L. O' Hara

    2012-01-01

    Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

  19. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Vol. 49, No. 4 (2015), pp. 239-249. ISSN 0278-6826. R&D Projects: GA ČR(CZ) GBP503/12/G147. Institutional support: RVO:67985858. Keywords: filters; size-resolved penetration; atmospheric aerosol sampling. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 1.953, year: 2015

  20. Estimating the Size of a Large Network and its Communities from a Random Sample.

    Science.gov (United States)

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
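
    PULSE itself is more involved; as a rough intuition for how total degrees and within-sample degrees carry information about the population size, here is a naive method-of-moments sketch on a simulated SBM (networkx assumed; all block sizes and probabilities hypothetical):

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)

        # population graph from a stochastic block model
        sizes = [300, 200, 100]
        probs = [[0.05, 0.01, 0.01],
                 [0.01, 0.06, 0.01],
                 [0.01, 0.01, 0.08]]
        G = nx.stochastic_block_model(sizes, probs, seed=0)
        N_true = G.number_of_nodes()

        # sample W uniformly; observe the induced subgraph and each sampled vertex's total degree
        m = 80
        W = rng.choice(N_true, size=m, replace=False)
        GW = G.subgraph(W)
        total_deg = sum(G.degree(v) for v in W)
        within_deg = sum(GW.degree(v) for v in W)

        # method of moments: E[within-sample degree] = total degree * (m - 1) / (N - 1)
        N_hat = 1 + (m - 1) * total_deg / within_deg
        print(N_true, round(N_hat))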

  1. Sample size calculations for evaluating treatment policies in multi-stage designs.

    Science.gov (United States)

    Dawson, Ree; Lavori, Philip W

    2010-12-01

    Sequential multiple assignment randomized (SMAR) designs are used to evaluate treatment policies, also known as adaptive treatment strategies (ATS). The determination of SMAR sample sizes is challenging because of the sequential and adaptive nature of ATS, and the multi-stage randomized assignment used to evaluate them. We derive sample size formulae appropriate for the nested structure of successive SMAR randomizations. This nesting gives rise to ATS that have overlapping data, and hence between-strategy covariance. We focus on the case when covariance is substantial enough to reduce sample size through improved inferential efficiency. Our design calculations draw upon two distinct methodologies for SMAR trials, using the equality of the optimal semi-parametric and Bayesian predictive estimators of standard error. This 'hybrid' approach produces a generalization of the t-test power calculation that is carried out in terms of effect size and regression quantities familiar to the trialist. Simulation studies support the reasonableness of underlying assumptions as well as the adequacy of the approximation to between-strategy covariance when it is substantial. Investigation of the sensitivity of formulae to misspecification shows that the greatest influence is due to changes in effect size, which is an a priori clinical judgment on the part of the trialist. We have restricted simulation investigation to SMAR studies of two and three stages, although the methods are fully general in that they apply to 'K-stage' trials. Practical guidance is needed to allow the trialist to size a SMAR design using the derived methods. To this end, we define ATS to be 'distinct' when they differ by at least the (minimal) size of effect deemed to be clinically relevant. Simulation results suggest that the number of subjects needed to distinguish distinct strategies will be significantly reduced by adjustment for covariance only when small effects are of interest.
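
    For orientation, the familiar two-arm calculation that the hybrid formula generalizes can be sketched as below (normal approximation, hypothetical effect size; the between-strategy covariance adjustment derived in the paper is not shown):

        from scipy import stats

        def n_per_arm(effect_size, alpha=0.05, power=0.8):
            """Normal-approximation sample size per arm for a two-sided two-sample comparison."""
            z_a = stats.norm.ppf(1 - alpha / 2)
            z_b = stats.norm.ppf(power)
            return 2 * (z_a + z_b) ** 2 / effect_size ** 2

        # e.g. n_per_arm(0.5) -> about 63 subjects per strategy before any covariance adjustment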

  2. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to

  3. Percolating macropore networks in tilled topsoil: effects of sample size, minimum pore thickness and soil type

    Science.gov (United States)

    Jarvis, Nicholas; Larsbo, Mats; Koestel, John; Keck, Hannes

    2017-04-01

    The long-range connectivity of macropore networks may exert a strong control on near-saturated and saturated hydraulic conductivity and the occurrence of preferential flow through soil. It has been suggested that percolation concepts may provide a suitable theoretical framework to characterize and quantify macropore connectivity, although this idea has not yet been thoroughly investigated. We tested the applicability of percolation concepts to describe macropore networks quantified by X-ray scanning at a resolution of 0.24 mm in eighteen cylinders (20 cm diameter and height) sampled from the ploughed layer of four soils of contrasting texture in east-central Sweden. The analyses were performed for sample sizes ("regions of interest", ROI) varying between 3 and 12 cm in cube side-length and for minimum pore thicknesses ranging between image resolution and 1 mm. Finite sample size effects were clearly found for ROI's of cube side-length smaller than ca. 6 cm. For larger sample sizes, the results showed the relevance of percolation concepts to soil macropore networks, with a close relationship found between imaged porosity and the fraction of the pore space which percolated (i.e. was connected from top to bottom of the ROI). The percolating fraction increased rapidly as a function of porosity above a small percolation threshold (1-4%). This reflects the ordered nature of the pore networks. The percolation relationships were similar for all four soils. Although pores larger than 1 mm appeared to be somewhat better connected, only small effects of minimum pore thickness were noted across the range of tested pore sizes. The utility of percolation concepts to describe the connectivity of more anisotropic macropore networks (e.g. in subsoil horizons) should also be tested, although with current X-ray scanning equipment it may prove difficult in many cases to analyze sufficiently large samples that would avoid finite size effects.
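
    The percolation check described here reduces to connected-component labelling of the binary pore image; a compact sketch with a synthetic array (scipy assumed; real regions of interest come from the X-ray scans):

        import numpy as np
        from scipy import ndimage

        def percolating_fraction(pores):
            """Fraction of pore voxels connected from the top to the bottom of a binary 3-D array."""
            if pores.sum() == 0:
                return 0.0
            labels, _ = ndimage.label(pores)                 # 6-connectivity by default
            spanning = np.intersect1d(np.unique(labels[0]), np.unique(labels[-1]))
            spanning = spanning[spanning != 0]               # drop the background label
            return np.isin(labels, spanning).sum() / pores.sum()

        # synthetic example: a random field at 5 % porosity in a 60^3 voxel cube
        rng = np.random.default_rng(3)
        pores = rng.random((60, 60, 60)) < 0.05
        print(percolating_fraction(pores))
        # a random field this sparse rarely percolates; the ordered macropore networks described
        # above percolate at only a few per cent porosity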

  4. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  5. Performance of a reciprocal shaker in mechanical dispersion of soil samples for particle-size analysis

    Directory of Open Access Journals (Sweden)

    Thayse Aparecida Dourado

    2012-08-01

    Full Text Available The dispersion of the samples in soil particle-size analysis is a fundamental step, which is commonly achieved with a combination of chemical agents and mechanical agitation. The purpose of this study was to evaluate the efficiency of a low-speed reciprocal shaker for the mechanical dispersion of soil samples of different textural classes. The particle size of 61 soil samples was analyzed in four replications, using the pipette method to determine the clay fraction and sieving to determine coarse, fine and total sand fractions. The silt content was obtained by difference. To evaluate the performance, the results of the reciprocal shaker (RSh) were compared with data of the same soil samples available in reports of the Proficiency testing for Soil Analysis Laboratories of the Agronomic Institute of Campinas (Prolab/IAC). The accuracy was analyzed based on the maximum and minimum values defining the confidence intervals for the particle-size fractions of each soil sample. Graphical indicators were also used for data comparison, based on dispersion and linear adjustment. The descriptive statistics indicated predominantly low variability in more than 90 % of the results for sand, medium-textured and clay samples, and for 68 % of the results for heavy clay samples, indicating satisfactory repeatability of measurements with the RSh. Medium variability was frequently associated with silt, followed by the fine sand fraction. The sensitivity analyses indicated an accuracy of 100 % for the three main separates (total sand, silt and clay) in all 52 samples of the textural classes heavy clay, clay and medium. For the nine sand soil samples, the average accuracy was 85.2 %; highest deviations were observed for the silt fraction. In relation to the linear adjustments, the correlation coefficients of 0.93 (silt) or > 0.93 (total sand and clay), as well as the differences between the angular coefficients and the unit < 0.16, indicated a high correlation between the

  6. Backward Planetary Protection Issues and Possible Solutions for Icy Plume Sample Return Missions from Astrobiological Targets

    Science.gov (United States)

    Yano, Hajime; McKay, Christopher P.; Anbar, Ariel; Tsou, Peter

    While this is an ideal specification, it far exceeds the current PPP requirements for Category-V “restricted Earth return”, which typically center on a probability of escape of a biologically active particle (e.g., 50 nm diameter). Particles of this size (orders of magnitude larger than a helium atom) are not volatile and generally “sticky” toward surfaces; the mobility of viruses and biomolecules requires aerosolization. Thus, meeting the planetary protection challenge does not require a hermetic seal. So far, only a handful of robotic missions have accomplished deep space sample returns, i.e., Genesis, Stardust and Hayabusa. This year, Hayabusa-2 will be launched and OSIRIS-REx will follow in a few years. All of these missions are classified as “unrestricted Earth return” by the COSPAR PPP recommendation. Nevertheless, scientific requirements of organic contamination control have been implemented in all WBS regarding the sampling mechanism and Earth return capsule of Hayabusa-2. While the Genesis, Stardust and OSIRIS-REx capsules “breathe” terrestrial air as they re-enter Earth’s atmosphere, a temporary “air-tight” design was already achieved by the Hayabusa-1 sample container using a double O-ring seal, and that for Hayabusa-2 will retain noble gases and other gases released from the returned solid samples using metal seal technology. After return, these gases can be collected through a filtered needle interface without opening the entire container lid. This expertise can be extended to meeting planetary protection requirements for “restricted return” targets. There are still some areas requiring new innovations, especially to assure contingency robustness in every phase of a return mission. These must be achieved by meeting both PPP and scientific requirements during initial design and WBS of the integrated sampling system including the Earth return capsule. It is also important to note that international communities in planetary protection, sample return

  7. B-Graph Sampling to Estimate the Size of a Hidden Population

    Directory of Open Access Journals (Sweden)

    Spreen Marinus

    2015-12-01

    Full Text Available Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is respondent-driven sampling in which no sampling frame is used. However, in some studies multiple but incomplete sampling frames are available. In this article, we introduce the B-graph design that can be used in such situations. In this design, all available incomplete sampling frames are joined and turned into one sampling frame, from which a random sample is drawn and selected respondents are asked to mention their contacts. By considering the population as a bipartite graph of a two-mode network (those from the sampling frame and those who are not on the frame), the number of respondents who are directly linked to the sampling frame members can be estimated using Chao’s and Zelterman’s estimators for sparse data. The B-graph sampling design is illustrated using the data of a social network study from Utrecht, the Netherlands.
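
    The sparse-data estimators mentioned here have simple closed forms; a small sketch of the bias-corrected Chao estimator under the usual count-of-mentions setup (illustrative counts only, not the Utrecht data):

        def chao_estimate(counts):
            """Bias-corrected Chao lower-bound estimate of population size.

            counts: number of times each observed unit was mentioned or captured.
            """
            n_observed = len(counts)
            f1 = sum(1 for c in counts if c == 1)   # units seen exactly once
            f2 = sum(1 for c in counts if c == 2)   # units seen exactly twice
            return n_observed + f1 * (f1 - 1) / (2 * (f2 + 1))

        # e.g. 90 units mentioned once, 40 twice, 20 more often:
        counts = [1] * 90 + [2] * 40 + [3] * 20
        print(round(chao_estimate(counts)))         # 150 observed, roughly 248 estimated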

  8. Planosol soil sample size for computerized tomography measurement of physical parameters

    Directory of Open Access Journals (Sweden)

    Pedrotti Alceu

    2003-01-01

    Full Text Available Computerized tomography (CT) is an important tool in Soil Science for noninvasive measurement of density and water content of soil samples. This work aims to describe the aspects of sample size adequacy for Planosol (Albaqualf) and to evaluate procedures for statistical analysis, using a CT scanner with a 241Am source. Density errors attributed to the equipment are 0.051 and 0.046 Mg m-3 for horizons A and B, respectively. The theoretical value for sample thickness for the Planosol, using this equipment, is 4.0 cm for the horizons A and B. The ideal thickness of samples is approximately 6.0 cm, being smaller for samples of the horizon B in relation to A. Alternatives for the improvement of the efficiency analysis and the reliability of the results obtained by CT are also discussed, and indicate good precision and adaptability of the application of this technology in Planosol (Albaqualf) studies.

  9. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

  10. Sample size calculation for microarray experiments with blocked one-way design

    Directory of Open Access Journals (Sweden)

    Jung Sin-Ho

    2009-05-01

    Full Text Available Abstract Background One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess the treatment effects in microarray experiments. Results In this paper, we consider discovery of the genes that are differentially expressed among K (> 2) treatments when each set of K arrays consists of a block. In this case, the array data among K treatments tend to be correlated because of the block effect. We propose to use the blocked one-way ANOVA F-statistic to test if each gene is differentially expressed among K treatments. The marginal p-values are calculated using a permutation method accounting for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design. With FDR level and effect sizes of genes specified, our formula provides a sample size for a given number of true discoveries. Conclusion The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.
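
    A toy sketch of the blocked one-way F-statistic with within-block label permutation for a single gene (hypothetical expression values; the paper additionally controls the FDR across genes and inverts the calculation to obtain a sample size):

        import numpy as np

        def block_f(y):
            """F-statistic for treatment in a randomized complete block design.

            y: array of shape (B, K) -- B blocks (sets of arrays), K treatments.
            """
            B, K = y.shape
            grand = y.mean()
            ss_treat = B * ((y.mean(axis=0) - grand) ** 2).sum()
            ss_block = K * ((y.mean(axis=1) - grand) ** 2).sum()
            ss_error = ((y - grand) ** 2).sum() - ss_treat - ss_block
            return (ss_treat / (K - 1)) / (ss_error / ((B - 1) * (K - 1)))

        def permutation_p(y, n_perm=2000, seed=0):
            """Permutation p-value, shuffling treatment labels within each block."""
            rng = np.random.default_rng(seed)
            f_obs = block_f(y)
            f_perm = np.array([block_f(np.apply_along_axis(rng.permutation, 1, y))
                               for _ in range(n_perm)])
            return (f_perm >= f_obs).mean()

        # hypothetical values for one gene, 4 blocks x 3 treatments
        y = np.array([[5.1, 5.9, 6.4], [4.8, 5.5, 6.1], [5.0, 5.7, 6.3], [5.2, 6.0, 6.6]])
        print(block_f(y), permutation_p(y))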

  11. Origin of discrepancies between crater size-frequency distributions of coeval lunar geologic units via target property contrasts

    Science.gov (United States)

    Van der Bogert, Carolyn H.; Hiesinger, Harald; Dundas, Colin M.; Kruger, T.; McEwen, Alfred S.; Zanetti, Michael; Robinson, Mark S.

    2017-01-01

    Recent work on dating Copernican-aged craters, using Lunar Reconnaissance Orbiter (LRO) Camera data, re-encountered a curious discrepancy in crater size-frequency distribution (CSFD) measurements that was observed, but not understood, during the Apollo era. For example, at Tycho, Copernicus, and Aristarchus craters, CSFDs of impact melt deposits give significantly younger relative and absolute model ages (AMAs) than impact ejecta blankets, although these two units formed during one impact event, and would ideally yield coeval ages at the resolution of the CSFD technique. We investigated the effects of contrasting target properties on CSFDs and their resultant relative and absolute model ages for coeval lunar impact melt and ejecta units. We counted craters with diameters through the transition from strength- to gravity-scaling on two large impact melt deposits at Tycho and King craters, and we used pi-group scaling calculations to model the effects of differing target properties on final crater diameters for five different theoretical lunar targets. The new CSFD for the large King Crater melt pond bridges the gap between the discrepant CSFDs within a single geologic unit. Thus, the observed trends in the impact melt CSFDs support the occurrence of target property effects, rather than self-secondary and/or field secondary contamination. The CSFDs generated from the pi-group scaling calculations show that targets with higher density and effective strength yield smaller crater diameters than weaker targets, such that the relative ages of the former are lower relative to the latter. Consequently, coeval impact melt and ejecta units will have discrepant apparent ages. Target property differences also affect the resulting slope of the CSFD, with stronger targets exhibiting shallower slopes, so that the final crater diameters may differ more greatly at smaller diameters. Besides their application to age dating, the CSFDs may provide additional information about the

  12. Origin of discrepancies between crater size-frequency distributions of coeval lunar geologic units via target property contrasts

    Science.gov (United States)

    van der Bogert, C. H.; Hiesinger, H.; Dundas, C. M.; Krüger, T.; McEwen, A. S.; Zanetti, M.; Robinson, M. S.

    2017-12-01

    Recent work on dating Copernican-aged craters, using Lunar Reconnaissance Orbiter (LRO) Camera data, re-encountered a curious discrepancy in crater size-frequency distribution (CSFD) measurements that was observed, but not understood, during the Apollo era. For example, at Tycho, Copernicus, and Aristarchus craters, CSFDs of impact melt deposits give significantly younger relative and absolute model ages (AMAs) than impact ejecta blankets, although these two units formed during one impact event, and would ideally yield coeval ages at the resolution of the CSFD technique. We investigated the effects of contrasting target properties on CSFDs and their resultant relative and absolute model ages for coeval lunar impact melt and ejecta units. We counted craters with diameters through the transition from strength- to gravity-scaling on two large impact melt deposits at Tycho and King craters, and we used pi-group scaling calculations to model the effects of differing target properties on final crater diameters for five different theoretical lunar targets. The new CSFD for the large King Crater melt pond bridges the gap between the discrepant CSFDs within a single geologic unit. Thus, the observed trends in the impact melt CSFDs support the occurrence of target property effects, rather than self-secondary and/or field secondary contamination. The CSFDs generated from the pi-group scaling calculations show that targets with higher density and effective strength yield smaller crater diameters than weaker targets, such that the relative ages of the former are lower relative to the latter. Consequently, coeval impact melt and ejecta units will have discrepant apparent ages. Target property differences also affect the resulting slope of the CSFD, with stronger targets exhibiting shallower slopes, so that the final crater diameters may differ more greatly at smaller diameters. Besides their application to age dating, the CSFDs may provide additional information about the

  13. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    Science.gov (United States)

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  14. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    Science.gov (United States)

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…

  15. The role of the upper sample size limit in two-stage bioequivalence designs.

    Science.gov (United States)

    Karalis, Vangelis

    2013-11-01

    Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD design where sample re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, different starting number of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the % probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD which are more pronounced for highly variable drugs. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Sample size allocation for food item radiation monitoring and safety inspection.

    Science.gov (United States)

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.

  17. The effects of focused transducer geometry and sample size on the measurement of ultrasonic transmission properties

    Science.gov (United States)

    Atkins, T. J.; Humphrey, V. F.; Duck, F. A.; Tooley, M. A.

    2011-02-01

    The response of two coaxially aligned weakly focused ultrasonic transducers, typical of those employed for measuring the attenuation of small samples using the immersion method, has been investigated. The effects of the sample size on transmission measurements have been analyzed by integrating the sound pressure distribution functions of the radiator and receiver over different limits to determine the size of the region that contributes to the system response. The results enable the errors introduced into measurements of attenuation to be estimated as a function of sample size. A theoretical expression has been used to examine how the transducer separation affects the receiver output. The calculations are compared with an experimental study of the axial response of three unpaired transducers in water. The separation of each transducer pair giving the maximum response was determined, and compared with the field characteristics of the individual transducers. The optimum transducer separation, for accurate estimation of sample properties, was found to fall between the sum of the focal distances and the sum of the geometric focal lengths as this reduced diffraction errors.

  18. A simple method for estimating genetic diversity in large populations from finite sample sizes

    Directory of Open Access Journals (Sweden)

    Rajora Om P

    2009-12-01

    Full Text Available Abstract Background Sample size is one of the critical factors affecting the accuracy of the estimation of population genetic diversity parameters. Small sample sizes often lead to significant errors in determining the allelic richness, which is one of the most important and commonly used estimators of genetic diversity in populations. Correct estimation of allelic richness in natural populations is challenging since they often do not conform to model assumptions. Here, we introduce a simple and robust approach to estimate the genetic diversity in large natural populations based on the empirical data for finite sample sizes. Results We developed a non-linear regression model to infer genetic diversity estimates in large natural populations from finite sample sizes. The allelic richness values predicted by our model were in good agreement with those observed in the simulated data sets and the true allelic richness observed in the source populations. The model has been validated using simulated population genetic data sets with different evolutionary scenarios implied in the simulated populations, as well as large microsatellite and allozyme experimental data sets for four conifer species with contrasting patterns of inherent genetic diversity and mating systems. Our model was a better predictor for allelic richness in natural populations than the widely-used Ewens sampling formula, coalescent approach, and rarefaction algorithm. Conclusions Our regression model was capable of accurately estimating allelic richness in natural populations regardless of the species and marker system. This regression modeling approach is free from assumptions and can be widely used for population genetic and conservation applications.

  19. Dynamic modulation of illusory and physical target size on separate and coordinated eye and hand movements.

    Science.gov (United States)

    Gamble, Christine M; Song, Joo-Hyun

    2017-03-01

    In everyday behavior, two of the most common visually guided actions-eye and hand movements-can be performed independently, but are often synergistically coupled. In this study, we examine whether the same visual representation is used for different stages of saccades and pointing, namely movement preparation and execution, and whether this usage is consistent between independent and naturalistic coordinated eye and hand movements. To address these questions, we used the Ponzo illusion to dissociate the perceived and physical sizes of visual targets and measured the effects on movement preparation and execution for independent and coordinated saccades and pointing. During independent movements, we demonstrated that both physically and perceptually larger targets produced faster preparation for both effectors. Furthermore, participants who showed a greater influence of the illusion on saccade preparation also showed a greater influence on pointing preparation, suggesting that a shared mechanism involved in preparation across effectors is influenced by illusions. However, only physical but not perceptual target sizes influenced saccade and pointing execution. When pointing was coordinated with saccades, we observed different dynamics: pointing no longer showed modulation from illusory size, while saccades showed illusion modulation for both preparation and execution. Interestingly, in independent and coordinated movements, the illusion modulated saccade preparation more than pointing preparation, with this effect more pronounced during coordination. These results suggest a shared mechanism, dominated by the eyes, may underlie visually guided action preparation across effectors. Furthermore, the influence of illusions on action may operate within such a mechanism, leading to dynamic interactions between action modalities based on task demands.

  20. Limitations of mRNA amplification from small-size cell samples

    Directory of Open Access Journals (Sweden)

    Myklebost Ola

    2005-10-01

    Full Text Available Abstract Background Global mRNA amplification has become a widely used approach to obtain gene expression profiles from limited material. An important concern is the reliable reflection of the starting material in the results obtained. This is especially important with extremely low quantities of input RNA where stochastic effects due to template dilution may be present. This aspect remains under-documented in the literature, as quantitative measures of data reliability are most often lacking. To address this issue, we examined the sensitivity levels of each transcript in 3 different cell sample sizes. ANOVA analysis was used to estimate the overall effects of reduced input RNA in our experimental design. In order to estimate the validity of decreasing sample sizes, we examined the sensitivity levels of each transcript by applying a novel model-based method, TransCount. Results From expression data, TransCount provided estimates of absolute transcript concentrations in each examined sample. The results from TransCount were used to calculate the Pearson correlation coefficient between transcript concentrations for different sample sizes. The correlations were clearly transcript copy number dependent. A critical level was observed where stochastic fluctuations became significant. The analysis allowed us to pinpoint the gene specific number of transcript templates that defined the limit of reliability with respect to number of cells from that particular source. In the sample amplifying from 1000 cells, transcripts expressed with at least 121 transcripts/cell were statistically reliable and for 250 cells, the limit was 1806 transcripts/cell. Above these thresholds, correlation between our data sets was at acceptable values for reliable interpretation. Conclusion These results imply that the reliability of any amplification experiment must be validated empirically to justify that any gene exists in sufficient quantity in the input material. This

  1. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    Fifteen species richness estimators (three asymptotic based on species accumulation curves, 11 nonparametric, and one based in the species-area relationship) were compared by examining their performance in estimating the total species richness of epigean arthropods in the Azorean Laurisilva forests… different sampling units on species richness estimations. 2.  Estimated species richness scores depended both on the estimator considered and on the grain size used to aggregate data. However, several estimators (ACE, Chao1, Jackknife1 and 2 and Bootstrap) were precise in spite of grain variations. Weibull… scores in a number of estimators (the above-mentioned plus ICE, Chao2, Michaelis-Menten, Negative Exponential and Clench). The estimations from those four sample sizes were also highly correlated. 4.  Contrary to other studies, we conclude that most species richness estimators may be useful…

  2. Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test

    Science.gov (United States)

    Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke

    Polymer insulators have been used worldwide because of several properties superior to those of porcelain insulators: light weight, high mechanical strength, good hydrophobicity, etc. In this paper, the effect of sample size on the aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz AD board or a 100 MHz digital oscilloscope and separated into three components (conductive current, corona discharge current and dry band arc discharge current) using FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. As a result, when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and arc length of the dry band arc discharge increased remarkably with the increase of gap length.

  3. Estimating species and size composition of rockfishes to verify targets in acoustic surveys of untrawlable areas

    OpenAIRE

    Rooper, Christopher N.; Martin, Michael H.; Butler, John L.; Jones, Darin T.; Zimmermann, Mark

    2012-01-01

    Rockfish (Sebastes spp.) biomass is difficult to assess with standard bottom trawl or acoustic surveys because of their propensity to aggregate near the seafloor in high-relief areas that are inaccessible to sampling by trawling. We compared the ability of a remotely operated vehicle (ROV), a modified bottom trawl, and a stereo drop camera system (SDC) to identify rockfish species and estimate their size composition. The ability to discriminate species was highest for the bottom trawl...

  4. Decision rules and associated sample size planning for regional approval utilizing multiregional clinical trials.

    Science.gov (United States)

    Chen, Xiaoyuan; Lu, Nelson; Nair, Rajesh; Xu, Yunling; Kang, Cailian; Huang, Qin; Li, Ning; Chen, Hongzhuan

    2012-09-01

    Multiregional clinical trials provide the potential to make safe and effective medical products simultaneously available to patients globally. As regulatory decisions are always made in a local context, this poses huge regulatory challenges. In this article we propose two conditional decision rules that can be used for medical product approval by local regulatory agencies based on the results of a multiregional clinical trial. We also illustrate sample size planning for such trials.

  5. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    OpenAIRE

    Mark Heckmann; Lukas Burk

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...

  6. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    OpenAIRE

    Bruno Giacomini Sari; Alessandro Dal’Col Lúcio; Cinthya Souza Santana; Dionatan Ketzer Krysczun; André Luís Tischler; Lucas Drebes

    2017-01-01

    ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix b...
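
    A quick approximation of the same question via the Fisher z transformation, choosing n so that the confidence interval for an assumed correlation has a target width (a generic shortcut, not the resampling procedure used in the article):

        import numpy as np
        from scipy import stats

        def n_for_correlation_ci(r, width, alpha=0.05):
            """Approximate n so the CI for a correlation r has the requested total width."""
            z_a = stats.norm.ppf(1 - alpha / 2)
            n = 10
            while True:
                half = z_a / np.sqrt(n - 3)                       # half-width on the z scale
                lo, hi = np.tanh(np.arctanh(r) - half), np.tanh(np.arctanh(r) + half)
                if hi - lo <= width:
                    return n
                n += 1

        # e.g. a CI of total width 0.2 around r = 0.7 needs roughly a hundred observations:
        print(n_for_correlation_ci(0.7, 0.2))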

  7. Epidemiological Studies Based on Small Sample Sizes – A Statistician's Point of View

    OpenAIRE

    Ersbøll Annette; Ersbøll Bjarne

    2003-01-01

    We consider 3 basic steps in a study, which have relevance for the statistical analysis. They are: study design, data quality, and statistical analysis. While statistical analysis is often considered an important issue in the literature and the choice of statistical method receives much attention, less emphasis seems to be put on study design and necessary sample sizes. Finally, a very important step, namely assessment and validation of the quality of the data collected seems to be completel...

  8. Sample size calculations for randomised trials including both independent and paired data.

    Science.gov (United States)

    Yelland, Lisa N; Sullivan, Thomas R; Price, David J; Lee, Katherine J

    2017-04-15

    Randomised trials including a mixture of independent and paired data arise in many areas of health research, yet methods for determining the sample size for such trials are lacking. We derive design effects algebraically assuming clustering because of paired data will be taken into account in the analysis using generalised estimating equations with either an independence or exchangeable working correlation structure. Continuous and binary outcomes are considered, along with three different methods of randomisation: cluster randomisation, individual randomisation and randomisation to opposite treatment groups. The design effect is shown to depend on the intracluster correlation coefficient, proportion of observations belonging to a pair, working correlation structure, type of outcome and method of randomisation. The derived design effects are validated through simulation and example calculations are presented to illustrate their use in sample size planning. These design effects will enable appropriate sample size calculations to be performed for future randomised trials including both independent and paired data. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Sample size determinations for Welch's test in one-way heteroscedastic ANOVA.

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2014-02-01

    For one-way fixed effects ANOVA, it is well known that the conventional F test of the equality of means is not robust to unequal variances, and numerous methods have been proposed for dealing with heteroscedasticity. On the basis of extensive empirical evidence of Type I error control and power performance, Welch's procedure is frequently recommended as the major alternative to the ANOVA F test under variance heterogeneity. To enhance its practical usefulness, this paper considers an important aspect of Welch's method in determining the sample size necessary to achieve a given power. Simulation studies are conducted to compare two approximate power functions of Welch's test for their accuracy in sample size calculations over a wide variety of model configurations with heteroscedastic structures. The numerical investigations show that Levy's (1978a) approach is clearly more accurate than the formula of Luh and Guo (2011) for the range of model specifications considered here. Accordingly, computer programs are provided to implement the technique recommended by Levy for power calculation and sample size determination within the context of the one-way heteroscedastic ANOVA model. © 2013 The British Psychological Society.
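
    For the two-group special case, Welch-test power can be approximated with a noncentral t distribution and Welch-Satterthwaite degrees of freedom; a sketch with hypothetical inputs (not the Levy or Luh-Guo approximations implemented in the paper's programs):

        from scipy import stats

        def welch_power(n1, n2, mu1, mu2, sd1, sd2, alpha=0.05):
            """Approximate power of the two-sided Welch t test under unequal variances."""
            se2 = sd1**2 / n1 + sd2**2 / n2
            df = se2**2 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
            ncp = (mu1 - mu2) / se2**0.5
            tcrit = stats.t.ppf(1 - alpha / 2, df)
            return 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)

        # e.g. welch_power(25, 50, 0.0, 0.5, 1.0, 2.0) -> approximate power for these settings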

  10. Estimating the Size of a Large Network and its Communities from a Random Sample

    CERN Document Server

    Chen, Lin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V;E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...

  11. A Bayesian adaptive blinded sample size adjustment method for risk differences.

    Science.gov (United States)

    Hartley, Andrew Montgomery

    2015-01-01

    Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Sample Size and Probability Threshold Considerations with the Tailored Data Method.

    Science.gov (United States)

    Wyse, Adam E

    This article discusses sample size and probability threshold considerations in the use of the tailored data method with the Rasch model. In the tailored data method, one performs an initial Rasch analysis and then reanalyzes data after setting item responses to missing that are below a chosen probability threshold. A simple analytical formula is provided that can be used to check whether or not the application of the tailored data method with a chosen probability threshold will create situations in which the number of remaining item responses for the Rasch calibration will or will not meet minimum sample size requirements. The formula is illustrated using a real data example from a medical imaging licensure exam with several different probability thresholds. It is shown that as the probability threshold was increased more item responses were set to missing and the parameter standard errors and item difficulty estimates also tended to increase. It is suggested that some consideration should be given to the chosen probability threshold and how this interacts with potential examinee sample sizes and the accuracy of parameter estimates when calibrating data with the tailored data method.

  13. SAMPLE SIZE DETERMINATION IN NON-RADOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA

    Directory of Open Access Journals (Sweden)

    S FAGHIHZADEH

    2003-06-01

    Full Text Available Introduction: In survival analysis, determination of a sample size sufficient to achieve suitable statistical power is important. In both parametric and non-parametric methods of classical statistics, random selection of samples is a basic condition. In practice, in most clinical trials and health surveys, random allocation is impossible. Fixed-effect multiple linear regression analysis covers this need, and this feature can be extended to survival regression analysis. This paper presents sample size determination for non-randomized survival analysis with censored and non-censored data. Methods: In non-randomized survival studies, linear regression with a fixed-effect variable can be used; such a regression is the conditional expectation of the dependent variable given the independent variable. A likelihood function with an exponential hazard is constructed by considering a binary variable for the allocation of each subject to one of the two comparison groups. By expressing the variance of the coefficient of the fixed-effect independent variable in terms of the determination coefficient, sample size determination formulas are obtained for both censored and non-censored data. Estimation of the sample size is therefore not based on the relation with a single independent variable, but can attain the required power for a test adjusted for the effects of the other explanatory covariates. Since the asymptotic distribution of the maximum likelihood estimator of the parameter is normal, we obtained a formula for the variance of the regression coefficient estimator; then, by expressing the variance of the regression coefficient of the fixed-effect variable through the determination coefficient, we derived formulas for the determination of sample size with both censored and non-censored data. Results: In non-randomized survival analysis, to compare the hazard rates of two groups without censored data, we obtained estimates of the determination coefficient, the risk ratio, the proportion of membership in each group, and their variances from
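
    A commonly used expression in this spirit (a Schoenfeld-type events formula with a variance-inflation adjustment often attributed to Hsieh and Lavori, shown as a hedged illustration rather than the authors' exact derivation) inflates the required number of events by 1/(1 - R²), where R² is the determination coefficient of the group indicator regressed on the other covariates:

        import math
        from scipy import stats

        def required_events(log_hr, p_group=0.5, r_squared=0.0, alpha=0.05, power=0.8):
            """Events needed to detect a log hazard ratio for a binary covariate,
            adjusted for its correlation with the other model covariates."""
            z_a = stats.norm.ppf(1 - alpha / 2)
            z_b = stats.norm.ppf(power)
            return (z_a + z_b) ** 2 / (p_group * (1 - p_group) * (1 - r_squared) * log_hr ** 2)

        # e.g. hazard ratio 1.5, balanced groups, R² = 0.2 with the other covariates:
        print(round(required_events(math.log(1.5), r_squared=0.2)))   # roughly 239 events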

  14. Infrared small target tracking based on sample constrained particle filtering and sparse representation

    Science.gov (United States)

    Zhang, Xiaomin; Ren, Kan; Wan, Minjie; Gu, Guohua; Chen, Qian

    2017-12-01

    Infrared search and track technology for small targets plays an important role in infrared warning and guidance. In view of the tracking randomness and uncertainty caused by background clutter and noise interference, a robust tracking method for infrared small targets based on sample constrained particle filtering and sparse representation is proposed in this paper. Firstly, to distinguish the normal region and interference region in target sub-blocks, we introduce a binary support vector and combine it with the target sparse representation model, after which a particle filtering observation model based on sparse reconstruction error differences between sample targets is developed. Secondly, we utilize saliency extraction to obtain the high-frequency area in the infrared image and use it as a priori knowledge in the transition probability model to limit the particle filtering sampling process. Lastly, the tracking result is obtained via target state estimation and Bayesian posterior probability calculation. Theoretical analyses and experimental results show that our method can enhance the state estimation ability of the stochastic particles, improve the sparse representation adaptability for infrared small targets, and optimize the tracking accuracy for infrared small moving targets.

  15. Sample size determination for a t test given a t value from a previous study: A FORTRAN 77 program.

    Science.gov (United States)

    Gillett, R

    2001-11-01

    When uncertain about the magnitude of an effect, researchers commonly substitute in the standard sample-size-determination formula an estimate of effect size derived from a previous experiment. A problem with this approach is that the traditional sample-size-determination formula was not designed to deal with the uncertainty inherent in an effect-size estimate. Consequently, estimate-substitution in the traditional sample-size-determination formula can lead to a substantial loss of power. A method of sample-size determination designed to handle uncertainty in effect-size estimates is described. The procedure uses the t value and sample size from a previous study, which might be a pilot study or a related study in the same area, to establish a distribution of probable effect sizes. The sample size to be employed in the new study is that which supplies an expected power of the desired amount over the distribution of probable effect sizes. A FORTRAN 77 program is presented that permits swift calculation of sample size for a variety of t tests, including independent t tests, related t tests, t tests of correlation coefficients, and t tests of multiple regression b coefficients.
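
    The expected-power idea described above can be sketched in a few lines: average the power of a two-sample t test over a distribution of probable effect sizes and take the smallest n that reaches the target. The pilot t value, group size, and the spread of the effect-size distribution below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t test for standardized effect size d."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2.0)               # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# Hypothetical pilot study: t = 2.3 with 15 subjects per group.
d_pilot = 2.3 * np.sqrt(2 / 15)                        # effect size implied by the pilot t value
d_grid = np.random.default_rng(1).normal(d_pilot, 0.25, size=5000)  # assumed uncertainty

# Smallest n whose power, averaged over the probable effect sizes, reaches 80%.
for n in range(5, 500):
    if power_two_sample_t(d_grid, n).mean() >= 0.80:
        print(f"n per group: {n}")
        break
```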

  16. Sample size and power determination when limited preliminary information is available

    Directory of Open Access Journals (Sweden)

    Christine E. McLaren

    2017-04-01

    Full Text Available Abstract Background We describe a novel strategy for power and sample size determination developed for studies utilizing investigational technologies with limited available preliminary data, specifically of imaging biomarkers. We evaluated diffuse optical spectroscopic imaging (DOSI), an experimental noninvasive imaging technique that may be capable of assessing changes in mammographic density. Because there is significant evidence that tamoxifen treatment is more effective at reducing breast cancer risk when accompanied by a reduction of breast density, we designed a study to assess the changes from baseline in DOSI imaging biomarkers that may reflect fluctuations in breast density in premenopausal women receiving tamoxifen. Method While preliminary data demonstrate that DOSI is sensitive to mammographic density in women about to receive neoadjuvant chemotherapy for breast cancer, there is no information on DOSI in tamoxifen treatment. Since the relationship between magnetic resonance imaging (MRI) and DOSI has been established in previous studies, we developed a statistical simulation approach utilizing information from an investigation of MRI assessment of breast density in 16 women before and after treatment with tamoxifen to estimate the changes in DOSI biomarkers due to tamoxifen. Results Three sets of 10,000 pairs of MRI breast density data with correlation coefficients of 0.5, 0.8 and 0.9 were simulated and used to generate a corresponding 5,000,000 pairs of DOSI values representing water, ctHHB, and lipid. Minimum sample sizes needed per group for specified clinically-relevant effect sizes were obtained. Conclusion The simulation techniques we describe can be applied in studies of other experimental technologies to obtain the important preliminary data to inform the power and sample size calculations.
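
    The simulation strategy can be illustrated generically: draw correlated pairs of measurements with a specified correlation, apply a paired test, and read off the smallest sample size reaching the target power. All numbers below are illustrative assumptions rather than the MRI/DOSI values used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_power(n, effect, rho, sims=2000, alpha=0.05):
    """Estimate power of a paired t test when pre/post values have correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    hits = 0
    for _ in range(sims):
        pre, post = rng.multivariate_normal([0.0, effect], cov, size=n).T
        _, p = stats.ttest_rel(post, pre)
        hits += p < alpha
    return hits / sims

# Illustrative: change of 0.4 SD between baseline and follow-up, correlation 0.8.
for n in range(5, 200, 5):
    if simulated_power(n, effect=0.4, rho=0.8) >= 0.80:
        print(f"minimum sample size per group: about {n}")
        break
```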

  17. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    Science.gov (United States)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in Southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.
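
    A minimal sketch of first-stage selection with probability proportional to size and the corresponding expansion of a total (here a Hansen-Hurwitz estimator under sampling with replacement). The unit sizes and volumes are hypothetical, and the sketch does not reproduce the three-stage design of the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-stage units (e.g., classified reforested stands) with size measures.
sizes = np.array([120.0, 300.0, 80.0, 450.0, 150.0, 200.0])   # e.g., stand areas in ha
volumes = np.array([2.1, 5.5, 1.3, 8.0, 2.6, 3.4])            # per-unit volume (1000 m3)

n = 3                                      # number of first-stage units to sample
p = sizes / sizes.sum()                    # selection probability proportional to size
sample = rng.choice(len(sizes), size=n, replace=True, p=p)     # PPS with replacement

# Hansen-Hurwitz expansion of the total volume from the sampled units.
estimate = (volumes[sample] / p[sample]).mean()
print(f"estimated total volume: {estimate:.1f} thousand m3 (true {volumes.sum():.1f})")
```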

  18. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests, whereas other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems associated with small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
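
    A rough sketch of a bootstrap test that resamples both groups from the pooled data, which is the general idea behind pooled resampling under the null hypothesis. The statistic, resampling details, and data below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def t_stat(x, y):
    """Welch-type t statistic for two independent samples."""
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))

def pooled_bootstrap_test(x, y, n_boot=10000):
    """Bootstrap the null distribution by resampling both groups from the pooled data."""
    observed = t_stat(x, y)
    pooled = np.concatenate([x, y])
    null = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        null[b] = t_stat(xb, yb)
    return np.mean(np.abs(null) >= abs(observed))    # two-sided p-value

# Small, skewed samples (illustrative data only).
x = rng.lognormal(0.0, 1.0, size=8)
y = rng.lognormal(0.5, 1.0, size=7)
print(f"p = {pooled_bootstrap_test(x, y):.3f}")
```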

  19. Distinct target size of dopamine D-1 and D-2 receptors in rat striatum.

    Science.gov (United States)

    Nielsen, M; Klimek, V; Hyttel, J

    1984-07-16

    Frozen rat striatal tissue was exposed to 10 MeV electrons from a linear accelerator. Based on the theory of target size analysis, the molecular weights of dopamine D-1 receptors (labelled by 3H-piflutixol) and dopamine D-2 receptors (labelled by 3H-spiroperidol) were 79,500 daltons and 136,700 daltons, respectively. The size of the dopamine-stimulated adenylate cyclase was 202,000 daltons. The estimated molecular sizes were deduced by reference to proteins with known molecular weights which were irradiated in parallel. The results showed that the molecular entities for 3H-piflutixol binding and 3H-spiroperidol binding were not identical. The present results do not allow conclusions as to whether D-1 and D-2 receptors are two distinct proteins in the membrane, or whether the receptors are located on the same protein. In the latter case the binding of 3H-spiroperidol needs the presence of a second molecule.

  20. Relative power and sample size analysis on gene expression profiling data

    Directory of Open Access Journals (Sweden)

    den Dunnen JT

    2009-09-01

    Full Text Available Abstract Background With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample type, preparation and processing. Objective measures that help experimental design, given own pilot data, are thus fundamental. Results Relative power and sample size analysis were performed on two distinct data sets. The first set consisted of Affymetrix array data derived from a nutrigenomics experiment in which weak, intermediate and strong PPARα agonists were administered to wild-type and PPARα-null mice. Our analysis confirms the hierarchy of PPARα-activating compounds previously reported and the general idea that larger effect sizes positively contribute to the average power of the experiment. A simulation experiment was performed that mimicked the effect sizes seen in the first data set. The relative power was predicted but the estimates were slightly conservative. The second, more challenging, data set describes a microarray platform comparison study using hippocampal δC-doublecortin-like kinase transgenic mice that were compared to wild-type mice, which was combined with results from Solexa/Illumina deep sequencing runs. As expected, the choice of technology greatly influences the performance of the experiment. Solexa/Illumina deep sequencing has the highest overall power followed by the microarray platforms Agilent and Affymetrix. Interestingly, Solexa/Illumina deep sequencing displays comparable power across all intensity ranges, in contrast with microarray platforms that have decreased power in the low intensity range due to background noise. This means that deep sequencing technology is especially more powerful in detecting differences in the low intensity range, compared to microarray platforms. Conclusion Power and sample size analysis

  1. Prediction accuracy of a sample-size estimation method for ROC studies.

    Science.gov (United States)

    Chakraborty, Dev P

    2010-05-01

    Sample-size estimation is an important consideration when planning a receiver operating characteristic (ROC) study. The aim of this work was to assess the prediction accuracy of a sample-size estimation method using the Monte Carlo simulation method. Two ROC ratings simulators characterized by low reader and high case variabilities (LH) and high reader and low case variabilities (HL) were used to generate pilot data sets in two modalities. Dorfman-Berbaum-Metz multiple-reader multiple-case (DBM-MRMC) analysis of the ratings yielded estimates of the modality-reader, modality-case, and error variances. These were input to the Hillis-Berbaum (HB) sample-size estimation method, which predicted the number of cases needed to achieve 80% power for 10 readers and an effect size of 0.06 in the pivotal study. Predictions that generalized to readers and cases (random-all), to cases only (random-cases), and to readers only (random-readers) were generated. A prediction-accuracy index defined as the probability that any single prediction yields true power in the 75%-90% range was used to assess the HB method. For random-case generalization, the HB-method prediction-accuracy was reasonable, approximately 50% for five readers and 100 cases in the pilot study. Prediction-accuracy was generally higher under LH conditions than under HL conditions. Under ideal conditions (many readers in the pilot study) the DBM-MRMC-based HB method overestimated the number of cases. The overestimates could be explained by the larger modality-reader variance estimates when reader variability was large (HL). The largest benefit of increasing the number of readers in the pilot study was realized for LH, where 15 readers were enough to yield prediction accuracy >50% under all generalization conditions, but the benefit was lesser for HL where prediction accuracy was approximately 36% for 15 readers under random-all and random-reader conditions. The HB method tends to overestimate the number of cases

  2. Relative power and sample size analysis on gene expression profiling data

    Science.gov (United States)

    van Iterson, M; 't Hoen, PAC; Pedotti, P; Hooiveld, GJEJ; den Dunnen, JT; van Ommen, GJB; Boer, JM; Menezes, RX

    2009-01-01

    Background With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample type, preparation and processing. Objective measures that help experimental design, given own pilot data, are thus fundamental. Results Relative power and sample size analysis were performed on two distinct data sets. The first set consisted of Affymetrix array data derived from a nutrigenomics experiment in which weak, intermediate and strong PPARα agonists were administered to wild-type and PPARα-null mice. Our analysis confirms the hierarchy of PPARα-activating compounds previously reported and the general idea that larger effect sizes positively contribute to the average power of the experiment. A simulation experiment was performed that mimicked the effect sizes seen in the first data set. The relative power was predicted but the estimates were slightly conservative. The second, more challenging, data set describes a microarray platform comparison study using hippocampal δC-doublecortin-like kinase transgenic mice that were compared to wild-type mice, which was combined with results from Solexa/Illumina deep sequencing runs. As expected, the choice of technology greatly influences the performance of the experiment. Solexa/Illumina deep sequencing has the highest overall power followed by the microarray platforms Agilent and Affymetrix. Interestingly, Solexa/Illumina deep sequencing displays comparable power across all intensity ranges, in contrast with microarray platforms that have decreased power in the low intensity range due to background noise. This means that deep sequencing technology is especially more powerful in detecting differences in the low intensity range, compared to microarray platforms. Conclusion Power and sample size analysis based on pilot data give

  3. Generalized sample size determination formulas for experimental research with hierarchical data.

    Science.gov (United States)

    Usami, Satoshi

    2014-06-01

    Hierarchical data sets arise when the data for lower units (e.g., individuals such as students, clients, and citizens) are nested within higher units (e.g., groups such as classes, hospitals, and regions). In data collection for experimental research, estimating the required sample size beforehand is a fundamental question for obtaining sufficient statistical power and precision of the focused parameters. The present research extends previous research from Heo and Leon (2008) and Usami (2011b), by deriving closed-form formulas for determining the required sample size to test effects in experimental research with hierarchical data, and by focusing on both multisite-randomized trials (MRTs) and cluster-randomized trials (CRTs). These formulas consider both statistical power and the width of the confidence interval of a standardized effect size, on the basis of estimates from a random-intercept model for three-level data that considers both balanced and unbalanced designs. These formulas also address some important results, such as the lower bounds of the needed units at the highest levels.

  4. Back to basics: explaining sample size in outcome trials, are statisticians doing a thorough job?

    Science.gov (United States)

    Carroll, Kevin J

    2009-01-01

    Time to event outcome trials in clinical research are typically large, expensive and high-profile affairs. Such trials are commonplace in oncology and cardiovascular therapeutic areas but are also seen in other areas such as respiratory in indications like chronic obstructive pulmonary disease. Their progress is closely monitored and results are often eagerly awaited. Once available, the top line result is often big news, at least within the therapeutic area in which it was conducted, and the data are subsequently fully scrutinized in a series of high-profile publications. In such circumstances, the statistician has a vital role to play in the design, conduct, analysis and reporting of the trial. In particular, in drug development it is incumbent on the statistician to ensure at the outset that the sizing of the trial is fully appreciated by their medical, and other non-statistical, drug development team colleagues and that the risk of delivering a statistically significant but clinically unpersuasive result is minimized. The statistician also has a key role in advising the team when, early in the life of an outcomes trial, a lower than anticipated event rate appears to be emerging. This paper highlights some of the important features relating to outcome trial sample sizing and makes a number of simple recommendations aimed at ensuring a better, common understanding of the interplay between sample size and power and the final result required to provide a statistically positive and clinically persuasive outcome. Copyright (c) 2009 John Wiley & Sons, Ltd.
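
    The interplay between events, effect size, and power that the author refers to is often summarized with Schoenfeld's approximation for the required number of events. The sketch below uses that standard formula (not taken from this paper), with illustrative inputs.

```python
from math import ceil, log
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.90, allocation=0.5):
    """Schoenfeld approximation: events needed to detect hazard ratio `hr`
    in a two-arm trial with the given two-sided alpha and power."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z**2 / (allocation * (1 - allocation) * log(hr) ** 2))

events = required_events(hr=0.80)        # 90% power to detect a 20% hazard reduction
print(events)                            # roughly 845 events
# If about 40% of randomized patients are expected to have an event during follow-up:
print(ceil(events / 0.40), "patients")
```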

  5. Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force

    Energy Technology Data Exchange (ETDEWEB)

    Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R

    2008-05-22

    We present a rapid and robust size-based separation method for high throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies including variations of the PZT driving frequencies and voltages as well as various particle sizes, compressibilities, and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2 with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.

  6. Effect of sample size on the fluid flow through a single fractured granitoid

    Directory of Open Access Journals (Sweden)

    Kunal Kumar Singh

    2016-06-01

    Full Text Available Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels and multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures along with the behavior of the fluid flow through them, at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress–permeability relationship and it has been demonstrated that permeability of the fractured rock mass decreases with an increase in sample size. However, most researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameters ranging between 38 mm and 45 cm and a diameter-to-length ratio of 1:2, using different experimental methods. Also, the confining stress, σ3, has been considered to be less than 30 MPa and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on "laboratory simulation of flow through single fractured granite" was conducted, in which consistent fluid flow experiments were performed on cylindrical samples of granitoids of two different sizes (38 mm and 54 mm in diameter), containing a "rough walled single fracture". These experiments were performed under varied confining pressure (σ3 = 5–40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff., and Q decreases with an increase in σeff. Also, the effects of sample size and fracture roughness do not persist when σeff. ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory

  7. Sample size and repeated measures required in studies of foods in the homes of African-American families.

    Science.gov (United States)

    Stevens, June; Bryant, Maria; Wang, Chin-Hua; Cai, Jianwen; Bentley, Margaret E

    2012-06-01

    Measurement of the home food environment is of interest to researchers because it affects food intake and is a feasible target for nutrition interventions. The objective of this study was to provide estimates to aid the calculation of sample size and number of repeated measures needed in studies of nutrients and foods in the home. We inventoried all foods in the homes of 80 African-American first-time mothers and determined 6 nutrient-related attributes. Sixty-three households were measured 3 times, 11 were measured twice, and 6 were measured once, producing 217 inventories collected at ~2-mo intervals. Following log transformations, number of foods, total energy, dietary fiber, and fat required only one measurement per household to achieve a correlation of 0.8 between the observed and true values. For percent energy from fat and energy density, 3 and 2 repeated measurements, respectively, were needed to achieve a correlation of 0.8. A sample size of 252 was needed to detect a difference of 25% of an SD in total energy with one measurement compared with 213 with 3 repeated measurements. Macronutrient characteristics of household foods appeared relatively stable over a 6-mo period and only 1 or 2 repeated measures of households may be sufficient for an efficient study design.

  8. Reference calculation of light propagation between parallel planes of different sizes and sampling rates.

    Science.gov (United States)

    Lobaz, Petr

    2011-01-03

    The article deals with a method of calculation of off-axis light propagation between parallel planes using discretization of the Rayleigh-Sommerfeld integral and its implementation by fast convolution. It analyses zero-padding in case of different plane sizes. In case of memory restrictions, it suggests splitting the calculation into tiles and shows that splitting leads to a faster calculation when plane sizes are a lot different. Next, it suggests how to calculate propagation in case of different sampling rates by splitting planes into interleaved tiles and shows this to be faster than zero-padding and direct calculation. Neither the speedup nor memory-saving method decreases accuracy; the aim of the proposed method is to provide reference data that can be compared to the results of faster and less precise methods.
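
    The zero-padding issue discussed above comes from the fact that an FFT product computes a circular convolution; padding to the full linear-convolution length avoids wrap-around at the edges of the propagated field. A generic 1-D sketch (not the author's tiled 2-D implementation):

```python
import numpy as np

def linear_convolution_fft(u, h):
    """Linear (aperiodic) convolution of two sampled fields via zero-padded FFTs.
    Without padding to len(u) + len(h) - 1, the FFT product would wrap around
    circularly and corrupt the edges of the result."""
    n = len(u) + len(h) - 1
    U = np.fft.fft(u, n)          # fft(..., n) zero-pads the input automatically
    H = np.fft.fft(h, n)
    return np.fft.ifft(U * H)

u = np.random.default_rng(0).standard_normal(64)    # source plane samples
h = np.exp(-np.linspace(-3, 3, 33) ** 2)             # stand-in propagation kernel
assert np.allclose(linear_convolution_fft(u, h).real, np.convolve(u, h), atol=1e-9)
```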

  9. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
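
    Two of the measures compared above, Cohen's d and the nonparametric probability-of-superiority estimator (A_w), can be computed directly. This is a generic sketch with made-up skewed data, not the authors' simulation code.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with a pooled SD (assumes equal variances)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

def a_w(x, y):
    """Nonparametric common-language effect size: P(X > Y) + 0.5 * P(X = Y)."""
    diff = x[:, None] - y[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(3)
x = rng.lognormal(0.3, 1.0, 40)      # skewed, heteroscedastic illustrative data
y = rng.lognormal(0.0, 0.5, 25)
print(f"d = {cohens_d(x, y):.2f}, A_w = {a_w(x, y):.2f}")
```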

  10. Sample-size calculations for multi-group comparison in population pharmacokinetic experiments.

    Science.gov (United States)

    Ogungbenro, Kayode; Aarons, Leon

    2010-01-01

    This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparison detecting the difference in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models that involve only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one compartment intravenous bolus dose model that involved three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and nominal power. Copyright © 2009 John Wiley & Sons, Ltd.

  11. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed at each prediction step. Meanwhile, the data window for the next one-step-ahead forecast rolls on by adding the most recently derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique owing to its improved forecasting accuracy, applicability to limited and unstable data situations, and small computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
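
    A minimal sketch of the rolling mechanism: refit a small AR model on the current window, forecast one step ahead, then append the prediction and drop the oldest value. The model order and window length are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_ar(window, p=2):
    """Least-squares fit of AR(p) coefficients (with intercept) on a data window."""
    X = np.column_stack([np.ones(len(window) - p)] +
                        [window[p - k - 1:len(window) - k - 1] for k in range(p)])
    y = window[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def rolling_one_step_forecasts(series, window_size=12, p=2, steps=5):
    window = list(series[-window_size:])
    preds = []
    for _ in range(steps):
        c = fit_ar(np.asarray(window), p)
        nxt = c[0] + sum(c[k + 1] * window[-k - 1] for k in range(p))
        preds.append(nxt)
        window.append(nxt)        # roll on: add the newest prediction ...
        window.pop(0)             # ... and drop the oldest value
    return preds

t = np.arange(40)
series = 0.05 * t + np.sin(t / 3.0)      # small, nonstationary illustrative series
print(rolling_one_step_forecasts(series))
```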

  12. Distance software: design and analysis of distance sampling surveys for estimating population size.

    Science.gov (United States)

    Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon Rb; Marques, Tiago A; Burnham, Kenneth P

    2010-02-01

    1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark-recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods

  13. Dealing with varying detection probability, unequal sample sizes and clumped distributions in count data.

    Directory of Open Access Journals (Sweden)

    D Johan Kotze

    Full Text Available Temporal variation in the detectability of a species can bias estimates of relative abundance if not handled correctly. For example, when effort varies in space and/or time it becomes necessary to take variation in detectability into account when data are analyzed. We demonstrate the importance of incorporating seasonality into the analysis of data with unequal sample sizes due to lost traps at a particular density of a species. A case study of count data was simulated using a spring-active carabid beetle. Traps were 'lost' randomly during high beetle activity in high abundance sites and during low beetle activity in low abundance sites. Five different models were fitted to datasets with different levels of loss. If sample sizes were unequal and a seasonality variable was not included in models that assumed the number of individuals was log-normally distributed, the models severely under- or overestimated the true effect size. Results did not improve when seasonality and number of trapping days were included in these models as offset terms, but only performed well when the response variable was specified as following a negative binomial distribution. Finally, if seasonal variation of a species is unknown, which is often the case, seasonality can be added as a free factor, resulting in well-performing negative binomial models. Based on these results we recommend (a) add sampling effort (number of trapping days in our example) to the models as an offset term, (b) if precise information is available on seasonal variation in detectability of a study object, add seasonality to the models as an offset term; (c) if information on seasonal variation in detectability is inadequate, add seasonality as a free factor; and (d) specify the response variable of count data as following a negative binomial or over-dispersed Poisson distribution.
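
    Recommendations (a) and (d) can be combined in a negative binomial regression with sampling effort entering as an offset. The sketch below uses statsmodels with simulated data and an assumed dispersion; it is not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_sites = 200
days = rng.integers(5, 15, n_sites)                # trapping days per site (unequal effort)
season = rng.uniform(0, 1, n_sites)                # proxy for seasonal activity
abundance = rng.integers(0, 2, n_sites)            # low (0) vs. high (1) abundance site
mu = np.exp(np.log(days) + 1.5 * season + 0.8 * abundance)
counts = rng.negative_binomial(2, 2 / (2 + mu))    # over-dispersed counts with mean mu

X = sm.add_constant(np.column_stack([season, abundance]))
model = sm.GLM(counts, X,
               family=sm.families.NegativeBinomial(alpha=0.5),
               offset=np.log(days))                # sampling effort enters as an offset
print(model.fit().summary())
```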

  14. High-dimensional, massive sample-size Cox proportional hazards regression for survival analysis.

    Science.gov (United States)

    Mittal, Sushil; Madigan, David; Burd, Randall S; Suchard, Marc A

    2014-04-01

    Survival analysis endures as an old, yet active research field with applications that spread across many domains. Continuing improvements in data acquisition techniques pose constant challenges in applying existing survival analysis methods to these emerging data sets. In this paper, we present tools for fitting regularized Cox survival analysis models on high-dimensional, massive sample-size (HDMSS) data using a variant of the cyclic coordinate descent optimization technique tailored for the sparsity that HDMSS data often present. Experiments on two real data examples demonstrate that efficient analyses of HDMSS data using these tools result in improved predictive performance and calibration.

  15. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

    Energy Technology Data Exchange (ETDEWEB)

    Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)

    2010-01-15

    In this contribution superconducting specimens of YBa2Cu3O7-δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.

  16. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark

    National Research Council Canada - National Science Library

    Adrian Sayers; Michael J Crowther; Andrew Judge; Michael R Whitehouse; Ashley W Blom

    2017-01-01

    ... to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking...

  17. Scrapie prion liposomes and rods exhibit target sizes of 55,000 Da

    Energy Technology Data Exchange (ETDEWEB)

    Bellinger-Kawahara, C.G.; Kempner, E.; Groth, D.; Gabizon, R.; Prusiner, S.B.

    1988-06-01

    Scrapie is a degenerative neurologic disease in sheep and goats which can be experimentally transmitted to laboratory rodents. Considerable evidence suggests that the scrapie agent is composed largely, if not entirely, of an abnormal isoform of the prion protein (PrPSc). Inactivation of scrapie prions by ionizing radiation exhibited single-hit kinetics and gave a target size of 55,000 +/- 9000 mol wt. The inactivation profile was independent of the form of the prion. Scrapie agent infectivity in brain homogenates, microsomal fractions, detergent-extracted microsomes, purified amyloid rods, and liposomes exhibited the same inactivation profile. Our data are consistent with the hypothesis that the infectious particle causing scrapie contains approximately 2 PrPSc molecules.

  18. Power and sample size calculation for paired recurrent events data based on robust nonparametric tests.

    Science.gov (United States)

    Su, Pei-Fang; Chung, Chia-Hua; Wang, Yu-Wen; Chi, Yunchan; Chang, Ying-Ju

    2017-05-20

    The purpose of this paper is to develop a formula for calculating the required sample size for paired recurrent events data. The developed formula is based on robust non-parametric tests for comparing the marginal mean function of events between paired samples. This calculation can accommodate the associations among a sequence of paired recurrent event times with a specification of correlated gamma frailty variables for a proportional intensity model. We evaluate the performance of the proposed method with comprehensive simulations including the impacts of paired correlations, homogeneous or nonhomogeneous processes, marginal hazard rates, censoring rate, accrual and follow-up times, as well as the sensitivity analysis for the assumption of the frailty distribution. The use of the formula is also demonstrated using a premature infant study from the neonatal intensive care unit of a tertiary center in southern Taiwan. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Sample size for regression analyses of theory of planned behaviour studies: case of prescribing in general practice.

    Science.gov (United States)

    Rashidian, Arash; Miles, Jeremy; Russell, Daphne; Russell, Ian

    2006-11-01

    Interest has been growing in the use of the theory of planned behaviour (TPB) in health services research. The sample sizes range from less than 50 to more than 750 in published TPB studies without sample size calculations. We estimate the sample size for a multi-stage random survey of prescribing intention and actual prescribing for asthma in British general practice. To our knowledge, this is the first systematic attempt to determine sample size for a TPB survey. We use two different approaches: reported values of regression models' goodness-of-fit (the lambda method) and zero-order correlations (the variance inflation factor or VIF method). Intra-cluster correlation coefficient (ICC) is estimated and a socioeconomic variable is used for stratification. We perform sensitivity analysis to estimate the effects of our decisions on final sample size. The VIF method is more sensitive to the requirements of a TPB study. Given a correlation of .25 between intention and behaviour, and of .4 between intention and perceived behavioural control, the proposed sample size is 148. We estimate the ICC for asthma prescribing to be around 0.07. If 10 general practitioners were sampled per cluster, the sample size would be 242. It is feasible to perform sophisticated sample size calculations for a TPB study. The VIF is the appropriate method. Our approach can be used with adjustments in other settings and for other regression models.
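
    The cluster adjustment quoted in the abstract follows the standard design effect 1 + (m - 1) * ICC; the short sketch below reproduces the reported numbers (148 respondents, 10 GPs per cluster, ICC = 0.07).

```python
from math import ceil

def cluster_adjusted_n(n_srs, cluster_size, icc):
    """Inflate a simple-random-sample size by the design effect 1 + (m - 1) * ICC."""
    return ceil(n_srs * (1 + (cluster_size - 1) * icc))

# Values quoted in the abstract: n = 148, 10 GPs per cluster, ICC = 0.07.
print(cluster_adjusted_n(148, 10, 0.07))   # -> 242, matching the reported sample size
```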

  20. Target Tracking of a Linear Time Invariant System under Irregular Sampling

    Directory of Open Access Journals (Sweden)

    Jin Xue-Bo

    2012-11-01

    Full Text Available Due to event-triggered sampling in a system, or with the aim of reducing data storage, many tracking applications encounter irregular sampling times. By calculating the matrix exponential using an inverse Laplace transform, this paper transforms the irregular-sampling tracking problem into the problem of tracking with time-varying parameters of a system. Using the common Kalman filter, the developed method is used to track a target in a simulated trajectory and in video tracking. The results of simulation experiments show that it can obtain good estimation performance even when the measurement sampling times are highly irregular.
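
    The central step, turning irregular sampling intervals into time-varying system matrices, can be sketched for a constant-velocity target by discretizing the continuous-time model with a matrix exponential over each interval (the paper obtains this via an inverse Laplace transform). The noise values and trajectory below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # continuous-time constant-velocity model
H = np.array([[1.0, 0.0]])                  # position-only measurements
R = np.array([[0.25]])                      # measurement noise variance (assumed)

def kalman_irregular(times, z):
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    estimates = [x.copy()]
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        F = expm(A * dt)                              # exact discretization over dt
        Q = np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]]) * 0.1         # assumed process noise
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

rng = np.random.default_rng(5)
times = np.cumsum(rng.uniform(0.2, 2.0, 30))          # irregular sampling instants
truth = 1.5 * times                                    # target moving at 1.5 units/s
z = truth + rng.normal(0, 0.5, len(times))
print(kalman_irregular(times, z)[-1])                  # final [position, velocity] estimate
```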

  1. Current practice in methodology and reporting of the sample size calculation in randomised trials of hip and knee osteoarthritis: a protocol for a systematic review.

    Science.gov (United States)

    Copsey, Bethan; Dutton, Susan; Fitzpatrick, Ray; Lamb, Sarah E; Cook, Jonathan A

    2017-10-10

    A key aspect of the design of randomised controlled trials (RCTs) is determining the sample size. It is important that the trial sample size is appropriately calculated. The required sample size will differ by clinical area, for instance, due to the prevalence of the condition and the choice of primary outcome. Additionally, it will depend upon the choice of target difference assumed in the calculation. Focussing upon the hip and knee osteoarthritis population, this study aims to systematically review how the trial size was determined for trials of osteoarthritis, on what basis, and how well these aspects are reported. Several electronic databases (Medline, Cochrane library, CINAHL, EMBASE, PsycINFO, PEDro and AMED) will be searched to identify articles on RCTs of hip and knee osteoarthritis published in 2016. Articles will be screened for eligibility and data extracted independently by two reviewers. Data will be extracted on study characteristics (design, population, intervention and control treatments), primary outcome, chosen sample size and justification, parameters used to calculate the sample size (including treatment effect in control arm, level of variability in primary outcome, loss to follow-up rates). Data will be summarised across the studies using appropriate summary statistics (e.g. n and %, median and interquartile range). The proportion of studies which report each key component of the sample size calculation will be presented. The reproducibility of the sample size calculation will be tested. The findings of this systematic review will summarise the current practice for sample size calculation in trials of hip and knee osteoarthritis. It will also provide evidence on the completeness of the reporting of the sample size calculation, reproducibility of the chosen sample size and the basis for the values used in the calculation. As this review was not eligible to be registered on PROSPERO, the summary information was uploaded to Figshare to make it

  2. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  3. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2017-09-27

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
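
    For the stratified design mentioned above, the minimum-variance (Neyman) allocation distributes the total sample in proportion to stratum size times stratum standard deviation. This generic sketch is not the IST-specific allocation derived in the paper; the strata and standard deviations are hypothetical.

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Allocate n_total across strata proportionally to N_h * S_h (Neyman allocation)."""
    weights = np.asarray(N_h) * np.asarray(S_h)
    n_h = n_total * weights / weights.sum()
    return np.round(n_h).astype(int)

# Illustrative strata: sizes and guessed standard deviations of the sensitive variable.
N_h = [5000, 3000, 2000]
S_h = [4.0, 9.0, 6.0]
print(neyman_allocation(N_h, S_h, n_total=600))   # -> [203 275 122]
```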

  4. Strategies for informed sample size reduction in adaptive controlled clinical trials

    Science.gov (United States)

    Arandjelović, Ognjen

    2017-12-01

    Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which is an outstanding research problem, is to be found in the question of how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most of the previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, respectively, using (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with a consistently superior performance exhibited by the pair-wise sample compatibilities-based method.

  5. Effects of 'target' plant species body size on neighbourhood species richness and composition in old-field vegetation.

    Directory of Open Access Journals (Sweden)

    Brandon S Schamp

    Full Text Available Competition is generally regarded as an important force in organizing the structure of vegetation, and evidence from several experimental studies of species mixtures suggests that larger mature plant size elicits a competitive advantage. However, these findings are at odds with the fact that large and small plant species generally coexist, and relatively smaller species are more common in virtually all plant communities. Here, we use replicates of ten relatively large old-field plant species to explore the competitive impact of target individual size on their surrounding neighbourhoods compared to nearby neighbourhoods of the same size that are not centred by a large target individual. While target individuals of the largest of our test species, Centaurea jacea L., had a strong impact on neighbouring species, in general, target species size was a weak predictor of the number of other resident species growing within its immediate neighbourhood, as well as the number of resident species that were reproductive. Thus, the presence of a large competitor did not restrict the ability of neighbouring species to reproduce. Lastly, target species size did not have any impact on the species size structure of neighbouring species; i.e. they did not restrict smaller, supposedly poorer competitors, from growing and reproducing close by. Taken together, these results provide no support for a size-advantage in competition restricting local species richness or the ability of small species to coexist and successfully reproduce in the immediate neighbourhood of a large species.

  6. Sample size calculations for micro-randomized trials in mHealth.

    Science.gov (United States)

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A

    2016-05-30

    The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.

  7. On tests of treatment-covariate interactions: An illustration of appropriate power and sample size calculations.

    Science.gov (United States)

    Shieh, Gwowen

    2017-01-01

    The appraisals of treatment-covariate interaction have theoretical and substantial implications in all scientific fields. Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA. A fundamental assumption of ANCOVA is that the regression slopes associating the response variable with the covariate variable are presumed constant across treatment groups. The validity of homogeneous regression slopes accordingly is the most essential concern in traditional ANCOVA and inevitably determines the practical usefulness of research findings. In view of the limited results in current literature, this article aims to present power and sample size procedures for tests of heterogeneity between two regression slopes with particular emphasis on the stochastic feature of covariate variables. Theoretical implications and numerical investigations are presented to explicate the utility and advantage for accommodating covariate properties. The exact approach has the distinct feature of accommodating the full distributional properties of normal covariates whereas the simplified approximate methods only utilize the partial information of covariate variances. According to the overall accuracy and robustness, the exact approach is recommended over the approximate methods as a reliable tool in practical applications. The suggested power and sample size calculations can be implemented with the supplemental SAS and R programs.
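
    The same power question can be checked by brute-force simulation: draw a stochastic normal covariate, generate responses with different slopes in two groups, and count how often the interaction term is significant. This is a generic sketch, not the exact or approximate procedures proposed in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)

def interaction_power(n_per_group, slope_diff, sims=2000, alpha=0.05):
    """Simulated power for detecting a difference between two regression slopes."""
    hits = 0
    for _ in range(sims):
        group = np.repeat([0, 1], n_per_group)
        x = rng.normal(0, 1, 2 * n_per_group)         # stochastic (random) covariate
        y = 0.5 * x + slope_diff * x * group + rng.normal(0, 1, 2 * n_per_group)
        X = np.column_stack([np.ones_like(x), group, x, x * group])
        fit = sm.OLS(y, X).fit()
        hits += fit.pvalues[3] < alpha                 # p-value of the interaction term
    return hits / sims

print(interaction_power(n_per_group=60, slope_diff=0.5))
```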

  8. On tests of treatment-covariate interactions: An illustration of appropriate power and sample size calculations.

    Directory of Open Access Journals (Sweden)

    Gwowen Shieh

    Full Text Available The appraisals of treatment-covariate interaction have theoretical and substantial implications in all scientific fields. Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA. A fundamental assumption of ANCOVA is that the regression slopes associating the response variable with the covariate variable are presumed constant across treatment groups. The validity of homogeneous regression slopes accordingly is the most essential concern in traditional ANCOVA and inevitably determines the practical usefulness of research findings. In view of the limited results in current literature, this article aims to present power and sample size procedures for tests of heterogeneity between two regression slopes with particular emphasis on the stochastic feature of covariate variables. Theoretical implications and numerical investigations are presented to explicate the utility and advantage for accommodating covariate properties. The exact approach has the distinct feature of accommodating the full distributional properties of normal covariates whereas the simplified approximate methods only utilize the partial information of covariate variances. According to the overall accuracy and robustness, the exact approach is recommended over the approximate methods as a reliable tool in practical applications. The suggested power and sample size calculations can be implemented with the supplemental SAS and R programs.

  9. Sample Size Considerations of Prediction-Validation Methods in High-Dimensional Data for Survival Outcomes

    Science.gov (United States)

    Pang, Herbert; Jung, Sin-Ho

    2013-01-01

    A variety of prediction methods are used to relate high-dimensional genome data with a clinical outcome using a prediction model. Once a prediction model is developed from a data set, it should be validated using a resampling method or an independent data set. Although the existing prediction methods have been intensively evaluated by many investigators, there has not been a comprehensive study investigating the performance of the validation methods, especially with a survival clinical outcome. Understanding the properties of the various validation methods can allow researchers to perform more powerful validations while controlling for type I error. In addition, sample size calculation strategy based on these validation methods is lacking. We conduct extensive simulations to examine the statistical properties of these validation strategies. In both simulations and a real data example, we have found that 10-fold cross-validation with permutation gave the best power while controlling type I error close to the nominal level. Based on this, we have also developed a sample size calculation method that will be used to design a validation study with a user-chosen combination of prediction. Microarray and genome-wide association studies data are used as illustrations. The power calculation method in this presentation can be used for the design of any biomedical studies involving high-dimensional data and survival outcomes. PMID:23471879
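
    The validation-by-permutation idea, comparing cross-validated performance against a null distribution obtained by permuting the outcome, can be sketched with scikit-learn. A binary outcome and a penalized logistic model stand in here for the survival setting studied in the paper; the data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
n, p = 120, 500                                    # many more features than samples
X = rng.standard_normal((n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n) > 0).astype(int)  # 5 informative features

model = LogisticRegression(penalty="l2", C=0.1, max_iter=5000)
score, perm_scores, p_value = permutation_test_score(
    model, X, y, cv=10, n_permutations=100, scoring="roc_auc", random_state=0)

print(f"cross-validated AUC = {score:.2f}, permutation p-value = {p_value:.3f}")
```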

  10. A comparison of different estimation methods for simulation-based sample size determination in longitudinal studies

    Science.gov (United States)

    Bahçecitapar, Melike Kaya

    2017-07-01

    Determining the sample size necessary for correct results is a crucial step in the design of longitudinal studies. Simulation-based statistical power calculation is a flexible approach for determining the number of subjects and repeated measures in longitudinal studies, especially for complex designs. Several papers have provided sample size/statistical power calculations for longitudinal studies incorporating data analysis by linear mixed effects models (LMMs). In this study, different estimation methods (based on maximum likelihood (ML) and restricted ML) with different iterative algorithms (quasi-Newton and ridge-stabilized Newton-Raphson) are compared in fitting LMMs to generated longitudinal data for simulation-based power calculation. This study examines the statistical power of the F-test statistic for the parameter representing the difference in responses over time between two treatment groups in an LMM with a longitudinal covariate. The most common procedures in SAS, such as PROC GLIMMIX using the quasi-Newton algorithm and PROC MIXED using the ridge-stabilized algorithm, are used for analyzing the generated longitudinal data in simulation. Both procedures yield similar results. Moreover, the magnitude of the parameter of interest in the simulation model substantially affects the statistical power calculations in both procedures.
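
    The same simulation-based calculation can be sketched outside SAS: generate two-group longitudinal data from a random-intercept model, fit a linear mixed model (REML) with statsmodels, and estimate the power of the group-by-time test. All parameter values below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(123)

def lmm_power(n_per_group=30, n_visits=4, slope_diff=0.3, sims=200, alpha=0.05):
    hits = 0
    for _ in range(sims):
        rows = []
        for g in (0, 1):
            for s in range(n_per_group):
                b = rng.normal(0, 1)                         # random intercept per subject
                for t in range(n_visits):
                    y = 0.2 * t + slope_diff * t * g + b + rng.normal(0, 1)
                    rows.append({"subject": f"{g}-{s}", "group": g, "time": t, "y": y})
        df = pd.DataFrame(rows)
        fit = smf.mixedlm("y ~ time * group", df, groups=df["subject"]).fit(reml=True)
        hits += fit.pvalues["time:group"] < alpha            # group-by-time interaction
    return hits / sims

print(lmm_power())
```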

  11. On-target sample preparation of 4-sulfophenyl isothiocyanate-derivatized peptides using AnchorChip Targets

    DEFF Research Database (Denmark)

    Zhang, Xumin; Rogowska-Wrzesinska, Adelina; Roepstorff, Peter

    2008-01-01

    De novo sequencing of tryptic peptides by post source decay (PSD) or collision induced dissociation (CID) analysis using MALDI TOF-TOF instruments is facilitated by the easy interpretation afforded by the introduction of N-terminal sulfonated derivatives. Recently, a stable and cheap reagent, 4-sulfophenyl isothiocyanate (SPITC), has been successfully used for N-terminal derivatization. Previously described methods have always used desalting and concentration by reverse-phase chromatography prior to mass spectrometric analysis. Here we present an on-target sample preparation method based on AnchorChip targets...

  12. Relationship between the size of the samples and the interpretation of the mercury intrusion results of an artificial sandstone

    NARCIS (Netherlands)

    Dong, H.; Zhang, H.; Zuo, Y.; Gao, P.; Ye, G.

    2018-01-01

    Mercury intrusion porosimetry (MIP) measurements are widely used to determine pore throat size distribution (PSD) curves of porous materials. The pore throat size of porous materials has been used to estimate their compressive strength and air permeability. However, the effect of sample size on

  13. Sampling surface particle size distributions and stability analysis of deep channel in the Pearl River Estuary

    Science.gov (United States)

    Feng, Hao-chuan; Zhang, Wei; Zhu, Yu-liang; Lei, Zhi-yi; Ji, Xiao-mei

    2017-06-01

    Particle size distributions (PSDs) of bottom sediments in a coastal zone are generally multimodal due to the complexity of the dynamic environment. In this paper, bottom sediments along the deep channel of the Pearl River Estuary (PRE) are used to understand the multimodal PSDs' characteristics and the corresponding depositional environment. The results of curve-fitting analysis indicate that the near-bottom sediments in the deep channel generally have a bimodal distribution with a fine component and a relatively coarse component. The particle size distribution of bimodal sediment samples can be expressed as the sum of two lognormal functions and the parameters for each component can be determined. At each station of the PRE, the fine component makes up less volume of the sediments and is relatively poorly sorted. The relatively coarse component, which is the major component of the sediments, is even more poorly sorted. The interrelations between the dynamics and particle size of the bottom sediment in the deep channel of the PRE have also been investigated by the field measurement and simulated data. The critical shear velocity and the shear velocity are calculated to study the stability of the deep channel. The results indicate that the critical shear velocity has a similar distribution over large part of the deep channel due to the similar particle size distribution of sediments. Based on a comparison between the critical shear velocities derived from sedimentary parameters and the shear velocities obtained by tidal currents, it is likely that the depositional area is mainly distributed in the northern part of the channel, while the southern part of the deep channel has to face higher erosion risk.
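
    The decomposition of a bimodal grain size distribution into two lognormal components can be sketched with a standard least-squares fit; the synthetic distribution, component parameters, and starting values below are assumptions, not values from the study.

```python
# Fit a bimodal particle size distribution as the sum of two lognormal components.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(d, mu, sigma):
    return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))

def bimodal(d, w, mu1, s1, mu2, s2):
    # w = volume fraction of the fine component, (1 - w) of the coarse component
    return w * lognormal_pdf(d, mu1, s1) + (1 - w) * lognormal_pdf(d, mu2, s2)

# synthetic "measured" frequency curve over grain diameters in micrometres
d = np.logspace(0, 3, 120)
true = bimodal(d, 0.3, np.log(8.0), 0.5, np.log(120.0), 0.8)
obs = true + np.random.default_rng(1).normal(0, 1e-4, d.size)

p0 = [0.5, np.log(5.0), 0.6, np.log(100.0), 0.6]
bounds = ([0, -np.inf, 1e-3, -np.inf, 1e-3], [1, np.inf, 5, np.inf, 5])
params, _ = curve_fit(bimodal, d, obs, p0=p0, bounds=bounds)
print("fine fraction = %.2f, component medians ≈ %.1f µm and %.1f µm"
      % (params[0], np.exp(params[1]), np.exp(params[3])))
```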

  14. Platelet function investigation by flow cytometry: Sample volume, needle size, and reference intervals.

    Science.gov (United States)

    Pedersen, Oliver Heidmann; Nissen, Peter H; Hvas, Anne-Mette

    2017-09-29

    Flow cytometry is an increasingly used method for platelet function analysis because it has some important advantages compared with other platelet function tests. Flow cytometric platelet function analyses only require a small sample volume (3.5 mL); however, to expand the field of applications, e.g., for platelet function analysis in children, even smaller volumes are needed. Platelets are easily activated, and the size of the needle for blood sampling might be of importance for the pre-activation of the platelets. Moreover, to use flow cytometry for investigation of platelet function in clinical practice, a reference interval is warranted. The aims of this work were 1) to determine if small volumes of whole blood can be used without influencing the results, 2) to examine the pre-activation of platelets with respect to needle size, and 3) to establish reference intervals for flow cytometric platelet function assays. To examine the influence of sample volume, blood was collected from 20 healthy individuals in 1.0 mL, 1.8 mL, and 3.5 mL tubes. To examine the influence of the needle size on pre-activation, blood was drawn from another 13 healthy individuals with both a 19- and 21-gauge needle. For the reference interval study, 78 healthy adults were included. The flow cytometric analyses were performed on a NAVIOS flow cytometer (Beckman Coulter, Miami, Florida) investigating the following activation-dependent markers on the platelet surface; bound-fibrinogen, CD63, and P-selectin (CD62p) after activation with arachidonic acid, ristocetin, adenosine diphosphate, thrombin-receptor-activating-peptide, and collagen. The study showed that a blood volume as low as 1.0 mL can be used for platelet function analysis by flow cytometry and that both a 19- and 21-gauge needle can be used for blood sampling. In addition, reference intervals for platelet function analyses by flow cytometry were established.

  15. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  16. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  17. The Procalcitonin And Survival Study (PASS) – A randomised multi-center investigator-initiated trial to investigate whether daily measurements of the biomarker Procalcitonin and pro-active diagnostic and therapeutic responses to abnormal Procalcitonin levels can improve survival in intensive care unit patients. Calculated sample size (target population): 1000 patients

    Directory of Open Access Journals (Sweden)

    Fjeldborg Paul

    2008-07-01

    Full Text Available Abstract Background Sepsis and complications of sepsis are major causes of mortality in critically ill patients. Rapid treatment of sepsis is of crucial importance for patient survival. The infectious status of the critically ill patient is often difficult to assess because symptoms cannot be expressed and signs may present atypically. The established biological markers of inflammation (leucocytes, C-reactive protein) may often be influenced by parameters other than infection, and may be released unacceptably slowly after progression of an infection. At the same time, lack of relevant antimicrobial therapy early in the course of an infection may be fatal for the patient. Specific and rapid markers of bacterial infection have therefore been sought for use in these patients. Methods Multi-centre randomized controlled interventional trial. Powered for superiority and non-inferiority on all measured end points. Complies with the "Good Clinical Practice" (ICH-GCP) guideline (CPMP/ICH/135/95) and Directive 2001/20/EC. Inclusion: 1) age ≥ 18 years, 2) admitted to one of the participating intensive care units, 3) signed written informed consent. Exclusion: 1) known hyperbilirubinaemia or hypertriglyceridaemia, 2) safety likely to be compromised by blood sampling, 3) pregnant or breast feeding. Computerized randomisation: two arms (1:1, n = 500 per arm). Arm 1: standard of care. Arm 2: standard of care plus Procalcitonin-guided diagnostics and treatment of infection. Primary Trial Objective: To address whether daily Procalcitonin measurements and an immediate diagnostic and therapeutic response to day-to-day changes in Procalcitonin can reduce the mortality of critically ill patients. Discussion For the first time, a mortality-endpoint, large-scale randomized controlled trial with a biomarker-guided strategy compared to the best standard of care is being conducted in an intensive care setting. The results will, with high statistical power, answer the question: can the survival

  18. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  19. Size-fractionated measurement of coarse black carbon particles in deposition samples

    Science.gov (United States)

    Schultz, E.

    In a 1-year field study, particle deposition flux was measured by transparent collection plates. Particle concentration was simultaneously measured with a cascade impactor. Microscopic evaluation of deposition samples provided the discrimination of translucent (mineral or biological) and black carbon particles, i.e. soot agglomerates, fly-ash cenospheres and rubber fragments in the size range from 3 to 50 μm. The deposition samples were collected in two different sampling devices. A wind- and rain-shielded measurement was achieved in the Sigma-2 device. Dry deposition data from this device were used to calculate mass concentrations of the translucent and the black particle fraction separately, approximating particle deposition velocity by Stokes' settling velocity. In mass calculations an error up to 20% has to be considered due to assumed spherical shape and unit density for all particles. Within the limitations of these assumptions, deposition velocities of the distinguished coarse particles were calculated. The results for total particulate matter in this range are in good agreement with those from impactor measurement. The coarse black carbon fraction shows a reduced deposition velocity in comparison with translucent particles. The deviation depends on precipitation amount. Further measurements and structural investigations of black carbon particles are in preparation to verify these results.
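
    The conversion from dry deposition to airborne mass concentration via Stokes' settling velocity can be illustrated with a short worked example; the particle properties, air viscosity, and flux value below are assumptions (unit density, as stated in the abstract), not measured values from the study.

```python
# Stokes settling velocity and the flux-to-concentration conversion C = F / v_s.
def stokes_velocity(d_um, rho_p=1000.0, rho_air=1.2, mu=1.81e-5, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d_um (micrometres)."""
    d = d_um * 1e-6
    return (rho_p - rho_air) * g * d ** 2 / (18.0 * mu)

for d_um in (3, 10, 50):                  # size range covered in the study
    v = stokes_velocity(d_um)             # unit density (1000 kg/m3) assumed
    print(f"d = {d_um:2d} um  ->  v_s = {v:.2e} m/s")

flux = 1.0e-9                             # hypothetical deposition flux, kg m-2 s-1
print("C =", flux / stokes_velocity(10), "kg/m3")
```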

  20. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
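
    The undersampling argument can be illustrated numerically: with a sample of size n, empirical (plotting-position) quantiles cannot represent return periods much beyond n, whereas a fitted GPD can extrapolate. The sketch below uses a hypothetical heavy-tailed intensity distribution and scipy's maximum-likelihood GPD fit in place of the L-moment fit used in the study.

```python
# Empirical vs parametric (GPD) estimates of a high quantile at several sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_dist = stats.genpareto(c=0.1, scale=5.0)    # hypothetical rainfall intensities
q_true = true_dist.ppf(0.999)

for n in (50, 200, 1000):
    sample = true_dist.rvs(size=n, random_state=rng)
    q_emp = np.quantile(sample, 0.999)           # order-statistics estimate
    c, loc, scale = stats.genpareto.fit(sample, floc=0.0)
    q_gpd = stats.genpareto.ppf(0.999, c, loc=loc, scale=scale)
    print(f"n={n:5d}  empirical={q_emp:6.1f}  GPD fit={q_gpd:6.1f}  true={q_true:.1f}")
```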

  1. Mixed modeling and sample size calculations for identifying housekeeping genes.

    Science.gov (United States)

    Dai, Hongying; Charnigo, Richard; Vyhlidal, Carrie A; Jones, Bridgette L; Bhandary, Madhusudan

    2013-08-15

    Normalization of gene expression data using internal control genes that have biologically stable expression levels is an important process for analyzing reverse transcription polymerase chain reaction data. We propose a three-way linear mixed-effects model to select optimal housekeeping genes. The mixed-effects model can accommodate multiple continuous and/or categorical variables with sample random effects, gene fixed effects, systematic effects, and gene by systematic effect interactions. We propose using the intraclass correlation coefficient among gene expression levels as the stability measure to select housekeeping genes that have low within-sample variation. Global hypothesis testing is proposed to ensure that selected housekeeping genes are free of systematic effects or gene by systematic effect interactions. A gene combination with the highest lower bound of 95% confidence interval for intraclass correlation coefficient and no significant systematic effects is selected for normalization. Sample size calculation based on the estimation accuracy of the stability measure is offered to help practitioners design experiments to identify housekeeping genes. We compare our methods with geNorm and NormFinder by using three case studies. A free software package written in SAS (Cary, NC, U.S.A.) is available at http://d.web.umkc.edu/daih under software tab. Copyright © 2013 John Wiley & Sons, Ltd.

  2. In situ assembly states of (Na+,K+)-pump ATPase in human erythrocytes. Radiation target size analyses.

    Science.gov (United States)

    Hah, J; Goldinger, J M; Jung, C Y

    1985-11-15

    The in situ assembly state of the (Na+,K+)-pump ATPase of human erythrocytes was studied by applying the classical target theory to radiation inactivation data of the ouabain-sensitive sodium efflux and ATP hydrolysis. Erythrocytes and their extensively washed white ghosts were irradiated at -45 to -50 degrees C with an increasing dose of 1.5-MeV electron beam, and after thawing, the Na+-pump flux and/or enzyme activities were assayed. Each activity measured was reduced as a simple exponential function of radiation dose, from which a radiation sensitive mass (target size) was calculated. When intact cells were used, the target sizes for the pump and for the ATPase activities were equal and approximately 620,000 daltons. The target size for the ATPase activity was reduced to approximately 320,000 daltons if the cells were pretreated with digitoxigenin. When ghosts were used, the target size for the ATPase activity was again approximately 320,000 daltons. Our target size measurements together with other information available in literature suggest that (Na+,K+)-pump ATPase may exist in human erythrocytes either as a tetramer of alpha beta or as a dimer of alpha beta in tight association with other protein mass, probably certain glycolytic enzymes, and that this tetrameric or heterocomplex association is dissociable by digitoxigenin treatment or by extensive wash during ghost preparation.
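
    The target-theory calculation can be sketched as follows: the surviving activity is fitted as A(D) = A0·exp(−D/D37), and the radiation-sensitive mass is obtained from D37 through an empirical calibration. The doses, surviving fractions, and the calibration constant (a commonly quoted empirical value with D37 in rad, ignoring temperature corrections) are illustrative assumptions, not data from the study.

```python
# Classical target-size analysis from radiation inactivation data.
import numpy as np

dose_mrad = np.array([0.0, 0.5, 1.0, 2.0, 3.0])        # hypothetical doses (Mrad)
activity = np.array([1.00, 0.61, 0.37, 0.135, 0.050])   # hypothetical surviving fraction

# linear fit of ln(activity) vs dose gives slope = -1/D37
slope = np.polyfit(dose_mrad, np.log(activity), 1)[0]
d37_rad = -1.0 / slope * 1.0e6                           # Mrad -> rad
target_daltons = 6.4e11 / d37_rad                        # assumed empirical calibration
print(f"D37 ≈ {d37_rad:.3g} rad, target size ≈ {target_daltons:,.0f} Da")
```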

  3. What about N? A methodological study of sample-size reporting in focus group studies

    Science.gov (United States)

    2011-01-01

    Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advise on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and

  4. What about N? A methodological study of sample-size reporting in focus group studies.

    Science.gov (United States)

    Carlsen, Benedicte; Glenton, Claire

    2011-03-11

    Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advise on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these

  5. What about N? A methodological study of sample-size reporting in focus group studies

    Directory of Open Access Journals (Sweden)

    Glenton Claire

    2011-03-01

    Full Text Available Abstract Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method

  6. Determination of reference limits: statistical concepts and tools for sample size calculation.

    Science.gov (United States)

    Wellek, Stefan; Lackner, Karl J; Jennen-Steinmetz, Christine; Reinhard, Iris; Hoffmann, Isabell; Blettner, Maria

    2014-12-01

    Reference limits are estimators for 'extreme' percentiles of the distribution of a quantitative diagnostic marker in the healthy population. In most cases, interest will be in the 90% or 95% reference intervals. The standard parametric method of determining reference limits consists of computing quantities of the form X̅±c·S. The proportion of covered values in the underlying population coincides with the specificity obtained when a measurement value falling outside the corresponding reference region is classified as diagnostically suspect. Nonparametrically, reference limits are estimated by means of so-called order statistics. In both approaches, the precision of the estimate depends on the sample size. We present computational procedures for calculating minimally required numbers of subjects to be enrolled in a reference study. The much more sophisticated concept of reference bands replacing statistical reference intervals in case of age-dependent diagnostic markers is also discussed.
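
    The dependence of the precision of a parametric reference limit on sample size can be sketched with the normal-theory approximation Var(x̄ + z·s) ≈ σ²(1/n + z²/(2n)); the precision criterion used below is an assumption for illustration and not the procedure proposed in the paper.

```python
# Smallest n for which the 90% CI of an upper 95% reference limit (x-bar + 1.96*s)
# spans less than 10% of the width of the reference interval (3.92*sigma).
import math

def limit_se(n, z=1.96, sigma=1.0):
    # large-sample standard error of the estimated reference limit
    return sigma * math.sqrt(1.0 / n + z * z / (2.0 * n))

target = 0.10 * 3.92          # allowed 90% CI width of the limit, in sigma units
n = 10
while 2 * 1.645 * limit_se(n) > target:
    n += 1
print("required n ≈", n)
```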

  7. Realistic weight perception and body size assessment in a racially diverse community sample of dieters.

    Science.gov (United States)

    Cachelin, F M; Striegel-Moore, R H; Elder, K A

    1998-01-01

    Recently, a shift in obesity treatment away from emphasizing ideal weight loss goals to establishing realistic weight loss goals has been proposed; yet, what constitutes "realistic" weight loss for different populations is not clear. This study examined notions of realistic shape and weight as well as body size assessment in a large community-based sample of African-American, Asian, Hispanic, and white men and women. Participants were 1893 survey respondents who were all dieters and primarily overweight. Groups were compared on various variables of body image assessment using silhouette ratings. No significant race differences were found in silhouette ratings, nor in perceptions of realistic shape or reasonable weight loss. Realistic shape and weight ratings by both women and men were smaller than current shape and weight but larger than ideal shape and weight ratings. Compared with male dieters, female dieters considered greater weight loss to be realistic. Implications of the findings for the treatment of obesity are discussed.

  8. Some basic aspects of statistical methods and sample size determination in health science research.

    Science.gov (United States)

    Binu, V S; Mayya, Shreemathi S; Dhar, Murali

    2014-04-01

    A health science researcher may sometimes wonder "why are statistical methods so important in research?" The simple answer is that statistical methods are used throughout a study, including planning, designing, collecting data, analyzing, drawing meaningful interpretations, and reporting the findings. Hence, it is important that a researcher knows the concepts of at least the basic statistical methods used at various stages of a research study. This helps the researcher conduct an appropriately well-designed study leading to valid and reliable results that can be generalized to the population. A well-designed study possesses fewer biases, which in turn gives precise, valid and reliable results. There are many statistical methods and tests that are used at various stages of a research. In this communication, we discuss the overall importance of statistical considerations in medical research with the main emphasis on estimating minimum sample size for different study objectives.

  9. Sample size and power for a stratified doubly randomized preference design.

    Science.gov (United States)

    Cameron, Briana; Esserman, Denise A

    2016-11-21

    The two-stage (or doubly) randomized preference trial design is an important tool for researchers seeking to disentangle the role of patient treatment preference on treatment response through estimation of selection and preference effects. Up until now, these designs have been limited by their assumption of equal preference rates and effect sizes across the entire study population. We propose a stratified two-stage randomized trial design that addresses this limitation. We begin by deriving stratified test statistics for the treatment, preference, and selection effects. Next, we develop a sample size formula for the number of patients required to detect each effect. The properties of the model and the efficiency of the design are established using a series of simulation studies. We demonstrate the applicability of the design using a study of Hepatitis C treatment modality, specialty clinic versus mobile medical clinic. In this example, a stratified preference design (stratified by alcohol/drug use) may more closely capture the true distribution of patient preferences and allow for a more efficient design than a design which ignores these differences (unstratified version). © The Author(s) 2016.

  10. Estimating effective population size from temporally spaced samples with a novel, efficient maximum-likelihood algorithm.

    Science.gov (United States)

    Hui, Tin-Yu J; Burt, Austin

    2015-05-01

    The effective population size N_e is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating N_e have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator of contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying N_e is large. This article tries to work around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of N_e to several million, hence allowing the estimation of larger N_e. Finally, we demonstrate how this algorithm can cope with nonconstant N_e scenarios and be used as a likelihood-ratio test to test for the equality of N_e throughout the sampling horizon. An R package "NB" is now available for download to implement the method described in this article. Copyright © 2015 by the Genetics Society of America.

  11. Sediment grain size estimation using airborne remote sensing, field sampling, and robust statistic.

    Science.gov (United States)

    Castillo, Elena; Pereda, Raúl; Luis, Julio Manuel de; Medina, Raúl; Viguri, Javier

    2011-10-01

    Remote sensing has been used since the 1980s to study parameters in relation to coastal zones. It was not until the beginning of the twenty-first century that it became possible to acquire imagery with good temporal and spectral resolution. This has encouraged the development of reliable imagery acquisition systems that consider remote sensing as a water management tool. Nevertheless, the spatial resolution that it provides is not well adapted to coastal studies. This article introduces a new methodology for estimating the most fundamental physical property of intertidal sediment, the grain size, in coastal zones. The study combines hyperspectral information (CASI-2 flight), robust statistics, and simultaneous field work (chemical and radiometric sampling), performed over Santander Bay, Spain. Field data acquisition was used to build a spectral library in order to study different atmospheric correction algorithms for CASI-2 data and to develop algorithms to estimate grain size in an estuary. Two robust estimation techniques (MVE and MCD multivariate M-estimators of location and scale) were applied to CASI-2 imagery, and the results showed that robust adjustments give acceptable and meaningful algorithms. These adjustments gave the following estimated R² values: 0.93 for the sandy loam contribution, 0.94 for the silty loam, and 0.67 for the clay loam. Robust statistics are a powerful tool for large datasets.

  12. Microscale sample deposition onto hydrophobic target plates for trace level detection of neuropeptides in brain tissue by MALDI-MS.

    Science.gov (United States)

    Wei, Hui; Dean, Stacey L; Parkin, Mark C; Nolkrantz, Kerstin; O'Callaghan, James P; Kennedy, Robert T

    2005-10-01

    A sample preparation method that combines a modified target plate with a nanoscale reversed-phase column (nanocolumn) was developed for detection of neuropeptides by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). A gold-coated MALDI plate was modified with an octadecanethiol (ODT) self-assembled monolayer to create a hydrophobic surface that could concentrate peptide samples into a approximately 200-500-microm diameter spot. The spot sizes generated were comparable to those obtained for a substrate patterned with 200-microm hydrophilic spots on a hydrophobic substrate. The sample spots on the ODT-coated plate were 100-fold smaller than those formed on an unmodified gold plate with a 1-microl sample and generated 10 to 50 times higher mass sensitivity for peptide standards by MALDI-TOF MS. When the sample was deposited on an ODT-modified plate from a nanocolumn, the detection limit for peptides was as low as 20 pM for 5-microl samples corresponding to 80 amol deposited. This technique was used to analyze extracts of microwave-fixed tissue from rat brain striatum. Ninety-eight putative peptides were detected including several that had masses matching neuropeptides expected in this brain region such as substance P, rimorphin, and neurotensin. Twenty-three peptides had masses that matched peaks detected by capillary liquid chromatography with electrospray ionization MS. Copyright (c) 2005 John Wiley & Sons, Ltd.

  13. Sample size calculation based on exact test for assessing differential expression analysis in RNA-seq data.

    Science.gov (United States)

    Li, Chung-I; Su, Pei-Fang; Shyr, Yu

    2013-12-06

    Sample size calculation is an important issue in the experimental design of biomedical research. For RNA-seq experiments, the sample size calculation method based on the Poisson model has been proposed; however, when there are biological replicates, RNA-seq data could exhibit variation significantly greater than the mean (i.e. over-dispersion). The Poisson model cannot appropriately model the over-dispersion, and in such cases, the negative binomial model has been used as a natural extension of the Poisson model. Because the field currently lacks a sample size calculation method based on the negative binomial model for assessing differential expression analysis of RNA-seq data, we propose a method to calculate the sample size. We propose a sample size calculation method based on the exact test for assessing differential expression analysis of RNA-seq data. The proposed sample size calculation method is straightforward and not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size method are presented; the results indicate our method works well, with achievement of desired power.

  14. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50 on the eigenvalues and eigenvectors resulting from principal component analysis (PCA. For each sample size, 100 bootstrap samples were drawn from environmental data matrix pertaining to water quality variables (p = 22 of a small data set comprising of 55 samples (stations from where water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of first 6 eigenvectors and first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis using Ward’s method that does not require any stringent distributional assumptions.
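
    The bootstrap experiment can be sketched as follows; a synthetic 55 × 22 matrix stands in for the water-quality data, and only eigenvalue stability is summarized.

```python
# Bootstrap assessment of PCA eigenvalue stability at different sample sizes.
import numpy as np

rng = np.random.default_rng(7)
full = rng.normal(size=(55, 22)) @ rng.normal(size=(22, 22))   # 55 stations x 22 variables

def eigvals(data, k=10):
    z = (data - data.mean(0)) / data.std(0, ddof=1)            # correlation-matrix PCA
    ev = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]     # descending order
    return ev[:k]

for n in (20, 30, 40, 50):
    boot = np.array([eigvals(full[rng.integers(0, 55, n)]) for _ in range(100)])
    print(f"n={n}: first eigenvalue {boot[:, 0].mean():.2f} ± {boot[:, 0].std():.2f}")
```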

  15. SAMPLE SIZE DETERMINATION IN CLINICAL TRIALS BASED ON APPROXIMATION OF VARIANCE ESTIMATED FROM LIMITED PRIMARY OR PILOT STUDIES

    Directory of Open Access Journals (Sweden)

    B SOLEYMANI

    2001-06-01

    Full Text Available In many cases the estimation of variance used to determine sample size in clinical trials derives from limited primary or pilot studies in which the number of samples is small. Since in such cases the estimate of the variance may be far from the real variance, the resulting sample size may be smaller or larger than what is really needed. In this article an attempt has been made to give a solution to this problem in the case of the normal distribution. Based on the distribution of (n-1)S²/σ², which is chi-square for normal variables, an appropriate estimate of the variance is determined and used to calculate the sample size. Also, the total probability of ensuring the specified precision and power has been obtained. In the method presented here, the probability of attaining the desired precision and power is higher than with the usual method, but the results of the two methods converge as the sample size of the primary study increases.
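
    The idea can be sketched as follows: the chi-square distribution of (n₀−1)S²/σ² gives an upper confidence bound on the variance, which is then inserted into an ordinary sample size formula. The two-sample mean-comparison formula, pilot values, and confidence level below are illustrative assumptions rather than the exact derivation of the article.

```python
# Sample size with a chi-square upper confidence bound on a pilot-study variance.
from scipy import stats

def n_per_group(sigma2, delta, alpha=0.05, power=0.80):
    # standard two-sample comparison of means, equal variances
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return 2 * z ** 2 * sigma2 / delta ** 2

n0, s2 = 15, 4.0                  # pilot size and pilot sample variance (assumed)
delta = 1.5                       # clinically relevant difference (assumed)
gamma = 0.20                      # 80% upper confidence bound on sigma^2
sigma2_upper = (n0 - 1) * s2 / stats.chi2.ppf(gamma, df=n0 - 1)

print("naive n per group   :", round(n_per_group(s2, delta)))
print("variance-adjusted n :", round(n_per_group(sigma2_upper, delta)))
```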

  16. Large-scale targeted metagenomics analysis of bacterial ecological changes in 88 kimchi samples during fermentation.

    Science.gov (United States)

    Lee, Moeun; Song, Jung Hee; Jung, Min Young; Lee, Se Hee; Chang, Ji Yoon

    2017-09-01

    The microbial communities in kimchi vary widely, but the precise effects of differences in region of origin, ingredients, and preparation method on the microbiota are unclear. We analyzed the bacterial community composition of household (n = 69) and commercial (n = 19) kimchi samples obtained from six Korean provinces between April and August 2015. Samples were analyzed by barcoded pyrosequencing targeting the V1-V3 region of the 16S ribosomal RNA gene. The initial pH of the kimchi samples was 5.00-6.39, and the salt concentration was 1.72-4.42%. Except for sampling locality, all categorical variables, i.e., salt concentration, major ingredient, fermentation period, sampling time, and manufacturing process, influenced the bacterial community composition. Particularly, samples were highly clustered by sampling time and salt concentration in non-metric multidimensional scaling plots and an analysis of similarity. These results indicated that the microbial community differed according to fermentation conditions such as salt concentration, major ingredient, fermentation period, and sampling time. Furthermore, fermentation properties, including pH, acidity, salt concentration, and microbial abundance differed between kimchi samples from household and commercial sources. Analyses of changes in bacterial ecology during fermentation will improve our understanding of the biological properties of kimchi, as well as the relationships between these properties and the microbiota of kimchi. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

    Directory of Open Access Journals (Sweden)

    You-xin Shen

    Full Text Available A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a larger number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land and grass land. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% woody plant species being detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increased number of species vs. sampled areas confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, but the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m² minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. Increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.

  18. Particle size distribution and chemical composition of total mixed rations for dairy cattle: water addition and feed sampling effects.

    Science.gov (United States)

    Arzola-Alvarez, C; Bocanegra-Viezca, J A; Murphy, M R; Salinas-Chavira, J; Corral-Luna, A; Romanos, A; Ruíz-Barrera, O; Rodríguez-Muela, C

    2010-09-01

    Four dairy farms were used to determine the effects of water addition to diets and sample collection location on the particle size distribution and chemical composition of total mixed rations (TMR). Samples were collected weekly from the mixing wagon and from 3 locations in the feed bunk (top, middle, and bottom) for 5 mo (April, May, July, August, and October). Samples were partially dried to determine the effect of moisture on particle size distribution. Particle size distribution was measured using the Penn State Particle Size Separator. Crude protein, neutral detergent fiber, and acid detergent fiber contents were also analyzed. Particle fractions of >19, 19 to 8, 8 to 1.18, and <1.18 mm were determined; the percentage of particles >19 mm was greater than recommended for TMR, according to the guidelines of Cooperative Extension of Pennsylvania State University. The particle size distribution in April differed from that in October, but intermediate months (May, July, and August) had similar particle size distributions. Samples from the bottom of the feed bunk had the highest percentage of particles retained on the 19-mm sieve. Samples from the top and middle of the feed bunk were similar to that from the mixing wagon. Higher percentages of particles were retained on >19, 19 to 8, and 8 to 1.18 mm sieves for wet than dried samples. The reverse was found for particles passing the 1.18-mm sieve. Mean particle size was higher for wet than dried samples. The crude protein, neutral detergent fiber, and acid detergent fiber contents of TMR varied with month of sampling (18-21, 40-57, and 21-34%, respectively) but were within recommended ranges for high-yielding dairy cows. Analyses of TMR particle size distributions are useful for proper feed bunk management and formulation of diets that maintain rumen function and maximize milk production and quality. Water addition may help reduce dust associated with feeding TMR. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. Bayesian adaptive determination of the sample size required to assure acceptably low adverse event risk.

    Science.gov (United States)

    Lawrence Gould, A; Zhang, Xiaohua Douglas

    2014-03-15

    An emerging concern with new therapeutic agents, especially treatments for type 2 diabetes, a prevalent condition that increases an individual's risk of heart attack or stroke, is the likelihood of adverse events, especially cardiovascular events, that the new agents may cause. These concerns have led to regulatory requirements for demonstrating that a new agent increases the risk of an adverse event relative to a control by no more than, say, 30% or 80% with high (e.g., 97.5%) confidence. We describe a Bayesian adaptive procedure for determining if the sample size for a development program needs to be increased and, if necessary, by how much, to provide the required assurance of limited risk. The decision is based on the predictive likelihood of a sufficiently high posterior probability that the relative risk is no more than a specified bound. Allowance can be made for between-center as well as within-center variability to accommodate large-scale developmental programs, and design alternatives (e.g., many small centers, few large centers) for obtaining additional data if needed can be explored. Binomial or Poisson likelihoods can be used, and center-level covariates can be accommodated. The predictive likelihoods are explored under various conditions to assess the statistical properties of the method. Copyright © 2013 John Wiley & Sons, Ltd.
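
    The underlying decision criterion, the posterior probability that the relative risk stays below a margin such as 1.3 or 1.8, can be sketched in a simple single-level (non-adaptive, non-hierarchical) form; the priors, event counts, and margins below are assumptions for illustration.

```python
# Posterior probability that the adverse-event relative risk is below a margin.
import numpy as np

rng = np.random.default_rng(0)
events_t, n_t = 42, 1500          # hypothetical adverse events / patients, treatment
events_c, n_c = 35, 1500          # hypothetical adverse events / patients, control

# conjugate Beta(0.5, 0.5) priors on the event probabilities
p_t = rng.beta(0.5 + events_t, 0.5 + n_t - events_t, size=200_000)
p_c = rng.beta(0.5 + events_c, 0.5 + n_c - events_c, size=200_000)
rr = p_t / p_c

for margin in (1.3, 1.8):
    print(f"P(RR < {margin}) = {np.mean(rr < margin):.3f}")
```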

  20. The effect of noise and sampling size on vorticity measurements in rotating fluids

    Science.gov (United States)

    Wong, Kelvin K. L.; Kelso, Richard M.; Mazumdar, Jagannath; Abbott, Derek

    2008-11-01

    This paper describes a new technique for presenting information based on given flow images. Using a multistep first order differentiation technique, we are able to map in two dimensions, vorticity of fluid within a region of investigation. We can then present the distribution of this property in space by means of a color intensity map. In particular, the state of fluid rotation can be displayed using maps of vorticity flow values. The framework that is implemented can also be used to quantify the vortices using statistical properties which can be derived from such vorticity flow maps. To test our methodology, we have devised artificial vortical flow fields using an analytical formulation of a single vortex. Reliability of vorticity measurement from our results shows that the size of flow vector sampling and noise in flow field affect the generation of vorticity maps. Based on histograms of these maps, we are able to establish an optimised configuration that computes vorticity fields to approximate the ideal vortex statistically. The novel concept outlined in this study can be used to reduce fluctuations of noise in a vorticity calculation based on imperfect flow information without excessive loss of its features, and thereby improves the effectiveness of flow
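
    A vorticity map of the kind described can be computed from a two-dimensional velocity field with finite differences; the sketch below uses numpy.gradient (central differences) in place of the paper's multistep first-order scheme, with a noisy solid-body-rotation field as test data.

```python
# Vorticity (dv/dx - du/dy) from a 2D velocity field with added measurement noise.
import numpy as np

n, omega_true = 64, 2.0
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="xy")

# solid-body rotation: analytical vorticity equals omega_true everywhere
U = -0.5 * omega_true * Y
V = 0.5 * omega_true * X
U += np.random.default_rng(0).normal(0, 0.01, U.shape)   # measurement noise
V += np.random.default_rng(1).normal(0, 0.01, V.shape)

dx, dy = x[1] - x[0], y[1] - y[0]
dVdx = np.gradient(V, dx, axis=1)
dUdy = np.gradient(U, dy, axis=0)
vorticity = dVdx - dUdy

print("mean vorticity = %.3f (true %.1f), std = %.3f"
      % (vorticity.mean(), omega_true, vorticity.std()))
```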

  1. Sampling design and required sample size for evaluating contamination levels of 137Cs in Japanese fir needles in a mixed deciduous forest stand in Fukushima, Japan.

    Science.gov (United States)

    Oba, Yurika; Yamada, Toshihiro

    2017-05-01

    We estimated the sample size (the number of samples) required to evaluate the concentration of radiocesium (137Cs) in Japanese fir (Abies firma Sieb. & Zucc.), 5 years after the outbreak of the Fukushima Daiichi Nuclear Power Plant accident. We investigated the spatial structure of the contamination levels in this species growing in a mixed deciduous broadleaf and evergreen coniferous forest stand. We sampled 40 saplings with a tree height of 150 cm-250 cm in a Fukushima forest community. The results showed that: (1) there was no correlation between the 137Cs concentration in needles and soil, and (2) the difference in the spatial distribution pattern of 137Cs concentration between needles and soil suggest that the contribution of root uptake to 137Cs in new needles of this species may be minor in the 5 years after the radionuclides were released into the atmosphere. The concentration of 137Cs in needles showed a strong positive spatial autocorrelation in the distance class from 0 to 2.5 m, suggesting that the statistical analysis of data should consider spatial autocorrelation in the case of an assessment of the radioactive contamination of forest trees. According to our sample size analysis, a sample size of seven trees was required to determine the mean contamination level within an error in the means of no more than 10%. This required sample size may be feasible for most sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
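
    The required-sample-size logic can be reproduced with the usual relative-error formula n ≥ (t·CV/E)², solved iteratively because t depends on n; the coefficient of variation below is an assumed value, not the one estimated in the study.

```python
# Smallest n so the 95% CI half-width of the mean is at most 10% of the mean.
from scipy import stats

def required_n(cv, rel_error=0.10, conf=0.95):
    n = 2
    while True:
        t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
        if n >= (t * cv / rel_error) ** 2:
            return n
        n += 1

print(required_n(cv=0.10))   # assumed CV of 10% among sampled trees
```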

  2. Estimating everyday portion size using a 'method of constant stimuli': in a student sample, portion size is predicted by gender, dietary behaviour, and hunger, but not BMI.

    Science.gov (United States)

    Brunstrom, Jeffrey M; Rogers, Peter J; Pothos, Emmanuel M; Calitri, Raff; Tapper, Katy

    2008-09-01

    This paper (i) explores the proposition that body weight is associated with large portion sizes and (ii) introduces a new technique for measuring everyday portion size. In our paradigm, the participant is shown a picture of a food portion and is asked to indicate whether it is larger or smaller than their usual portion. After responding to a range of different portions an estimate of everyday portion size is calculated using probit analysis. Importantly, this estimate is likely to be robust because it is based on many responses. First-year undergraduate students (N=151) completed our procedure for 12 commonly consumed foods. As expected, portion sizes were predicted by gender and by a measure of dieting and dietary restraint. Furthermore, consistent with reports of hungry supermarket shoppers, portion-size estimates tended to be higher in hungry individuals. However, we found no evidence for a relationship between BMI and portion size in any of the test foods. We consider reasons why this finding should be anticipated. In particular, we suggest that the difference in total energy expenditure of individuals with a higher and lower BMI is too small to be detected as a concomitant difference in portion size (at least in our sample).
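
    The 'method of constant stimuli' estimate can be sketched by fitting a cumulative-normal psychometric function to larger/smaller judgements and reading off its 50% point; the synthetic responses and the least-squares fit below stand in for the per-participant probit analysis used in the study.

```python
# Estimate an everyday portion size from "larger than my usual portion" responses.
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
portions = np.linspace(100, 500, 9)          # grams shown to the participant
true_usual, true_sd = 280.0, 60.0            # simulated "true" usual portion
n_trials = 20                                # presentations per portion size

p_larger = np.array([rng.binomial(n_trials, stats.norm.cdf(x, true_usual, true_sd)) / n_trials
                     for x in portions])

popt, _ = curve_fit(lambda x, mu, sd: stats.norm.cdf(x, mu, sd),
                    portions, p_larger, p0=[300.0, 50.0])
print(f"estimated usual portion ≈ {popt[0]:.0f} g (simulated truth {true_usual:.0f} g)")
```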

  3. Investigation of the effects of tumor size and type of radionuclide on tumor curability in targeted radiotherapy

    Directory of Open Access Journals (Sweden)

    Hassan Ranjbar

    2015-07-01

    Full Text Available Background: Targeted radiotherapy is one of the important methods of radiotherapy that involves the use of beta-emitting radionuclides to deliver a dose of radiation to tumor cells. Important features of this method are the tumor size and the finite range of the beta particles emitted upon radionuclide disintegration, both of which have significant effects on the curability of tumors. Material and Methods: Monte Carlo simulations and mathematical models have been used to investigate the relationship of curability to tumor size for tumors treated with targeted 131I and 90Y. The model assumed that radionuclides are distributed uniformly throughout tumors. Results: The results show that there is an optimal tumor size for cure. For any given cumulated activity, cure probability is greatest for tumors whose diameter is close to the optimum value. There is a maximum value of curability that occurs at a diameter of approximately 3.5 mm for 131I. For 90Y, maximum curability occurs at a tumor diameter of approximately 3.5 cm. Tumors smaller than the optimal size are less vulnerable to irradiation from radionuclides because a significant proportion of the disintegration energy escapes and is deposited outside the tumor volume. Tumors larger than the optimal size are less curable because of their greater clonogenic cell number. Conclusion: With single radionuclide targeted radiotherapy, there is an optimal tumor size for tumor cure. It is suggested that single agent targeted radiotherapy should not be used for treatment of disseminated disease when multiple tumors of differing size may be present. The use of several radionuclides concurrently would be more effective than reliance on a single radionuclide. This approach of using a combination of radionuclides with complementary properties could provide new options and improve the efficiency of tumor therapy.

  4. Systematic procedure for generating operational policies to achieve target crystal size distribution (CSD) in batch cooling crystallization

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Singh, Ravendra; Sin, Gürkan

    2011-01-01

    A systematic procedure to achieve a target crystal size distribution (CSD) under generated operational policies in batch cooling crystallization is presented. An analytical CSD estimator has been employed in the systematic procedure to generate the necessary operational policies to achieve the ta...

  5. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    The proportionator is a novel and radically different approach to sampling with microscopes based on well-known statistical theory (probability proportional to size - PPS sampling). It uses automatic image analysis, with a large range of options, to assign to every field of view in the section a ...

  6. Field sampling of loose erodible material: A new system to consider the full particle-size spectrum

    Science.gov (United States)

    Klose, Martina; Gill, Thomas E.; Webb, Nicholas P.; Van Zee, Justin W.

    2017-10-01

    A new system is presented to sample and enable the characterization of loose erodible material (LEM) present on a soil surface, which may be susceptible to entrainment by wind. The system uses a modified MWAC (Modified Wilson and Cooke) sediment sampler connected to a corded hand-held vacuum cleaner. Performance and accuracy of the system were tested in the laboratory using five reference soil samples with different textures. Sampling was most effective for sandy soils, while effectiveness decreases were found for soils with high silt and clay contents in dry dispersion. This effectiveness decrease can be attributed to loose silt and clay-sized particles and particle aggregates adhering to and clogging a filter attached to the MWAC outlet. Overall, the system was found to be effective in collecting sediment for most soil textures, and theoretical interpretation of the measured flow speeds suggests that LEM can be sampled for a wide range of particle sizes, including dust particles. Particle-size analysis revealed that the new system is able to accurately capture the particle-size distribution (PSD) of a given sample. Only small discrepancies in the cumulative PSDs were found between samples before and after vacuuming for all test soils. Despite limitations of the system, it is an advance toward sampling the full particle-size spectrum of loose sediment available for entrainment, with the overall goal to better understand the mechanisms of dust emission and their variability.

  7. Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.

    Science.gov (United States)

    Power, Stephanie M; Matic, Damir B

    2013-03-01

    Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity remains unknown. The purpose of this study is to determine the number of consecutive cases that represent average outcomes. Secondary objectives are to determine if outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.

  8. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges on [0,1] has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs, however, unravels the closure properties of this approach, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
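
    A minimal sketch of the interval operators described above, using the classical Fréchet bounds for conjunction and disjunction under unknown dependence. The function and variable names are illustrative and not taken from the cited software.

    ```python
    # Minimal sketch of interval probability arithmetic using the classical
    # Fréchet bounds, as described in the abstract. Names are illustrative.
    from typing import Tuple

    Interval = Tuple[float, float]  # (lower, upper) bounds on a probability

    def p_and(a: Interval, b: Interval) -> Interval:
        """Fréchet bounds on P(A and B) with unknown dependence."""
        return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

    def p_or(a: Interval, b: Interval) -> Interval:
        """Fréchet bounds on P(A or B) with unknown dependence."""
        return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

    def p_not(a: Interval) -> Interval:
        """Complement of an interval probability."""
        return (1.0 - a[1], 1.0 - a[0])

    # Example: imprecise estimates for two subevents F and G.
    F, G = (0.2, 0.4), (0.5, 0.7)
    print("P(F and G) in", p_and(F, G))   # (0.0, 0.4)
    print("P(F or G)  in", p_or(F, G))    # (0.5, 1.0)
    print("P(not F)   in", p_not(F))      # (0.6, 0.8)
    ```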

  9. Grain size of loess and paleosol samples: what are we measuring?

    Science.gov (United States)

    Varga, György; Kovács, János; Szalai, Zoltán; Újvári, Gábor

    2017-04-01

    Particle size falling into a particularly narrow range is among the most important properties of windblown mineral dust deposits. Therefore, various aspects of aeolian sedimentation and post-depositional alterations can be reconstructed only from precise grain size data. The present study is aimed at (1) reviewing grain size data obtained from different measurements, (2) discussing the major reasons for disagreements between data obtained by frequently applied particle sizing techniques, and (3) assessing the importance of particle shape in particle sizing. Grain size data of terrestrial aeolian dust deposits (loess and paleosol) were determined by laser scattering instruments (Fritsch Analysette 22 Microtec Plus, Horiba Partica La-950 v2 and Malvern Mastersizer 3000 with a Hydro Lv unit), while particle size and shape distributions were acquired by Malvern Morphologi G3-ID. Laser scattering results reveal that the optical parameter settings of the measurements have significant effects on the grain size distributions, especially for the fine-grained fractions. The shape data, in turn, are based on two-dimensional projections of the particles captured by the instrument's camera. However, such a projection is only one outcome of infinite possible projections of a three-dimensional object and it cannot be regarded as a representative one. The third (height) dimension of the particles remains unknown, so the volume-based weightings are fairly dubious in the case of platy particles. Support of the National Research, Development and Innovation Office (Hungary) under contract NKFI 120620 is gratefully acknowledged. It was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.

  10. Applying Individual Tree Structure From Lidar to Address the Sensitivity of Allometric Equations to Small Sample Sizes.

    Science.gov (United States)

    Duncanson, L.; Dubayah, R.

    2015-12-01

    Lidar remote sensing is widely applied for mapping forest carbon stocks, and technological advances have improved our ability to capture structural details from forests, even resolving individual trees. Despite these advancements, the accuracy of forest aboveground biomass models remains limited by the quality of field estimates of biomass. The accuracies of field estimates are inherently dependent on the accuracy of the allometric equations used to relate measurable attributes to biomass. These equations are calibrated with relatively small samples of often spatially clustered trees. This research focuses on one of many issues involving allometric equations - understanding how sensitive allometric parameters are to the sample sizes used to fit them. We capitalize on recent advances in lidar remote sensing to extract individual tree structural information from six high-resolution airborne lidar datasets in the United States. We remotely measure millions of tree heights and crown radii, and fit allometric equations to the relationship between tree height and radius at a 'population' level, in each site. We then extract samples from our tree database, and build allometries on these smaller samples of trees, with varying sample sizes. We show that for the allometric relationship between tree height and crown radius, small sample sizes produce biased allometric equations that overestimate height for a given crown radius. We extend this analysis using translations from the literature to address potential implications for biomass, showing that site-level biomass may be greatly overestimated when applying allometric equations developed with the typically small sample sizes used in popular allometric equations for biomass.
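
    The subsampling experiment described above can be sketched as follows: fit a power-law allometry between crown radius and height on a large synthetic "population" and on repeated small subsamples, and examine how the fitted parameters vary with sample size. The synthetic data-generating values are assumptions for illustration only.

    ```python
    # Hedged sketch of the subsampling idea: fit height = a * radius^b on a
    # synthetic "population" and on small subsamples, then compare fits.
    # All data-generating values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "population" of trees: crown radius (m) and height (m).
    n_pop = 100_000
    radius = rng.lognormal(mean=1.0, sigma=0.4, size=n_pop)
    height = 4.0 * radius ** 0.8 * rng.lognormal(0.0, 0.15, size=n_pop)

    def fit_allometry(r: np.ndarray, h: np.ndarray) -> tuple[float, float]:
        """Ordinary least squares fit of log(h) = log(a) + b*log(r)."""
        b, log_a = np.polyfit(np.log(r), np.log(h), deg=1)
        return np.exp(log_a), b

    a_pop, b_pop = fit_allometry(radius, height)
    print(f"population fit: a = {a_pop:.2f}, b = {b_pop:.2f}")

    # Refit on small subsamples and look at the spread of the exponent.
    for n in (20, 50, 200, 2000):
        exponents = []
        for _ in range(200):
            idx = rng.choice(n_pop, size=n, replace=False)
            _, b = fit_allometry(radius[idx], height[idx])
            exponents.append(b)
        exponents = np.array(exponents)
        print(f"n = {n:4d}: exponent b = {exponents.mean():.2f} ± {exponents.std():.2f}")
    ```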

  11. Limited sampling strategy and target attainment analysis for levofloxacin in patients with tuberculosis.

    Science.gov (United States)

    Alsultan, Abdullah; An, Guohua; Peloquin, Charles A

    2015-07-01

    There is an urgent need to improve and shorten the treatment of tuberculosis (TB) and multidrug-resistant tuberculosis (MDR-TB). Levofloxacin, a newer fluoroquinolone, has potent activity against TB both in vitro and in vivo. Levofloxacin dosing can be optimized to improve the treatment of both TB and MDR-TB. Levofloxacin efficacy is linked primarily to the ratio of the area under the concentration-time curve for the free fraction of drug (fAUC) to the MIC. Since obtaining a full concentration-time profile is not feasible in the clinic, we developed a limited sampling strategy (LSS) to estimate the AUC. We also utilized Monte Carlo simulations to evaluate the dosing of levofloxacin. Pharmacokinetic data were obtained from 10 Brazilian TB patients. The pharmacokinetic data were fitted with a one-compartment model. LSSs were developed using two methods: linear regression and Bayesian approaches. Several LSSs predicted levofloxacin AUC with good accuracy and precision. The most accurate were the method using two samples collected at 4 and 6 h (R(2) = 0.91 using linear regression and 0.97 using Bayesian approaches) and that using samples collected at 2 and 6 h (R(2) = 0.90 using linear regression and 0.96 using Bayesian approaches). The 2-and-6-h approach also provides a good estimate of the maximum concentration of the drug in serum (Cmax). Our target attainment analysis showed that higher doses (17 to 20 mg/kg of body weight) of levofloxacin might be needed to improve its activity. Doses in the range of 17 to 20 mg/kg showed good target attainment for MICs from 0.25 to 0.50. At an MIC of 2, poor target attainment was observed across all doses. This LSS for levofloxacin can be used for therapeutic drug monitoring and for future pharmacokinetic/pharmacodynamic studies. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
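
    The regression-based limited sampling idea can be sketched as below: regress the full AUC on concentrations drawn at two sampling times (here 2 h and 6 h). The patients are simulated from an assumed one-compartment oral model with illustrative parameter values, not the study's estimates.

    ```python
    # Minimal sketch of a limited sampling strategy (LSS): regress the full
    # AUC on concentrations at 2 h and 6 h post-dose. Patients are simulated
    # from an assumed one-compartment oral model; all values are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n_patients = 200
    dose = 750.0  # mg, assumed

    # Per-patient log-normally distributed PK parameters (assumed values).
    ka = rng.lognormal(np.log(1.5), 0.3, n_patients)    # absorption rate, 1/h
    ke = rng.lognormal(np.log(0.10), 0.25, n_patients)  # elimination rate, 1/h
    V  = rng.lognormal(np.log(90.0), 0.25, n_patients)  # volume, L

    def conc(t, ka, ke, V):
        """One-compartment oral model (bioavailability assumed to be 1)."""
        return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    auc_full = dose / (V * ke)  # true AUC(0-inf) under this model
    c2, c6 = conc(2.0, ka, ke, V), conc(6.0, ka, ke, V)

    # Multiple linear regression: AUC ≈ b0 + b1*C2 + b2*C6
    X = np.column_stack([np.ones(n_patients), c2, c6])
    coef, *_ = np.linalg.lstsq(X, auc_full, rcond=None)
    pred = X @ coef
    r2 = 1 - np.sum((auc_full - pred) ** 2) / np.sum((auc_full - auc_full.mean()) ** 2)
    print("coefficients:", np.round(coef, 2), " R^2 =", round(r2, 3))
    ```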

  12. Sample size and number of outcome measures of veterinary randomised controlled trials of pharmaceutical interventions funded by different sources, a cross-sectional study.

    Science.gov (United States)

    Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S

    2017-10-04

    Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. The aim was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial and groups were compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features in all of the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base, and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.

  13. Effects of the sample size of reference population on determining BMD reference curve and peak BMD and diagnosing osteoporosis.

    Science.gov (United States)

    Hou, Y-L; Liao, E-Y; Wu, X-P; Peng, Y-Q; Zhang, H; Dai, R-C; Luo, X-H; Cao, X-Z

    2008-01-01

    Establishing reference databases generally requires a large sample size to achieve reliable results. Our study revealed that varying the sample size from hundreds to thousands of individuals has no decisive effect on the bone mineral density (BMD) reference curve, peak BMD, or the diagnosis of osteoporosis. It provides a reference point for determining the sample size when establishing local BMD reference databases. This study attempts to determine a suitable sample size for establishing bone mineral density (BMD) reference databases in a local laboratory. The total reference population consisted of 3,662 Chinese females aged 6-85 years. BMDs were measured with a dual-energy X-ray absorptiometry densitometer. The subjects were randomly divided into four different sample groups, that is, total number (Tn) = 3,662, 1/2n = 1,831, 1/4n = 916, and 1/8n = 458. We used the best regression model to determine the BMD reference curve and peak BMD. There was no significant difference in the full curves between the four sample groups at each skeletal site, although some discrepancy at the end of the curves was observed at the spine. Peak BMDs were very similar in the four sample groups. According to the Chinese diagnostic criteria (a BMD more than 25% below the peak BMD indicates osteoporosis), no difference was observed in the osteoporosis detection rate using the reference values determined by the four different sample groups. Varying the sample size from hundreds to thousands has no decisive effect on establishing the BMD reference curve and determining peak BMD. It should be practical for determining the reference population while establishing local BMD databases.

  14. Target preparation and characterization for multielemental analysis of liquid samples by use of accelerators

    CERN Document Server

    Liendo, J A; Fletcher, N R; Gómez, J; Caussyn, D D; Myers, S H; Castelli, C; Sajo-Bohus, L

    1999-01-01

    Elastic scattering at forward angles is tested as a useful alternative method to characterize liquid samples of scientific and/or technological interest. Solid residues of such samples deposited on light backings have been bombarded with 16 MeV ⁷Li and 24 MeV ¹⁶O beams in order to determine the experimental configuration giving the best elemental mass separation. The elastically scattered ions were detected at 16°, 20° and 28° with surface barrier detectors. The ratios between the mass separation and the line width obtained in the spectral region between carbon and oxygen varied between 2 and 13. This method is particularly useful for an accurate elemental characterization below sodium, which is beyond the scope of standard techniques such as PIXE and TXRF, provided the ion beam type, its kinetic energy and the target thickness are considered simultaneously.

  15. Clipped speckle autocorrelation metric for spot size characterization of focused beam on a diffuse target

    National Research Council Canada - National Science Library

    Li, Yuanyang; Guo, Jin; Liu, Lisheng; Wang, Tingfeng; Tang, Wei; Jiang, Zhenhua

    2015-01-01

    The clipped speckle autocorrelation (CSA) metric is proposed for estimating the laser beam energy concentration on a remote diffuse target in a laser beam projection system with feedback information...

  16. Size of lethality target in mouse immature oocytes determined with accelerated heavy ions.

    Science.gov (United States)

    Straume, T; Dobson, R L; Kwan, T C

    1989-01-01

    Mouse immature oocytes were irradiated in vivo with highly charged, heavy ions from the Bevalac accelerator at the Lawrence Berkeley Laboratory. The particles used were 670-MeV/nucleon Si14+, 570-MeV/nucleon Ar18+, and 450-MeV/nucleon Fe26+. The cross-sectional area of the lethality target in these extremely radiosensitive cells was determined from fluence-response curves and information on energy deposition by delta rays. Results indicate a target cross-section larger than that of the nucleus, one which closely approximates the cross-sectional area of the entire oocyte. For 450-MeV/nucleon Fe26+ particles, the predicted target cross-sectional area is 120 ± 16 µm², comparing well with the microscopically determined cross-sectional area of 111 ± 12 µm² for these cells. The present results are in agreement with our previous target studies which implicate the oocyte plasma membrane.

  17. Non-target time trend screening: a data reduction strategy for detecting emerging contaminants in biological samples.

    Science.gov (United States)

    Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P

    2016-06-01

    Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged, while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend in the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels along with one unfortified series. Following extraction, analysis by ultra-performance liquid chromatography high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11% of the peak lists. Compared to initial peak lists, a combination of both approaches reduced the size of the datasets by 80-85%. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples.
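
    The correlation-based trend flagging can be sketched as follows: for each aligned peak, compute Spearman's rank correlation between sampling time and intensity and keep peaks with a strong positive monotonic trend. The synthetic peak table and the cut-off value are illustrative assumptions.

    ```python
    # Hedged sketch of the trend-flagging step using Spearman's rank
    # correlation. The synthetic peak table and cut-off are assumptions.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    n_timepoints, n_peaks = 10, 1000
    time = np.arange(n_timepoints)

    # Mostly randomly fluctuating peaks, plus a few with an increasing trend.
    intensities = rng.lognormal(mean=10, sigma=0.3, size=(n_peaks, n_timepoints))
    trending = rng.choice(n_peaks, size=20, replace=False)
    intensities[trending] *= (1.0 + 0.15 * time)  # steadily increasing signal

    flagged = []
    for i in range(n_peaks):
        rho, _ = spearmanr(time, intensities[i])
        if rho >= 0.8:  # assumed cut-off for "increasing" peaks
            flagged.append(i)

    hits = len(set(flagged) & set(trending))
    print(f"flagged {len(flagged)} peaks; {hits}/{len(trending)} true trends recovered")
    ```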

  18. The Quantitative LOD Score: Test Statistic and Sample Size for Exclusion and Linkage of Quantitative Traits in Human Sibships

    OpenAIRE

    Page, Grier P.; Amos, Christopher I.; Boerwinkle, Eric

    1998-01-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, ...

  19. Development of an automated multiple-target mask CD disposition system to enable new sampling strategy

    Science.gov (United States)

    Ma, Jian; Farnsworth, Jeff; Bassist, Larry; Cui, Ying; Mammen, Bobby; Padmanaban, Ramaswamy; Nadamuni, Venkatesh; Kamath, Muralidhar; Buckmann, Ken; Neff, Julie; Freiberger, Phil

    2006-03-01

    Traditional mask critical dimension (CD) disposition systems with only one or two targets are being challenged by new requirements from mask users as wafer process control becomes more complicated in newer generations of technology. Historically, the mask shop does not necessarily measure and disposition off the same kind of CD structures that wafer fabs do. Mask disposition specifications and structures come from the frame design and the tapeout, while wafer-level CD dispositions are mainly based on the historical process window established per CD-skew experiments and EOL (end of line) yield. In the current high-volume manufacturing environment, the mask CDs are mainly dispositioned off their mean-to-target (MTT) and uniformity (6 sigma) on one or two types of pre-determined structures. The disposition specification is set to ensure the printed mask will meet the design requirements and to ensure minimum deviation from them. The CD data are also used to adjust the dose of the mask exposure tools to control CD MTT. As a result, the mask CD disposition automation system was built to allow only one or two kinds of targets at most. In contrast, wafer fabs measure a fairly wide range of different structures to ensure their process is on target and in control. The number of such structures that are considered critical is increasing due to the growing complexity of the technology. To fully comprehend the wafer-level requirements, it is highly desirable to align the mask CD sample sites and dispositions with those of the wafer fabs, to measure the OPC (optical proximity correction) structures or equivalent whenever possible, and to establish the true correlation between mask CD measurements and wafer CD measurements. In this paper, the development of an automated multiple-target mask CD disposition system with the goal of enabling a new sampling strategy is presented. The pros and cons of its implementation are discussed. The new system has been inserted in

  20. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source: Protocols and journal publications of published randomised parallel group trials initially approved... The method of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials)... In publications, sample size calculations and statistical methods were often explicitly discrepant with the protocol or not pre-specified. Such amendments were rarely acknowledged in the trial publication. The reliability of trial reports cannot be assessed without having access to the full protocols...

  1. Target prediction and a statistical sampling algorithm for RNA–RNA interaction

    Science.gov (United States)

    Huang, Fenix W. D.; Qin, Jing; Reidys, Christian M.; Stadler, Peter F.

    2010-01-01

    Motivation: It has been proven that the accessibility of the target sites has a critical influence on RNA–RNA binding in general, and on the specificity and efficiency of miRNAs and siRNAs in particular. Recently, O(N^6) time and O(N^4) space dynamic programming (DP) algorithms have become available that compute the partition function of RNA–RNA interaction complexes, thereby providing detailed insights into their thermodynamic properties. Results: Modifications to the grammars underlying earlier approaches enable the calculation of interaction probabilities for any given interval on the target RNA. The computation of the 'hybrid probabilities' is complemented by a stochastic sampling algorithm that produces a Boltzmann-weighted ensemble of RNA–RNA interaction structures. The sampling of k structures requires only negligible additional memory resources and runs in O(k·N^3). Availability: The algorithms described here are implemented in C as part of the rip package. The source code of rip2 can be downloaded from http://www.combinatorics.cn/cbpc/rip.html and http://www.bioinf.uni-leipzig.de/Software/rip.html. Contact: duck@santafe.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19910305

  2. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    Science.gov (United States)

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (δ), that is, the hypothesized difference in means (Δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for Δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of Δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of Δ and σ as the averaging weight, is used, and the value of δ is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of δ found using this method may be expressed as a function of the prior means of Δ and σ and of their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the analysis.
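
    A hedged sketch of the conditional-expected-power idea: average the classical two-sample power curve over prior draws of Δ and σ, then search for the fixed effect size whose classical power matches that average. The prior parameters and design constants below are illustrative assumptions, not values from the article.

    ```python
    # Hedged sketch: conditional expected power for a two-sample z-test,
    # averaged over prior draws of the mean difference (delta) and SD (sigma),
    # and the fixed effect size with matching classical power. All prior and
    # design values are illustrative assumptions.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    alpha, n_per_arm = 0.05, 50
    z_a = norm.ppf(1 - alpha / 2)

    def classical_power(es, n):
        """Approximate power of a two-sided two-sample z-test, es = delta/sigma."""
        return norm.cdf(np.sqrt(n / 2) * es - z_a)

    # Priors on delta and sigma (assumed normal / folded-normal forms).
    rng = np.random.default_rng(3)
    delta_draws = rng.normal(loc=4.0, scale=1.5, size=100_000)
    sigma_draws = np.abs(rng.normal(loc=10.0, scale=2.0, size=100_000))

    cep = classical_power(delta_draws / sigma_draws, n_per_arm).mean()
    print(f"conditional expected power at n={n_per_arm}/arm: {cep:.3f}")

    # Effect size whose classical power equals the conditional expected power.
    es_star = brentq(lambda es: classical_power(es, n_per_arm) - cep, 1e-6, 5.0)
    print(f"equivalent fixed effect size: {es_star:.3f} "
          f"(naive ratio of prior means: {4.0 / 10.0:.3f})")
    ```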

  3. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

    Science.gov (United States)

    Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

    2017-10-01

    Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first one is based on powder metallurgy, using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable to generate a significant grain size contrast and to control this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

  4. Targeted histology sampling from atypical small acinar proliferation area detected by repeat transrectal prostate biopsy

    Directory of Open Access Journals (Sweden)

    A. V. Karman

    2017-01-01

    Full Text Available Objective: To define the approach to the management of patients with a detected ASAP area. Materials and methods: In the time period from 2012 through 2015, 494 patients with previously negative biopsy and remaining suspicion of prostate cancer (PCa) were examined. The patients underwent repeat 24-core multifocal prostate biopsy with additional tissue samples taken from suspicious areas detected by multiparametric magnetic resonance imaging and transrectal ultrasound. An isolated ASAP area was found in 127 (25.7%) of the 494 examined men. All of them were offered repeat targeted transrectal biopsy of this area. Targeted transrectal ultrasound-guided biopsy of the ASAP area was performed in 56 (44.1%) of the 127 patients, 53 of them being included in the final analysis. Results: PCa was diagnosed in 14 (26.4%) of the 53 patients, their mean age being 64.4 ± 6.9 years. The average level of prostate-specific antigen (PSA) was 6.8 ± 3.0 ng/ml in PCa patients and 9.3 ± 6.5 ng/ml in those with benign lesions; the percentage ratio of free/total PSA was 16.2 ± 7.8% with PCa and 23.3 ± 7.7% with benign lesions; PSA density was 0.14 ± 0.07 ng/ml/cm3 in PCa patients and 0.15 ± 0.12 ng/ml/cm3 in those with benign lesions. Therefore, when an ASAP area is detected in repeat prostate biopsy samples, it is advisable that targeted extended biopsy of this area be performed.

  5. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
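
    A simplified simulation sketch of the underlying idea (not the voom-based method or the ssizeRNA package itself): for each candidate replicate number, simulate per-gene two-sample comparisons, apply Benjamini-Hochberg adjustment, and estimate the average power over truly differentially expressed genes, increasing the sample size until a desired average power is reached. All simulation settings are illustrative assumptions.

    ```python
    # Simplified simulation sketch of "average power under FDR control"
    # for choosing the number of replicates. Not the voom-based method;
    # all settings are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_genes, prop_de, lfc_sd = 10_000, 0.1, 2.0
    target_power, fdr_level = 0.8, 0.05

    def bh_adjust(pvals: np.ndarray) -> np.ndarray:
        """Benjamini-Hochberg adjusted p-values."""
        m = len(pvals)
        order = np.argsort(pvals)
        ranked = pvals[order] * m / np.arange(1, m + 1)
        ranked = np.minimum.accumulate(ranked[::-1])[::-1]
        adj = np.empty(m)
        adj[order] = np.minimum(ranked, 1.0)
        return adj

    def average_power(n_per_group: int, n_sim: int = 5) -> float:
        powers = []
        for _ in range(n_sim):
            is_de = rng.random(n_genes) < prop_de
            effect = np.where(is_de, rng.normal(0, lfc_sd, n_genes), 0.0)
            a = rng.normal(0, 1, (n_genes, n_per_group))
            b = rng.normal(effect[:, None], 1, (n_genes, n_per_group))
            _, pvals = stats.ttest_ind(a, b, axis=1)
            reject = bh_adjust(pvals) <= fdr_level
            powers.append(reject[is_de].mean())
        return float(np.mean(powers))

    for n in (3, 5, 8, 12):
        ap = average_power(n)
        print(f"n = {n} per group: average power ≈ {ap:.2f}")
        if ap >= target_power:
            break
    ```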

  6. Survey and Rapid detection of Bordetella pertussis in clinical samples targeting the BP485 in China

    Directory of Open Access Journals (Sweden)

    Wei eLiu

    2015-03-01

    Full Text Available Bordetella pertussis is an important human respiratory pathogen. Here, we describe a loop-mediated isothermal amplification (LAMP method for the rapid detection of B. pertussis in clinical samples based on a visual test. The LAMP assay detected the BP485 target sequence within 60 min with a detection limit of 1.3 pg/µl, a 10-fold increase in sensitivity compared with conventional PCR. All 31 non-pertussis respiratory pathogens tested were negative for LAMP detection, indicating the high specificity of the primers for B. pertussis. To evaluate the application of the LAMP assay to clinical diagnosis, of 105 sputum and nasopharyngeal samples collected from the patients with suspected respiratory infections in China, a total of 12 Bordetella pertussis isolates were identified from 33 positive samples detected by LAMP-based surveillance targeting BP485. Strikingly, a 4.5 months old baby and her mother were found to be infected with B. pertussis at the same time. All isolates belonged to different B. pertussis multilocus sequence typing (MLST groups with different alleles of the virulence-related genes including 4 alleles of ptxA, 6 of prn, 4 of tcfA, 2 of fim2 and 3 of fim3. The diversity of B. pertussis carrying toxin genes in clinical strains indicates a rapid and continuing evolution of B. pertussis. This combined with its high prevalence will make it difficult to control. In conclusion, we have developed a visual detection LAMP assay, which could be a useful tool for rapid B. pertussis detection, especially in situations where resources are poor and in point-of-care tests.

  7. Generalized SAMPLE SIZE Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    Science.gov (United States)

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption regarding known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

  8. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    Science.gov (United States)

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size of nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 μm in size. Distributions of particles up to 500 μm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.

  9. Molecular target size of the vanilloid (capsaicin) receptor in pig dorsal root ganglia

    Energy Technology Data Exchange (ETDEWEB)

    Szallasi, A.; Blumberg, P.M. (National Cancer Institute, Bethesda, MD (USA))

    1991-01-01

    The size of the vanilloid receptor was examined by high-energy radiation inactivation analysis of the binding of [3H]resiniferatoxin to pig dorsal root ganglion membranes; it was found to be 270 ± 25 kDa. This value most likely represents the size of a receptor complex rather than of an individual subunit. Other ligand-gated cation channel complexes have reported molecular weights in this range, e.g. 300 kDa for the acetylcholine receptor.

  10. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
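
    The general idea behind this family of methods can be sketched with a toy approximate Bayesian computation (ABC) rejection scheme: simulate data under parameters drawn from the prior and keep the draws whose summary statistics fall closest to the observed ones. The toy example below estimates a single constant population-size-like parameter; it is not the PopSizeABC model, and all settings are illustrative assumptions.

    ```python
    # Minimal sketch of ABC by rejection for a toy one-parameter model.
    # Not the PopSizeABC model; all settings are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(5)

    def simulate_summaries(pop_size: float, n_sites: int = 1000) -> np.ndarray:
        """Toy simulator: diversity-like summaries that scale with pop size."""
        het = rng.poisson(lam=pop_size * 1e-4, size=n_sites)
        return np.array([het.mean(), het.var()])

    # "Observed" summaries generated under a known true parameter.
    true_n = 12_000.0
    obs = simulate_summaries(true_n)

    # ABC rejection: draw from the prior, simulate, keep the closest draws.
    prior_draws = rng.uniform(1_000, 50_000, size=5_000)
    distances = np.array([np.linalg.norm(simulate_summaries(n) - obs)
                          for n in prior_draws])
    accepted = prior_draws[distances <= np.quantile(distances, 0.01)]
    print(f"posterior mean ≈ {accepted.mean():.0f} (true value {true_n:.0f})")
    ```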

  11. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    SUMMARY: Disease cases are often clustered within herds or, more generally, within groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average measure... subsp. paratuberculosis infection in Danish dairy cattle, and a study on critical control points for Salmonella cross-contamination of pork in Greek slaughterhouses.

  12. Impact of metric and sample size on determining malaria hotspot boundaries

    NARCIS (Netherlands)

    Stresman, G.H.; Giorgi, E.; Baidjoe, A.Y.; Knight, P.; Odongo, W.; Owaga, C.; Shagari, S.; Makori, E.; Stevenson, J.; Drakeley, C.; Cox, J.; Bousema, T.; Diggle, P.J.

    2017-01-01

    The spatial heterogeneity of malaria suggests that interventions may be targeted for maximum impact. It is unclear to what extent different metrics lead to consistent delineation of hotspot boundaries. Using data from a large community-based malaria survey in the western Kenyan highlands, we

  13. Nano-sized metabolic precursors for heterogeneous tumor-targeting strategy using bioorthogonal click chemistry in vivo.

    Science.gov (United States)

    Lee, Sangmin; Jung, Seulhee; Koo, Heebeom; Na, Jin Hee; Yoon, Hong Yeol; Shim, Man Kyu; Park, Jooho; Kim, Jong-Ho; Lee, Seulki; Pomper, Martin G; Kwon, Ick Chan; Ahn, Cheol-Hee; Kim, Kwangmeyung

    2017-12-01

    Herein, we developed nano-sized metabolic precursors (Nano-MPs) for a new tumor-targeting strategy to overcome the intrinsic limitations of biological ligands such as the limited number of biological receptors and the heterogeneity in tumor tissues. We conjugated the azide group-containing metabolic precursor, triacetylated N-azidoacetyl-d-mannosamine, to a generation 4 poly(amidoamine) dendrimer backbone. The nano-sized dendrimer of Nano-MPs could generate azide groups on the surface of tumor cells homogeneously, regardless of cell type, via metabolic glycoengineering. Importantly, these exogenously generated 'artificial chemical receptors' containing azide groups could be used for bioorthogonal click chemistry, regardless of the phenotypes of different tumor cells. Furthermore, in tumor-bearing mouse models, Nano-MPs could be mainly localized at the target tumor tissues by the enhanced permeation and retention (EPR) effect, and they successfully generated azide groups on tumor cells in vivo after an intravenous injection. Finally, we showed that these azide groups on tumor tissues could be used as 'artificial chemical receptors' that were conjugated to bioorthogonal chemical group-containing liposomes via in vivo click chemistry in heterogeneous tumor-bearing mice. Therefore, the overall results demonstrated that our nano-sized metabolic precursors could be extensively applied as a new alternative tumor-targeting technique for molecular imaging and drug delivery systems, regardless of the phenotype of heterogeneous tumor cells. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Validation of fixed sample size plans for monitoring lepidopteran pests of Brassica oleracea crops in North Korea.

    Science.gov (United States)

    Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J

    2009-06-01

    The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different economic thresholds (ETs) for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in classification certainty with increasing sample size could be seen in the increasing steepness of the operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
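
    The precision part of this resampling evaluation can be sketched as below: draw fixed-size samples of plants from a field population of per-plant counts, compute the precision D = SE/mean for each sample, and report how often D ≤ 0.3. The negative binomial field population is an illustrative assumption, not the Korean survey data.

    ```python
    # Hedged sketch of the resampling evaluation of fixed sample size plans.
    # The negative binomial "field" of per-plant counts is an assumption.
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic field of per-plant standard-insect counts (aggregated pests).
    field = rng.negative_binomial(n=1.0, p=1.0 / (1.0 + 1.5), size=2000)  # mean ~1.5

    def precision_ok(sample_size: int, n_boot: int = 5000, d_max: float = 0.3) -> float:
        """Proportion of resampled plans achieving D = SE/mean <= d_max."""
        ok = 0
        for _ in range(n_boot):
            s = rng.choice(field, size=sample_size, replace=True)
            if s.mean() == 0:
                continue  # D undefined; count as a failure
            d = (s.std(ddof=1) / np.sqrt(sample_size)) / s.mean()
            ok += d <= d_max
        return ok / n_boot

    for n in (10, 20, 30, 45, 50):
        print(f"sample size {n:2d}: D <= 0.3 achieved in {precision_ok(n):.0%} of samples")
    ```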

  15. Sample size estimates for determining treatment effects in high-risk patients with early relapsing-remitting multiple sclerosis.

    Science.gov (United States)

    Scott, Thomas F; Schramke, Carol J; Cutter, Gary

    2003-06-01

    Risk factors for short-term progression in early relapsing-remitting MS have been identified recently. Previously, we determined potential risk factors for rapid progression of early relapsing-remitting MS and identified three groups of high-risk patients. These non-mutually exclusive groups of patients were drawn from a consecutively studied sample of 98 patients with newly diagnosed MS. High-risk patients had a history of either poor recovery from initial attacks, more than two attacks in the first two years of disease, or a combination of at least four other risk factors. To determine differences in sample sizes required to show a meaningful treatment effect when using a high-risk sample versus a random sample of patients, power analyses were used to calculate the different sample sizes needed for hypothetical treatment trials. We found that substantially smaller numbers of patients should be needed to show a significant treatment effect by employing these high-risk groups of patients as compared to a random population of MS patients (e.g., 58% reduction in sample size in one model). The use of patients at higher risk of progression to perform drug treatment trials can be considered as a means to reduce the number of patients needed to show a significant treatment effect for patients with very early MS.

  16. MANIPULATING TARGET SIZE INFLUENCES PERCEPTIONS OF SUCCESS WHEN LEARNING A DART-THROWING SKILL BUT DOES NOT IMPACT RETENTION

    Directory of Open Access Journals (Sweden)

    Nicole T Ong

    2015-09-01

    Full Text Available Positive feedback or experiences of success during skill acquisition have been shown to benefit motor skill learning. In this study, our aim was to manipulate learners’ success perceptions through a minor adjustment to goal criterion (target size in a dart-throwing task. Two groups of novice participants practiced throwing at a large (easy or a small (difficult target from the same distance. In reference to the origin/centre of the target, the practice targets were alike in objective difficulty and indeed participants in both groups were not different in their objective practice performance (i.e. radial error from the centre. Although the groups experienced markedly different success rates, with the large target group experiencing more hits and reporting greater confidence (or self-efficacy than the small target group, these practice effects were not carried into longer-term retention, which was assessed after a one-week delay. For success perceptions to moderate or benefit motor learning, we argue that unambiguous indicators of positive performance are necessary, especially for tasks where intrinsic feedback about objective error is salient.

  17. A simple method to generate equal-sized homogenous strata or clusters for population-based sampling.

    Science.gov (United States)

    Elliott, Michael R

    2011-04-01

    Statistical efficiency and cost efficiency can be achieved in population-based samples through stratification and/or clustering. Strata typically combine subgroups of the population that are similar with respect to an outcome. Clusters are often taken from preexisting units, but may be formed to minimize between-cluster variance, or to equalize exposure to a treatment or risk factor. Area probability sample design procedures for the National Children's Study required contiguous strata and clusters that maximized within-stratum and within-cluster homogeneity while maintaining approximately equal size of the strata or clusters. However, there were few methods that allowed such strata or clusters to be constructed under these contiguity and equal size constraints. A search algorithm generates equal-size cluster sets that approximately span the space of all possible clusters of equal size. An optimal cluster set is chosen based on analysis of variance and convexity criteria. The proposed algorithm is used to construct 10 strata based on demographics and air pollution measures in Kent County, MI, following census tract boundaries. A brief simulation study is also conducted. The proposed algorithm is effective at uncovering underlying clusters from noisy data. It can be used in multi-stage sampling where equal-size strata or clusters are desired. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    Energy Technology Data Exchange (ETDEWEB)

    Baz-Lomba, J.A., E-mail: jba@niva.no [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway); Faculty of Medicine, University of Oslo, PO box 1078 Blindern, 0316, Oslo (Norway); Reid, Malcolm J.; Thomas, Kevin V. [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway)

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MSe. Quantification and matrix effect corrections were overcome with the use of 25 isotopic labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L⁻¹). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criteria of 5 ppm (of the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MSe data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types: influent wastewater, passive sampler extracts and pooled urine samples, resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals.

  19. Effects of sample size on differential gene expression, rank order and prediction accuracy of a gene signature.

    Directory of Open Access Journals (Sweden)

    Cynthia Stretch

    Full Text Available Top differentially expressed gene lists are often inconsistent between studies, and it has been suggested that small sample sizes contribute to lack of reproducibility and poor prediction accuracy in discriminative models. We considered sex differences (69 ♂, 65 ♀) in 134 human skeletal muscle biopsies using DNA microarray. The full dataset and subsamples thereof (from n = 10 (5 ♂, 5 ♀) to n = 120 (60 ♂, 60 ♀)) were used to assess the effect of sample size on the differential expression of single genes, gene rank order and prediction accuracy. Using our full dataset (n = 134), we identified 717 differentially expressed transcripts (p<0.0001) and we were able to predict sex with ~90% accuracy, both within our dataset and on external datasets. Both p-values and the rank order of top differentially expressed genes became more variable using smaller subsamples. For example, at n = 10 (5 ♂, 5 ♀), no gene was considered differentially expressed at p<0.0001 and prediction accuracy was ~50% (no better than chance). We found that sample size clearly affects microarray analysis results; small sample sizes result in unstable gene lists and poor prediction accuracy. We anticipate this will apply to other phenotypes, in addition to sex.

  20. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Science.gov (United States)

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  1. Analytical solutions to sampling effects in drop size distribution measurements during stationary rainfall: Estimation of bulk rainfall variables

    NARCIS (Netherlands)

    Uijlenhoet, R.; Porrà, J.M.; Sempere Torres, D.; Creutin, J.D.

    2006-01-01

    A stochastic model of the microstructure of rainfall is used to derive explicit expressions for the magnitude of the sampling fluctuations in rainfall properties estimated from raindrop size measurements in stationary rainfall. The model is a marked point process, in which the points represent the

  2. Survey Research: Determining Sample Size and Representative Response. and The Effects of Computer Use on Keyboarding Technique and Skill.

    Science.gov (United States)

    Wunsch, Daniel R.; Gades, Robert E.

    1986-01-01

    Two articles are presented. The first reviews and suggests procedures for determining appropriate sample sizes and for determining the response representativeness in survey research. The second presents a study designed to determine the effects of computer use on keyboarding technique and skill. (CT)

  3. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    Science.gov (United States)

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  4. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    Science.gov (United States)

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF.

    Science.gov (United States)

    Baz-Lomba, J A; Reid, Malcolm J; Thomas, Kevin V

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS(e). Quantification and matrix effect corrections were overcome with the use of 25 isotopic labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4-187 ng L(-1)). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criteria of 5 ppm (of the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS(e) data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types; influent wastewater, passive sampler extracts and pooled urine samples resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Bayesian adaptive approach to estimating sample sizes for seizures of illicit drugs.

    Science.gov (United States)

    Moroni, Rossana; Aalberg, Laura; Reinikainen, Tapani; Corander, Jukka

    2012-01-01

    A considerable amount of discussion can be found in the forensics literature about the issue of using statistical sampling to obtain for chemical analyses an appropriate subset of units from a police seizure suspected to contain illicit material. Use of the Bayesian paradigm has been suggested as the most suitable statistical approach to solving the question of how large a sample needs to be to ensure legally and practically acceptable purposes. Here, we introduce a hypergeometric sampling model combined with a specific prior distribution for the homogeneity of the seizure, where a parameter for the analyst's expectation of homogeneity (α) is included. Our results show how an adaptive approach to sampling can minimize the practical efforts needed in the laboratory analyses, as the model allows the scientist to decide sequentially how to proceed, while maintaining a sufficiently high confidence in the conclusions. © 2011 American Academy of Forensic Sciences.
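
    A minimal sketch of the underlying updating step is shown below, assuming a flat prior over the number of positive units in place of the paper's specific homogeneity prior (the α parameter is not reproduced here). The hypergeometric likelihood is combined with the prior to give the posterior probability that a given proportion of the seizure contains illicit material; re-running the update after each batch of analyses gives the sequential, adaptive behaviour described in the abstract. All numbers are illustrative.

        import numpy as np
        from scipy.stats import hypergeom

        def posterior_positive_units(N, n, k, prior=None):
            """Posterior over M = number of positive units in a seizure of N packages,
            after observing k positives among n analysed units (hypergeometric likelihood)."""
            M = np.arange(0, N + 1)
            prior = np.ones(N + 1) / (N + 1) if prior is None else prior  # flat placeholder prior
            like = hypergeom.pmf(k, N, M, n)        # scipy order: (k, total, positives, draws)
            post = prior * like
            return M, post / post.sum()

        # After analysing 5 units from a seizure of 200, all positive:
        M, post = posterior_positive_units(N=200, n=5, k=5)
        print(post[M >= 180].sum())   # P(at least 90% of units are positive | 5/5 positive)

        # If that confidence is not yet sufficient, analyse more units and update again.
        M, post = posterior_positive_units(N=200, n=10, k=10)
        print(post[M >= 180].sum())   # higher confidence after 10/10 positive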

  7. Measuring proteins with greater speed and resolution while reducing sample size

    OpenAIRE

    Hsieh, Vincent H.; Wyatt, Philip J.

    2017-01-01

    A multi-angle light scattering (MALS) system, combined with chromatographic separation, directly measures the absolute molar mass, size and concentration of the eluate species. The measurement of these crucial properties in solution is essential in basic macromolecular characterization and all research and production stages of bio-therapeutic products. We developed a new MALS methodology that has overcome the long-standing, stubborn barrier to microliter-scale peak volumes and achieved the hi...

  8. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  9. Salmonella Enteritidis in Layer Farms of Different Sizes Located in Northern China: On-Farm Sampling and Detection by the PCR Method

    Directory of Open Access Journals (Sweden)

    X Li

    Full Text Available ABSTRACT Salmonella enterica subspecies enterica serotype Enteritidis (SE) has caused foodborne infections over decades. It is transmitted mainly from contaminated eggs to humans. SE is commonly present in layer houses, and closely interacts with environmental factors. The objective of the present study was to develop a viable PCR method to identify SE in environmental samples collected in layer farms of different sizes, and to evaluate SE contamination status in four main egg-production provinces of northern China. After specificity retrieval using Primer-BLAST against the NCBI database, three SE-specific oligonucleotide primers were selected as candidate primers. The primers targeting the Prot6e gene were adopted, and primers targeting Sdf I were also selected to validate the results, after testing eight different types of pooled poultry environmental samples (overshoe, air, drinking nipple, feed, egg collection belt, eggshell, air inlet, and air outlet) by PCR. A PCR detection limit of 1 CFU/mL was determined using cell lysates from pure cultures. Testing time was less than 48 h. On-farm samples were collected from two layer farm sizes (one housing more than 50,000 layers, and the other less than 50,000 layers) in each province. The applied PCR method was shown to be simple, inexpensive and effective for screening SE in a large number of farm samples. The study identified only one SE-positive farm, which was a large farm where nine samples were found to be contaminated with SE: drinking nipples (3), egg collection belt (1), air inlet (1), air (1), overshoe (1) and eggshell (2).

  10. Sample Size Effect of Magnetomechanical Response for Magnetic Elastomers by Using Permanent Magnets

    Directory of Open Access Journals (Sweden)

    Tsubasa Oguro

    2017-01-01

    Full Text Available The size effect of the magnetomechanical response of chemically cross-linked, disk-shaped magnetic elastomers placed on a permanent magnet has been investigated by unidirectional compression tests. A cylindrical permanent magnet with a size of 35 mm in diameter and 15 mm in height was used to create the magnetic field. The magnetic field strength was approximately 420 mT at the center of the upper surface of the magnet. The diameter of the magnetoelastic polymer disks was varied from 14 mm to 35 mm, whereas the height was kept constant (5 mm) in the undeformed state. We have studied the influence of the disk diameter on the stress-strain behavior of the magnetic elastomers in the presence and in the absence of a magnetic field. It was found that the smallest magnetic elastomer, with 14 mm diameter, did not exhibit a measurable magnetomechanical response to the magnetic field. In contrast, the magnetic elastomers with diameters larger than 30 mm contracted in the direction parallel to the mechanical stress and largely elongated in the perpendicular direction. An explanation is put forward to interpret this size-dependent behavior by taking into account the nonuniform distribution of the magnetic field produced by the permanent magnet.

  11. Integrated targeted and non-targeted analysis of water sample extracts with micro-scale UHPLC–MS

    Directory of Open Access Journals (Sweden)

    Dominik Deyerling

    2015-01-01

    • The filtering of database hits for two criteria (exact mass and partition coefficient) significantly reduced the list of suspects and at the same time rendered it possible to perform non-target analysis with lower mass accuracy (no lock-spray) in the range of 20–500 ppm.

  12. Use of High-Frequency In-Home Monitoring Data May Reduce Sample Sizes Needed in Clinical Trials.

    Directory of Open Access Journals (Sweden)

    Hiroko H Dodge

    Full Text Available Trials in Alzheimer's disease are increasingly focusing on prevention in asymptomatic individuals. This poses a challenge in examining treatment effects since currently available approaches are often unable to detect cognitive and functional changes among asymptomatic individuals. Resultant small effect sizes require large sample sizes using biomarkers or secondary measures for randomized controlled trials (RCTs). Better assessment approaches and outcomes capable of capturing subtle changes during asymptomatic disease stages are needed. We aimed to develop a new approach to track changes in functional outcomes by using individual-specific distributions (as opposed to group norms) of unobtrusive, continuously monitored in-home data. Our objective was to compare sample sizes required to achieve sufficient power to detect prevention trial effects in trajectories of outcomes in two scenarios: (1) annually assessed neuropsychological test scores (a conventional approach), and (2) the likelihood of having subject-specific low performance thresholds, both modeled as a function of time. One hundred nineteen cognitively intact subjects were enrolled and followed over 3 years in the Intelligent Systems for Assessing Aging Change (ISAAC) study. Using the difference in empirically identified time slopes between those who remained cognitively intact during follow-up (normal control, NC) and those who transitioned to mild cognitive impairment (MCI), we estimated comparative sample sizes required to achieve up to 80% statistical power over a range of effect sizes for detecting reductions in the difference in time slopes between NC and MCI incidence before transition. Sample size estimates indicated approximately 2000 subjects with a follow-up duration of 4 years would be needed to achieve a 30% effect size when the outcome is an annually assessed memory test score. When the outcome is likelihood of low walking speed defined using the individual-specific distributions of

  13. Measurements of Plutonium and Americium in Soil Samples from Project 57 using the Suspended Soil Particle Sizing System (SSPSS)

    Energy Technology Data Exchange (ETDEWEB)

    John L. Bowen; Rowena Gonzalez; David S. Shafer

    2001-05-01

    As part of the preliminary site characterization conducted for Project 57, soil samples were collected for separation into several size-fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it would represent anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculations and corrective action-level determinations for future land-use scenarios at the site.

  14. Experimental Assessment of Linear Sampling and Factorization Methods for Microwave Imaging of Concealed Targets

    Directory of Open Access Journals (Sweden)

    M. N. Akıncı

    2015-01-01

    Full Text Available Shape reconstruction methods are particularly well suited for imaging of concealed targets. Yet, these methods are rarely employed in real nondestructive testing applications, since they generally require the electrical parameters of the outer object as a priori knowledge. In this regard, we propose an approach to relieve two well-known shape reconstruction algorithms, the linear sampling and the factorization methods, from the requirement of a priori knowledge of the electrical parameters of the surrounding medium. The idea behind this paper is that if a measurement of the reference medium (a medium which approximates the material, except for the inclusion) can be supplied to these methods, reconstructions of very high quality can be obtained even when there is no information about the electrical parameters of the surrounding medium. Taking advantage of this idea, we consider that it is possible to use shape reconstruction methods in buried object detection. To this end, we perform several experiments inside an anechoic chamber to verify the approach against real measurements. The accuracy and stability of the obtained results show that both the linear sampling and the factorization methods can be quite useful for various buried obstacle imaging problems.

  15. Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study.

    Science.gov (United States)

    Teare, M Dawn; Dimairo, Munyaradzi; Shephard, Neil; Hayman, Alex; Whitehead, Amy; Walters, Stephen J

    2014-07-03

    External pilot or feasibility studies can be used to estimate key unknown parameters to inform the design of the definitive randomised controlled trial (RCT). However, there is little consensus on how large pilot studies need to be, and some suggest inflating estimates to adjust for the lack of precision when planning the definitive RCT. We use a simulation approach to illustrate the sampling distribution of the standard deviation for continuous outcomes and the event rate for binary outcomes. We present the impact of increasing the pilot sample size on the precision and bias of these estimates, and predicted power under three realistic scenarios. We also illustrate the consequences of using a confidence interval argument to inflate estimates so the required power is achieved with a pre-specified level of confidence. We limit our attention to external pilot and feasibility studies prior to a two-parallel-balanced-group superiority RCT. For normally distributed outcomes, the relative gain in precision of the pooled standard deviation (SDp) is less than 10% (for each five subjects added per group) once the total sample size is 70. For true proportions between 0.1 and 0.5, we find the gain in precision for each five subjects added to the pilot sample is less than 5% once the sample size is 60. Adjusting the required sample sizes for the imprecision in the pilot study estimates can result in excessively large definitive RCTs and also requires a pilot sample size of 60 to 90 for the true effect sizes considered here. We recommend that an external pilot study has at least 70 measured subjects (35 per group) when estimating the SDp for a continuous outcome. If the event rate in an intervention group needs to be estimated by the pilot then a total of 60 to 100 subjects is required. Hence if the primary outcome is binary a total of at least 120 subjects (60 in each group) may be required in the pilot trial. It is very much more efficient to use a larger pilot study, than to
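
    The simulation underlying the pooled-SD recommendation is straightforward to reproduce in outline. The sketch below (a simplified re-implementation under assumed normal outcomes with unit SD, not the authors' code) shows how the spread of the pooled-SD estimate shrinks as the pilot grows, with the relative gain per additional five subjects per group falling to roughly 10% or less around 70 total subjects.

        import numpy as np

        rng = np.random.default_rng(0)

        def sdp_spread(n_per_group, reps=20000, sigma=1.0):
            """Monte-Carlo spread (SD) of the pooled-SD estimate from a two-group pilot."""
            a = rng.normal(0.0, sigma, (reps, n_per_group))
            b = rng.normal(0.0, sigma, (reps, n_per_group))
            pooled_sd = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2.0)
            return pooled_sd.std()

        sizes = [10, 15, 20, 25, 30, 35, 40]          # subjects per group
        spread = {n: sdp_spread(n) for n in sizes}
        for prev, cur in zip(sizes, sizes[1:]):
            gain = (spread[prev] - spread[cur]) / spread[prev] * 100
            print(f"{2 * prev} -> {2 * cur} total subjects: {gain:.1f}% relative gain in precision")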

  16. Probability of downsizing primary tumors of renal cell carcinoma by targeted therapies is related to size at presentation.

    Science.gov (United States)

    Kroon, Bin K; de Bruijn, Roderick; Prevoo, Warner; Horenblas, Simon; Powles, Thomas; Bex, Axel

    2013-01-01

    To evaluate the probability of downsizing primary renal tumors by targeted therapy in relation to size. A literature search was conducted and our own data were pooled with data of retrospective series and prospective trials in which patients were treated with tyrosine kinase inhibitors (TKIs) and in which tumor sizes before and after treatment were reported. Included were 89 primary clear cell renal tumors, including 34 from our institutes. The longest diameter of the primary tumors before and after treatment was obtained. Primary tumor size at presentation was divided into 4 categories, the largest being >10 cm (n=27). Pearson correlation and t test were used for statistical analysis. The TKI was sorafenib in 21 tumors and sunitinib in the remaining 68. Smaller tumor size was related to more effective downsizing (P=0.01). Median downsizing was 32% (-46% to 11%) in the first group; downsizing was 18% (-39% to 2%) in tumors of 7 to 10 cm and 10% (-31% to 0%) in those >10 cm. The smaller the primary tumor, the greater the likelihood and the more effective the downsizing. A potential benefit of neoadjuvant treatment to downsize the primary tumor for ablative techniques or nephron-sparing surgery may exist, particularly in tumors sized 5 to 7 cm. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. A convenient method and numerical tables for sample size determination in longitudinal-experimental research using multilevel models.

    Science.gov (United States)

    Usami, Satoshi

    2014-12-01

    Recent years have shown increased awareness of the importance of sample size determination in experimental research. Yet effective and convenient methods for sample size determination, especially in longitudinal experimental design, are still under development, and application of power analysis in applied research remains limited. This article presents a convenient method for sample size determination in longitudinal experimental research using a multilevel model. A fundamental idea of this method is the transformation of model parameters (level 1 error variance [σ²], level 2 error variances [τ00, τ11] and their covariance [τ01, τ10], and a parameter representing the experimental effect [δ]) into indices (reliability of measurement at the first time point [ρ1], effect size at the last time point [ΔT], proportion of variance of outcomes between the first and the last time points [k], and level 2 error correlation [r]) that are intuitively understandable and easily specified. To foster more convenient use of power analysis, numerical tables are constructed that refer to ANOVA results to investigate the influence of the respective indices on statistical power.

  18. [Comparison of characteristics of heavy metals in different grain sizes of intertidalite sediment by using grid sampling method].

    Science.gov (United States)

    Liang, Tao; Chen, Yan; Zhang, Chao-sheng; Li, Hai-tao; Chong, Zhong-yi; Song, Wen-chong

    2008-02-01

    384 surface sediment samples were collected from the mud flat, silt flat and mud-silt flat of Bohai Bay at 1 m and 10 m intervals using a grid sampling method. Concentrations of Al, Fe, Ti, Mn, Ba, Sr, Zn, Cr, Ni and Cu in each sample were measured by ICP-AES. To characterize the distribution and concentration patterns of these heavy metals, their concentrations were compared between districts with different grain sizes. The results show that variations in grain size cause remarkable differences in the concentrations of heavy metals. Total concentrations of heavy metals are 147.37 g kg⁻¹, 98.68 g kg⁻¹ and 94.27 g kg⁻¹ in the mud flat, mud-silt flat and silt flat, respectively. The majority of the heavy metals tend to concentrate in fine-grained mud, while Ba and Sr tend to concentrate in coarse-grained silt, which contains more K2O·Al2O3·6SiO2. The concentration of Sr is affected significantly by grain size, while the concentrations of Cr and Ti are only slightly affected by grain size.

  19. Estimation of the target stem-cell population size in chronic myeloid leukemogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Radivoyevitch, T. [Department of Biometry and Epidemiology, Medical University of South Carolina, Charleston, SC 29425 (United States); Ramsey, M.J.; Tucker, J.D. [Biology and Biotechnology Research Program, L-452, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States)

    1999-09-01

    Estimation of the number of hematopoietic stem cells capable of causing chronic myeloid leukemia (CML) is relevant to the development of biologically based risk models of radiation-induced CML. Through a comparison of the age structure of CML incidence data from the Surveillance, Epidemiology, and End Results (SEER) Program and the age structure of chromosomal translocations found in healthy subjects, the number of CML target stem cells is estimated for individuals above 20 years of age. The estimation involves three steps. First, CML incidence among adults is fit to an exponentially increasing function of age. Next, assuming a relatively short waiting time distribution between BCR-ABL induction and the appearance of CML, an exponential age function with rate constants fixed to the values found for CML is fitted to the translocation data. Finally, assuming that translocations are equally likely to occur between any two points in the genome, the parameter estimates found in the first two steps are used to estimate the number of target stem cells for CML. The population-averaged estimates of this number are found to be 1.86 × 10⁸ for men and 1.21 × 10⁸ for women; the 95% confidence intervals of these estimates are (1.34 × 10⁸, 2.50 × 10⁸) and (0.84 × 10⁸, 1.83 × 10⁸), respectively. (orig.)

  20. A multi-scale study of Orthoptera species richness and human population size controlling for sampling effort.

    Science.gov (United States)

    Cantarello, Elena; Steck, Claude E; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco

    2010-03-01

    Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy's regions (average area 15,000 km²) and provinces (2,900 km²). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.
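
    The Chao2 and first-order jackknife estimators mentioned above are simple functions of incidence (presence/absence) data across sampling units. The sketch below shows textbook forms of both (the bias-corrected Chao2 is used to avoid division by zero); the occurrence matrix is synthetic and purely illustrative, not data from the study.

        import numpy as np

        def chao2(incidence):
            """Bias-corrected Chao2 estimator from a sites x species presence/absence matrix."""
            m = incidence.shape[0]                       # number of sampling units
            freq = incidence.sum(axis=0)                 # number of units each species occurs in
            s_obs = np.sum(freq > 0)
            q1, q2 = np.sum(freq == 1), np.sum(freq == 2)
            return s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))

        def jackknife1(incidence):
            """First-order jackknife estimator of species richness."""
            m = incidence.shape[0]
            freq = incidence.sum(axis=0)
            return np.sum(freq > 0) + np.sum(freq == 1) * (m - 1) / m

        rng = np.random.default_rng(2)
        true_richness, n_sites = 80, 12
        detect_p = rng.uniform(0.02, 0.5, true_richness)          # uneven detectability
        occ = rng.random((n_sites, true_richness)) < detect_p     # synthetic occurrence matrix
        print("observed:", int(np.sum(occ.sum(axis=0) > 0)),
              "Chao2:", round(chao2(occ), 1),
              "Jackknife1:", round(jackknife1(occ), 1))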

  2. Early detection of nonnative alleles in fish populations: When sample size actually matters

    Science.gov (United States)

    Croce, Patrick Della; Poole, Geoffrey C.; Payne, Robert A.; Gresswell, Bob

    2017-01-01

    Reliable detection of nonnative alleles is crucial for the conservation of sensitive native fish populations at risk of introgression. Typically, nonnative alleles in a population are detected through the analysis of genetic markers in a sample of individuals. Here we show that common assumptions associated with such analyses yield substantial overestimates of the likelihood of detecting nonnative alleles. We present a revised equation to estimate the likelihood of detecting nonnative alleles in a population with a given level of admixture. The new equation incorporates the effects of the genotypic structure of the sampled population and shows that conventional methods overestimate the likelihood of detection, especially when nonnative or F-1 hybrid individuals are present. Under such circumstances—which are typical of early stages of introgression and therefore most important for conservation efforts—our results show that improved detection of nonnative alleles arises primarily from increasing the number of individuals sampled rather than increasing the number of genetic markers analyzed. Using the revised equation, we describe a new approach to determining the number of individuals to sample and the number of diagnostic markers to analyze when attempting to monitor the arrival of nonnative alleles in native populations.
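
    As a rough illustration of why genotypic structure matters, the sketch below contrasts the conventional detection-probability assumption (every gene copy independently carries a nonnative allele with probability equal to the admixture proportion) with an early-introgression scenario in which all nonnative alleles are confined to a handful of F1 hybrids. The revised equation from the paper is not reproduced here, and all numbers are hypothetical.

        from math import comb

        def p_detect_conventional(q, n_individuals, n_markers):
            """Conventional assumption: each of the 2*n*m sampled gene copies
            independently carries a nonnative allele with probability q."""
            return 1.0 - (1.0 - q) ** (2 * n_individuals * n_markers)

        def p_detect_f1_carriers(N, n_hybrids, n_sampled):
            """Early introgression: nonnative alleles are confined to n_hybrids F1
            individuals in a population of N; detection requires sampling at least one
            of them (a sampled F1 is flagged by any diagnostic marker)."""
            return 1.0 - comb(N - n_hybrids, n_sampled) / comb(N, n_sampled)

        # Population of 500 with 5 F1 hybrids -> overall admixture q = 5 * 0.5 / 500
        N, h, n, m = 500, 5, 30, 10
        q = h * 0.5 / N
        print(p_detect_conventional(q, n, m))   # ~0.95: optimistic under the usual assumption
        print(p_detect_f1_carriers(N, h, n))    # ~0.27: adding markers cannot help; only more
                                                # sampled individuals raise this probability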

  3. Children's Use of Sample Size and Diversity Information within Basic-Level Categories.

    Science.gov (United States)

    Gutheil, Grant; Gelman, Susan A.

    1997-01-01

    Three studies examined the ability of 8- and 9-year-olds and young adults to use sample monotonicity and diversity information according to the similarity-coverage model of category-based induction. Found that children's difficulty with this information was independent of category level, and may be based on preferences for other strategies…

  4. Joint risk of interbasin water transfer and impact of the window size of sampling low flows under environmental change

    Science.gov (United States)

    Tu, Xinjun; Du, Xiaoxia; Singh, Vijay P.; Chen, Xiaohong; Du, Yiliang; Li, Kun

    2017-11-01

    Constructing a joint distribution of low flows between the donor and recipient basins and analyzing their joint risk are commonly required for implementing interbasin water transfer. In this study, daily streamflow data of bi-basin low flows were sampled at window sizes from 3 to 183 days by using the annual minimum method. The stationarity of low flows was tested by a change point analysis, and non-stationary low flows were reconstructed by using the moving mean method. Three bivariate Archimedean copulas and five common univariate distributions were applied to fit the joint and marginal distributions of bi-basin low flows. Then, by considering the window size of sampling low flows under environmental change, the change in the joint risk of interbasin water transfer was investigated. Results showed that the non-stationarity of low flows in the recipient basin at all window sizes was significant due to the regulation of water reservoirs. The general extreme value distribution was found to fit the marginal distributions of bi-basin low flows. The three Archimedean copulas satisfactorily fitted the joint distribution of bi-basin low flows, and the Frank copula was found to be comparatively better. The moving mean method differentiated the location parameter of the GEV distribution, but did not differentiate the scale and shape parameters, or the copula parameters. Due to environmental change, in particular the regulation of water reservoirs in the recipient basin, the decrease in the joint synchronous risk of bi-basin water shortage was slight, but the decrease in the joint synchronous assurance of water transfer from the donor was remarkable. With the enlargement of the window size of sampling low flows, both the joint synchronous risk of bi-basin water shortage, and the joint synchronous assurance of water transfer from the donor basin when there was a water shortage in the recipient basin, exhibited a decreasing trend, but their changes were with a slight fluctuation, in
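
    A minimal sketch of the copula step is given below, assuming synthetic data in place of the donor/recipient annual-minimum series and fitting the Frank copula by maximum pseudo-likelihood on rank-transformed margins; the fitted parameter is then used to evaluate a joint synchronous low-flow probability. None of the numerical values relate to the basins studied in the paper.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import rankdata

        def frank_logpdf(u, v, theta):
            """Log-density of the Frank copula (theta > 0 assumed here)."""
            e = np.exp(-theta)
            eu, ev = np.exp(-theta * u), np.exp(-theta * v)
            num = theta * (1.0 - e) * eu * ev
            den = ((1.0 - e) - (1.0 - eu) * (1.0 - ev)) ** 2
            return np.log(num) - np.log(den)

        rng = np.random.default_rng(3)
        # Synthetic stand-in for paired annual-minimum flows in donor and recipient basins
        x = rng.gamma(4.0, 10.0, 60)
        y = 0.6 * x + rng.gamma(4.0, 6.0, 60)        # induce positive dependence

        # Pseudo-observations: empirical CDF ranks rescaled into (0, 1)
        u = rankdata(x) / (len(x) + 1.0)
        v = rankdata(y) / (len(y) + 1.0)

        nll = lambda theta: -np.sum(frank_logpdf(u, v, theta))
        theta_hat = minimize_scalar(nll, bounds=(0.01, 50.0), method="bounded").x
        print("fitted Frank copula parameter:", round(theta_hat, 2))

        # Joint synchronous risk: both basins below their 20th-percentile low flow
        q = 0.2
        C = -np.log(1.0 + (np.exp(-theta_hat * q) - 1.0) ** 2
                    / (np.exp(-theta_hat) - 1.0)) / theta_hat
        print("P(both basins in low-flow state):", round(C, 3))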

  5. Size-dependent ultrafast ionization dynamics of nanoscale samples in intense femtosecond x-ray free-electron-laser pulses.

    Science.gov (United States)

    Schorb, Sebastian; Rupp, Daniela; Swiggers, Michelle L; Coffee, Ryan N; Messerschmidt, Marc; Williams, Garth; Bozek, John D; Wada, Shin-Ichi; Kornilov, Oleg; Möller, Thomas; Bostedt, Christoph

    2012-06-08

    All matter exposed to intense femtosecond x-ray pulses from the Linac Coherent Light Source free-electron laser is strongly ionized on time scales competing with the inner-shell vacancy lifetimes. We show that for nanoscale objects the environment, i.e., nanoparticle size, is an important parameter for the time-dependent ionization dynamics. The Auger lifetimes of large Ar clusters are found to be increased compared to small clusters and isolated atoms, due to delocalization of the valence electrons in the x-ray-induced nanoplasma. As a consequence, large nanometer-sized samples absorb intense femtosecond x-ray pulses less efficiently than small ones.

  6. Quantification and size characterisation of silver nanoparticles in environmental aqueous samples and consumer products by single particle-ICPMS.

    Science.gov (United States)

    Aznar, Ramón; Barahona, Francisco; Geiss, Otmar; Ponti, Jessica; José Luis, Tadeo; Barrero-Moreno, Josefa

    2017-12-01

    Single particle-inductively coupled plasma mass spectrometry (SP-ICPMS) is a promising technique able to generate the number-based particle size distribution (PSD) of nanoparticles (NPs) in aqueous suspensions. However, SP-ICPMS analysis is not yet consolidated as a routine technique and is not typically applied to real test samples with unknown composition. This work presents a methodology to detect, quantify and characterise the number-based PSD of Ag-NPs in different environmental aqueous samples (drinking and lake waters), aqueous samples derived from migration tests and consumer products using SP-ICPMS. The procedure is built from a pragmatic view and involves the analysis of serial dilutions of the original sample until no variation in the measured size values is observed while keeping particle counts proportional to the dilution applied. After evaluation of the analytical figures of merit, the SP-ICPMS method exhibited excellent linearity (r² > 0.999) in the range (1–25) × 10⁴ particles mL⁻¹ for 30, 50 and 80 nm nominal size Ag-NP standards. The precision in terms of repeatability was studied according to the RSDs of the measured size and particle number concentration values, and a t-test (p = 95%) at the two intermediate concentration levels was applied to determine the bias of SP-ICPMS size values compared to reference values. The method showed good repeatability and an overall acceptable bias in the studied concentration range. The experimental minimum detectable size for Ag-NPs ranged between 12 and 15 nm. Additionally, results derived from direct SP-ICPMS analysis were compared to the results obtained for fractions collected by asymmetric flow-field flow fractionation and supernatant fractions after centrifugal filtration. The method has been successfully applied to determine the presence of Ag-NPs in: lake water; tap water; tap water filtered by a filter jar; seven different liquid silver-based consumer products; and migration solutions (pure water and

  7. Dosimetric verification by using the ArcCHECK system and 3DVH software for various target sizes.

    Directory of Open Access Journals (Sweden)

    Jin Ho Song

    Full Text Available To investigate the usefulness of the 3DVH software with an ArcCHECK 3D diode array detector in newly designed plans with various target sizes. The isocenter dose was measured with an ion chamber and was compared with the planned and 3DVH-predicted doses. The 2D gamma passing rates were evaluated at the diode level by using the ArcCHECK detector. The 3D gamma passing rates for specific regions of interest (ROIs) were also evaluated by using the 3DVH software. Several dose-volume histogram (DVH)-based predicted metrics for all structures were also obtained by using the 3DVH software. The isocenter dose deviation was <1% in all plans except in the case of a 1 cm target. Besides the gamma passing rate at the diode level, the 3D gamma passing rate for specific ROIs tended to decrease with increasing target size; this was more noticeable when a more stringent gamma criterion was applied. No correlation was found between the gamma passing rates and the DVH-based metrics, especially in ROIs with high dose gradients. Delivery quality assurance by using 3DVH and ArcCHECK can provide substantial information through a simple and easy approach, although the accuracy of this system should be judged cautiously.
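
    The gamma passing rate referred to above combines a dose-difference tolerance with a distance-to-agreement criterion. The sketch below is a simplified one-dimensional global gamma calculation on synthetic profiles (the actual ArcCHECK/3DVH analysis is performed on 2D diode measurements and reconstructed 3D dose, and is not reproduced here); the 3%/3 mm tolerances and the profiles are illustrative assumptions.

        import numpy as np

        def gamma_index_1d(positions, dose_ref, dose_eval, dta_mm=3.0, dose_tol=0.03):
            """1-D global gamma index; dose tolerance is a fraction of the reference maximum."""
            d_max = dose_ref.max()
            gammas = np.empty(len(positions))
            for i, (x_e, d_e) in enumerate(zip(positions, dose_eval)):
                dist2 = ((positions - x_e) / dta_mm) ** 2
                dose2 = ((dose_ref - d_e) / (dose_tol * d_max)) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))   # best agreement over reference points
            return gammas

        x = np.linspace(-50, 50, 201)                     # mm
        ref = np.exp(-(x / 20.0) ** 2)                    # reference dose profile
        ev = 1.02 * np.exp(-((x - 1.0) / 20.5) ** 2)      # slightly shifted and scaled measurement
        g = gamma_index_1d(x, ref, ev)
        print("gamma passing rate:", round(np.mean(g <= 1.0) * 100, 1), "%")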

  8. Sample size calculation based on generalized linear models for differential expression analysis in RNA-seq data.

    Science.gov (United States)

    Li, Chung-I; Shyr, Yu

    2016-12-01

    As RNA-seq rapidly develops and costs continually decrease, the quantity and frequency of samples being sequenced will grow exponentially. With proteomic investigations becoming more multivariate and quantitative, determining a study's optimal sample size is now a vital step in experimental design. Current methods for calculating a study's required sample size are mostly based on the hypothesis testing framework, which assumes each gene count can be modeled through Poisson or negative binomial distributions; however, these methods are limited when it comes to accommodating covariates. To address this limitation, we propose an estimating procedure based on the generalized linear model. This easy-to-use method constructs a representative exemplary dataset and estimates the conditional power, all without requiring complicated mathematical approximations or formulas. Even more attractive, the downstream analysis can be performed with current R/Bioconductor packages. To demonstrate the practicability and efficiency of this method, we apply it to three real-world studies, and introduce our on-line calculator developed to determine the optimal sample size for a RNA-seq study.
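
    A simulation-based version of the idea, using a Wald test on the group coefficient of a negative binomial GLM for a single gene, is sketched below. It is an illustrative stand-in (plain Python/statsmodels with a fixed, known dispersion, an unadjusted 0.05 level, and made-up means and fold changes) rather than the authors' R/Bioconductor procedure, which builds an exemplary dataset and estimates conditional power across genes.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)

        def nb_counts(mu, disp, size):
            """Negative-binomial counts with mean mu and dispersion disp (var = mu + disp*mu^2)."""
            r = 1.0 / disp
            return rng.negative_binomial(r, r / (r + mu), size)

        def power_one_gene(n_per_group, mu0=100.0, fold_change=1.5, disp=0.1,
                           n_sim=400, level=0.05):
            """Monte-Carlo power of the Wald test on the group effect in a NB GLM (log link)."""
            group = np.repeat([0.0, 1.0], n_per_group)
            X = sm.add_constant(group)
            hits = 0
            for _ in range(n_sim):
                y = np.concatenate([nb_counts(mu0, disp, n_per_group),
                                    nb_counts(mu0 * fold_change, disp, n_per_group)])
                fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=disp)).fit()
                hits += fit.pvalues[1] < level
            return hits / n_sim

        for n in (3, 5, 8, 12):
            print(n, "samples per group -> estimated power", power_one_gene(n))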

  9. A spectroscopic sample of massive, quiescent z ∼ 2 galaxies: implications for the evolution of the mass-size relation

    Energy Technology Data Exchange (ETDEWEB)

    Krogager, J.-K.; Zirm, A. W.; Toft, S.; Man, A. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen O (Denmark); Brammer, G. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21210 (United States)

    2014-12-10

    We present deep, near-infrared Hubble Space Telescope/Wide Field Camera 3 grism spectroscopy and imaging for a sample of 14 galaxies at z ≈ 2 selected from a mass-complete photometric catalog in the COSMOS field. By combining the grism observations with photometry in 30 bands, we derive accurate constraints on their redshifts, stellar masses, ages, dust extinction, and formation redshifts. We show that the slope and scatter of the z ∼ 2 mass-size relation of quiescent galaxies is consistent with the local relation, and confirm previous findings that the sizes for a given mass are smaller by a factor of two to three. Finally, we show that the observed evolution of the mass-size relation of quiescent galaxies between z = 2 and 0 can be explained by the quenching of increasingly larger star-forming galaxies at a rate dictated by the increase in the number density of quiescent galaxies with decreasing redshift. However, we find that the scatter in the mass-size relation should increase in the quenching-driven scenario, in contrast to what is seen in the data. This suggests that merging is not needed to explain the evolution of the median mass-size relation of massive galaxies, but may still be required to tighten its scatter, and to explain the size growth of individual z = 2 quiescent galaxies.

  10. Design and Demonstration of a Material-Plasma Exposure Target Station for Neutron Irradiated Samples

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Juergen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aaron, A. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bell, Gary L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burgess, Thomas W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ellis, Ronald James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Giuliano, D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Howard, R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kiggans, James O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lessard, Timothy L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ohriner, Evan Keith [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Perkins, Dale E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Varma, Venugopal Koikal [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-10-20

    …steady-state heat fluxes of 5–20 MW/m² and ion fluxes up to 10²⁴ m⁻²s⁻¹. Since PFCs will have to withstand neutron irradiation displacement damage up to 50 dpa, the target station design must accommodate radioactive specimens (materials to be irradiated in HFIR or at SNS) to enable investigations of the impact of neutron damage on materials. Therefore, the system will have to be able to install and extract irradiated specimens using equipment and methods to avoid sample modification, control contamination, and minimize worker dose. Included in the design considerations will be an assessment of all the steps between neutron irradiation and post-exposure materials examination/characterization, as well as an evaluation of the facility hazard categorization. In particular, the factors associated with the acquisition of radioactive specimens and their preparation, transportation, experimental configuration at the plasma-specimen interface, post-plasma-exposure sample handling, and specimen preparation will be evaluated. Neutronics calculations to determine the dose rates of the samples were carried out for a large number of potential plasma-facing materials.

  11. Production Lot Sizing and Process Targeting under Process Deterioration and Machine Breakdown Conditions

    Directory of Open Access Journals (Sweden)

    Muhammad Al-Salamah

    2012-01-01

    Full Text Available The paper considers a production facility that might deteriorate suddenly at some point during the production run time; after deterioration, nonconforming items are produced at a greater rate than before deterioration. Moreover, the production facility may ultimately break down; consequently, the production lot is aborted before completion. If breakdown happens, corrective action is started immediately; otherwise, the production lot is completed and preventive repair is implemented at the end of the production cycle to enhance system reliability. The mathematical model is formulated under general distributions of failure, corrective, and repair times, while the numerical examples are solved under exponential failure and uniform repair times. The formulated model successfully determines the optimal lot size in addition to the optimal process parameters (mean and standard deviation) simultaneously.

  12. Scaling of impact-generated cavity-size for highly porous targets and its application to cometary surfaces

    Science.gov (United States)

    Okamoto, Takaya; Nakamura, Akiko M.

    2017-08-01

    Detailed images of highly porous small bodies show the variety of their surfaces. One of the interesting findings is that the depressions on comets look shallower than simple craters such as those on the moon; that is, the depth-to-diameter ratio of the depressions is smaller than ∼0.2. Although the mechanisms for the formation of the depressions are controversial, such as collapse after the sublimation of sub-surface volatiles or post-impact activities such as sublimation and viscous relaxation, the shape of the cavity formed on a highly porous surface by the impact itself has not been studied much. We performed impact experiments on sintered glass-bead targets with porosities of ∼94% and 87%, as well as gypsum targets with a porosity of ∼50% and pumice targets with a porosity of 74%. The cavity formed in a porous target by impact has its maximum diameter at some depth from the target surface. This type of cavity is called a bulb-shaped cavity. In addition to the results of this study, we also compiled the results of previous impact experiments for cavity sizes in which targets with porosity larger than 30% were used. New empirical scaling relations for the maximum diameter and the bulb depth over a wide range of target porosity were then obtained. We applied the relations to comets and showed that the surface strength and the particle size of comet 9P/Tempel 1 are estimated to be of the order of 10¹-10³ Pa and, with the assumption of ice grains consisting of monodisperse spheres, to be larger than ∼90 μm, respectively. The ratio of bulb depth to maximum diameter on a comet derived from the extrapolation of the scaling relations implies that the ratio on a weak surface with strength less than 10² Pa would be 0.10 or below, which is smaller than the depth-to-diameter ratio of simple craters, ∼0.2. It suggests a possibility that shallow depressions on comets could be formed only by impact without the need for subsequent activities, such as sublimation and

  13. Structural impact of armor monoblock dimensions on the failure behavior of ITER-type divertor target components: Size matters

    Energy Technology Data Exchange (ETDEWEB)

    Li, Muyuan; You, Jeong-Ha, E-mail: you@ipp.mpg.de

    2016-12-15

    Highlights: • Quantitative assessment of size effects was conducted numerically for W monoblock. • Decreasing the width of W monoblock leads to a lower risk of failure. • The Cu interlayer was not affected significantly by varying armor thickness. • The predicted trends were in line with the experimental observations. - Abstract: Plenty of high-heat-flux tests conducted on tungsten monoblock type divertor target mock-ups showed that the threshold heat flux density for cracking and fracture of tungsten armor seems to be related to the dimension of the monoblocks. Thus, quantitative assessment of such size effects is of practical importance for divertor target design. In this paper, a computational study about the thermal and structural impact of monoblock size on the plastic fatigue and fracture behavior of an ITER-type tungsten divertor target is reported. As dimensional parameters, the width and thickness of monoblock, the thickness of sacrificial armor, and the inner diameter of cooling tube were varied. Plastic fatigue lifetime was estimated for the loading surface of tungsten armor and the copper interlayer by use of a cyclic-plastic constitutive model. The driving force of brittle crack growth through the tungsten armor was assessed in terms of J-integral at the crack tip. Decrease of the monoblock width effectively reduced accumulation of plastic strain at the armor surface and the driving force of brittle cracking. Decrease of sacrificial armor thickness led to decrease of plastic deformation at the loading surface due to lower surface temperature, but the thermal and mechanical response of the copper interlayer was not affected by the variation of armor thickness. Monoblock with a smaller tube diameter but with the same armor thickness and shoulder thickness experienced lower fatigue load. The predicted trends were in line with the experimental observations.

  14. [Sample size for the estimation of F-wave parameters in healthy volunteers and amyotrophic lateral sclerosis patients].

    Science.gov (United States)

    Fang, J; Cui, L Y; Liu, M S; Guan, Y Z; Ding, Q Y; Du, H; Li, B H; Wu, S

    2017-03-07

    Objective: The study aimed to investigate whether sample sizes for F-wave studies differed according to different nerves, different F-wave parameters, and between amyotrophic lateral sclerosis (ALS) patients and healthy subjects. Methods: The F-waves in the median, ulnar, tibial, and deep peroneal nerves of 55 ALS patients and 52 healthy subjects were studied to assess the effect of sample size on the accuracy of measurements of the following F-wave parameters: F-wave minimum latency, maximum latency, mean latency, F-wave persistence, F-wave chronodispersion, and mean and maximum F-wave amplitude. A hundred stimuli were used in the F-wave study. The values obtained from 100 stimuli were considered "true" values and were compared with the corresponding values from smaller samples of 20, 40, 60 and 80 stimuli. F-wave parameters obtained from different sample sizes were compared between the ALS patients and the normal controls. Results: Significant differences were not detected with samples above 60 stimuli for chronodispersion in all four nerves in normal participants. Significant differences were not detected with samples above 40 stimuli for maximum F-wave amplitude in the median, ulnar and tibial nerves in normal participants. When comparing ALS patients and normal controls, significant differences were detected in maximum F-wave latency (median nerve, Z = -3.560), F-wave latency (median nerve, Z = -3.243), F-wave chronodispersion (Z = -3.152), F-wave persistence in the median nerve (Z = 6.139), F-wave amplitude in the tibial nerve (t = 2.981), F-wave amplitude in the ulnar nerve (Z = -2.134), F-wave persistence in the tibial nerve (Z = 2.119), F-wave amplitude in the ulnar nerve (Z = -2.552), and F-wave amplitude in the peroneal nerve (t = 2.693). Sample sizes for F-wave studies therefore differed according to the nerve examined, the F-wave parameter of interest, and whether ALS patients or healthy subjects were studied.

  15. Radiation inactivation (target size analysis) of the gonadotropin-releasing hormone receptor: evidence for a high molecular weight complex

    Energy Technology Data Exchange (ETDEWEB)

    Conn, P.M.; Venter, J.C.

    1985-04-01

    In the present study we used radiation inactivation (target size analysis) to measure the functional mol wt of the GnRH receptor while it is still a component of the plasma membrane. This technique is based on the observation that an inverse relationship exists between the dose-dependent inactivation of a macromolecule by ionizing radiation and the size of that macromolecule. This method demonstrates a mol wt of 136,346 +/- 8120 for the GnRH receptor. This estimate is approximately twice that obtained (60,000) by photoaffinity labeling with a radioactive GnRH analog followed by electrophoresis under denaturing conditions and, accordingly, presents the possibility that the functional receptor consists of a high mol wt complex in its native state. The present studies indicate that the GnRH receptor is either a single weight class of protein or several closely related weight classes, such as might occur due to protein glycosylation.

  16. Sex determination by tooth size in a sample of Greek population.

    Science.gov (United States)

    Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C

    2014-08-01

    Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of the Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had a clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations, and this might explain the lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, a separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that a considerably higher percentage of females than males were correctly classified. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as reliable a method as one would have expected for determining the sex of human remains from a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.

  17. Glucose transport carrier of human erythrocytes. Radiation-target size of glucose-sensitive cytochalasin B binding protein.

    Science.gov (United States)

    Jung, C Y; Hsu, T L; Hah, J S; Cha, C; Haas, M N

    1980-01-25

    Apparent molecular sizes of D-glucose-sensitive cytochalasin B binding protein of human erythrocyte membranes are assessed by applying classical target theory to irradiation-inactivation data. Molecular weights of this protein as it occurs in untreated ghosts, EDTA-treated ghosts, and reconstituted vesicles of Triton extract of ghosts are 220,000, 180,000, and 220,000, respectively. These results, in conjunction with other findings in the literature, suggest that the native form of the glucose transport carrier of human erythrocytes is a tetrameric assembly of a 50,000-dalton monomer or is a dimer of 100,000 daltons.

  18. Applicability of submerged jet model to describe the liquid sample load into measuring chamber of micron and submillimeter sizes

    Science.gov (United States)

    Bulyanitsa, A. L.; Belousov, K. I.; Evstrapov, A. A.

    2017-11-01

    The load of a liquid sample into a measuring chamber is one of the stages of substance analysis in modern devices. Fluid flow is effectively calculated by numerical simulation using application packages, for example, COMSOL MULTIPHYSICS. At the same time, it is often desirable to have an approximate analytical solution. The applicability of a submerged jet model for simulating the liquid sample load is considered for chambers with sizes from hundreds of micrometers to several millimeters. The paper examines the extent to which corrections for cutting off the jet, and its replacement with an energy-equivalent jet, provide acceptable accuracy for evaluating the dynamics of the loading process.

  19. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

    Science.gov (United States)

    Fraley, R. Chris; Vazire, Simine

    2014-01-01

    The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false-positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159

  20. The influence of sampling unit size and spatial arrangement patterns on neighborhood-based spatial structure analyses of forest stands

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.; Zhang, G.; Hui, G.; Li, Y.; Hu, Y.; Zhao, Z.

    2016-07-01

    Aim of study: Neighborhood-based stand spatial structure parameters can quantify and characterize forest spatial structure effectively. How these neighborhood-based structure parameters are influenced by the selection of different numbers of nearest-neighbor trees is unclear, and there is some disagreement in the literature regarding the appropriate number of nearest-neighbor trees to sample around reference trees. Understanding how to efficiently characterize forest structure is critical for forest management. Area of study: Multi-species uneven-aged forests of Northern China. Material and methods: We simulated stands with different spatial structural characteristics and systematically compared their structure parameters when two to eight neighboring trees were selected. Main results: Results showed that values of uniform angle index calculated in the same stand were different with different sizes of structure unit. When tree species and sizes were completely randomly interspersed, different numbers of neighbors had little influence on mingling and dominance indices. Changes of mingling or dominance indices caused by different numbers of neighbors occurred when the tree species or size classes were not randomly interspersed and their changing characteristics can be detected according to the spatial arrangement patterns of tree species and sizes. Research highlights: The number of neighboring trees selected for analyzing stand spatial structure parameters should be fixed. We proposed that the four-tree structure unit is the best compromise between sampling accuracy and costs for practical forest management. (Author)
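
    One of the neighborhood-based parameters in question, the mingling index, is simple to compute for different structure-unit sizes. The sketch below uses a synthetic, randomly interspersed stand (hypothetical coordinates and species labels, not the simulated stands from the study) to illustrate the observation that, when species are completely randomly interspersed, the number of neighbors has little influence on the mean mingling value.

        import numpy as np
        from scipy.spatial import cKDTree

        def mingling_index(coords, species, k=4):
            """Per-tree mingling: fraction of the k nearest neighbours of a different species."""
            tree = cKDTree(coords)
            _, idx = tree.query(coords, k=k + 1)   # k+1 because each point is its own nearest neighbour
            neigh = idx[:, 1:]
            return (species[neigh] != species[:, None]).mean(axis=1)

        rng = np.random.default_rng(5)
        coords = rng.uniform(0, 100, (200, 2))     # stem positions in a 1-ha plot (m)
        species = rng.integers(0, 4, 200)          # four species, randomly interspersed
        for k in (2, 4, 6, 8):
            print(k, "neighbours -> mean mingling:",
                  round(mingling_index(coords, species, k).mean(), 3))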

  1. Size does matter: why polyploid tumor cells are critical drug targets in the war on cancer.

    Directory of Open Access Journals (Sweden)

    Angus Harding

    2014-05-01

    Full Text Available Tumor evolution presents a formidable obstacle that currently prevents the development of truly curative treatments for cancer. In this perspective, we advocate for the hypothesis that tumor cells with significantly elevated genomic content (polyploid tumor cells) facilitate rapid tumor evolution and the acquisition of therapy resistance in multiple incurable cancers. We appeal to studies conducted in yeast, cancer models and cancer patients, which all converge on the hypothesis that polyploidy enables large phenotypic leaps, providing access to many different therapy-resistant phenotypes. We develop a flow-cytometry-based method for quantifying the prevalence of polyploid tumor cells, and show that the frequency of these cells in patient tumors may be higher than is generally appreciated. We then present recent studies identifying promising new therapeutic strategies that could be used to specifically target polyploid tumor cells in cancer patients. We argue that these therapeutic approaches should be incorporated into new treatment strategies aimed at blocking tumor evolution by killing the highly evolvable, therapy-resistant polyploid cell subpopulations, thus helping to maintain patient tumors in a drug-sensitive state.

  2. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
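
    To make the final step concrete, the hedged sketch below converts a non-centrality parameter into asymptotic power for a Pearson chi-square test; the non-centrality value itself is an assumed placeholder, whereas in the paper it is derived from the sample sizes, genotype frequencies, prevalence and misclassification probabilities.

```python
from scipy.stats import chi2, ncx2

alpha, df = 0.05, 2     # df = 2 for a 2 x 3 case/control-by-genotype table
lam = 8.0               # illustrative noncentrality parameter (assumed value)
crit = chi2.ppf(1 - alpha, df)          # rejection threshold under the null
power = ncx2.sf(crit, df, lam)          # asymptotic power under the alternative
print(f"asymptotic power: {power:.3f}")
```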

  3. Determining optimal sample sizes for multistage adaptive randomized clinical trials from an industry perspective using value of information methods.

    Science.gov (United States)

    Chen, Maggie H; Willan, Andrew R

    2013-02-01

    Most often, sample size determinations for randomized clinical trials are based on frequentist approaches that depend on somewhat arbitrarily chosen factors, such as type I and II error probabilities and the smallest clinically important difference. As an alternative, many authors have proposed decision-theoretic (full Bayesian) approaches, often referred to as value of information methods that attempt to determine the sample size that maximizes the difference between the trial's expected utility and its expected cost, referred to as the expected net gain. Taking an industry perspective, Willan proposes a solution in which the trial's utility is the increase in expected profit. Furthermore, Willan and Kowgier, taking a societal perspective, show that multistage designs can increase expected net gain. The purpose of this article is to determine the optimal sample size using value of information methods for industry-based, multistage adaptive randomized clinical trials, and to demonstrate the increase in expected net gain realized. At the end of each stage, the trial's sponsor must decide between three actions: continue to the next stage, stop the trial and seek regulatory approval, or stop the trial and abandon the drug. A model for expected total profit is proposed that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, and the relationship between trial results and probability of regulatory approval. The proposed method is extended to include multistage designs with a solution provided for a two-stage design. An example is given. Significant increases in the expected net gain are realized by using multistage designs. The complexity of the solutions increases with the number of stages, although far simpler near-optimal solutions exist. The method relies on the central limit theorem, assuming that the sample size is sufficiently large so that the relevant statistics are normally distributed. From a value of

  4. RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes

    Directory of Open Access Journals (Sweden)

    Danny J. Kelly

    2005-01-01

    Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of low RNA-derived gene signals to gene signals obtained from standard RNA was poor for genes of low to moderate abundance. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference-sample-based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.

  5. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
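
    As a minimal sketch of the general form of such a statistic (not the paper's estimator or its null-distribution approximations), the code below computes a one-sample diagonal Hotelling-type statistic in which each feature's variance is shrunk toward the median variance; the shrinkage weight and the toy data are assumptions.

```python
import numpy as np

def diag_hotelling_one_sample(X, mu0, shrink=0.5):
    """Diagonal Hotelling-type statistic with a simple variance shrinkage."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    s2 = X.var(axis=0, ddof=1)                               # per-feature variances
    s2_shrunk = shrink * np.median(s2) + (1 - shrink) * s2   # shrink toward median
    return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 200))                               # toy "large p, small n" data
print(diag_hotelling_one_sample(X, np.zeros(200)))
```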

  6. Fixed and Adaptive Parallel Subgroup-Specific Design for Survival Outcomes: Power and Sample Size

    Directory of Open Access Journals (Sweden)

    Miranta Antoniou

    2017-12-01

    Full Text Available Biomarker-guided clinical trial designs, which focus on testing the effectiveness of a biomarker-guided approach to treatment in improving patient health, have drawn considerable attention in the era of stratified medicine with many different designs being proposed in the literature. However, planning such trials to ensure they have sufficient power to test the relevant hypotheses can be challenging and the literature often lacks guidance in this regard. In this study, we focus on the parallel subgroup-specific design, which allows the evaluation of separate treatment effects in the biomarker-positive subgroup and biomarker-negative subgroup simultaneously. We also explore an adaptive version of the design, where an interim analysis is undertaken based on a fixed percentage of target events, with the option to stop each biomarker-defined subgroup early for futility or efficacy. We calculate the number of events and patients required to ensure sufficient power in each of the biomarker-defined subgroups under different scenarios when the primary outcome is time-to-event. For the adaptive version, stopping probabilities are also explored. Since multiple hypotheses are being tested simultaneously, and multiple interim analyses are undertaken, we also focus on controlling the overall type I error rate by way of multiplicity adjustment.

  7. Distribution of human waste samples in relation to sizing waste processing in space

    Science.gov (United States)

    Parker, Dick; Gallagher, S. K.

    1992-01-01

    Human waste processing for closed ecological life support systems (CELSS) in space requires that there be an accurate knowledge of the quantity of wastes produced. Because initial CELSS will be handling relatively few individuals, it is important to know the variation that exists in the production of wastes rather than relying upon mean values that could result in undersizing equipment for a specific crew. On the other hand, because of the costs of orbiting equipment, it is important to design the equipment with a minimum of excess capacity because of the weight that extra capacity represents. A considerable quantity of information that had been independently gathered on waste production was examined in order to obtain estimates of equipment sizing requirements for handling waste loads from crews of 2 to 20 individuals. The recommended design for a crew of 8 should hold 34.5 liters per day (4315 ml/person/day) for urine and stool water and a little more than 1.25 kg per day (154 g/person/day) of human waste solids and sanitary supplies.

  8. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Full Text Available Evaluating measurement equivalence (also known as differential item functioning, DIF) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47% in the power of the MIMIC model for detecting uniform-DIF, respectively. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  9. Dealing with large sample sizes: comparison of a new one spot dot blot method to western blot.

    Science.gov (United States)

    Putra, Sulistyo Emantoko Dwi; Tsuprykov, Oleg; Von Websky, Karoline; Ritter, Teresa; Reichetzeder, Christoph; Hocher, Berthold

    2014-01-01

    Western blot is the gold standard method to determine individual protein expression levels. However, western blot is technically difficult to perform in large sample sizes because it is a time-consuming and labor-intensive process. Dot blot is often used instead when dealing with large sample sizes, but the main disadvantage of existing dot blot techniques is the absence of signal normalization to a housekeeping protein. In this study we established a one dot two development signals (ODTDS) dot blot method employing two different signal development systems. The first signal from the protein of interest was detected by horseradish peroxidase (HRP). The second signal, detecting the housekeeping protein, was obtained by using alkaline phosphatase (AP). Inter-assay variation in results within the ODTDS dot blot and western blot, and intra-assay variation between the two methods, were low (1.04-5.71%) as assessed by the coefficient of variation. The ODTDS dot blot technique can be used instead of western blot when dealing with large sample sizes without a reduction in the accuracy of results.

  10. An augmented probit model for missing predictable covariates in quantal bioassay with small sample size.

    Science.gov (United States)

    Follmann, Dean; Nason, Martha

    2011-09-01

    Quantal bioassay experiments relate the amount or potency of some compound (for example, poison, antibody, or drug) to a binary outcome such as death or infection in animals. For infectious diseases, probit regression is commonly used for inference, and a key measure of potency is given by the ID(P), the amount that results in P% of the animals being infected. In some experiments, a validation set may be used where both direct and proxy measures of the dose are available on a subset of animals, with the proxy being available on all. The proxy variable can be viewed as a messy reflection of the direct variable, leading to an errors-in-variables problem. We develop a model for the validation set and use a constrained seemingly unrelated regression (SUR) model to obtain the distribution of the direct measure conditional on the proxy. We use the conditional distribution to derive a pseudo-likelihood based on probit regression and use the parametric bootstrap for statistical inference. We re-evaluate an old experiment in 21 monkeys where neutralizing antibodies (nAbs) to HIV were measured using an old (proxy) assay in all monkeys and with a new (direct) assay in a validation set of 11 who had sufficient stored plasma. Using our methods, we obtain an estimate of the ID(1) for the new assay, an important target for HIV vaccine candidates. In simulations, we compare the pseudo-likelihood estimates with regression calibration and a full joint likelihood approach. © 2011, The International Biometric Society. No claim to original US Federal works.
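
    For readers unfamiliar with the ID(P) quantity, the hedged sketch below fits a plain probit regression of infection on log dose and inverts it to obtain ID(P); the doses, outcomes and the use of an ordinary probit fit (rather than the paper's augmented pseudo-likelihood model) are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

log_dose = np.log10([1, 3, 10, 30, 100, 300])   # hypothetical dose levels
infected = np.array([0, 1, 2, 5, 8, 10])        # hypothetical infected counts
n_animals = np.full(6, 10)                      # 10 animals per dose group

X = sm.add_constant(log_dose)
endog = np.column_stack([infected, n_animals - infected])   # (successes, failures)
fit = sm.GLM(endog, X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
b0, b1 = fit.params

def id_p(p):
    """Dose at which a fraction p of animals is expected to be infected."""
    return 10 ** ((norm.ppf(p) - b0) / b1)

print(id_p(0.50), id_p(0.01))                   # ID50 and ID1 on the dose scale
```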

  11. Effects of dislocation density and sample-size on plastic yielding at the nanoscale: a Weibull-like framework.

    Science.gov (United States)

    Rinaldi, Antonio

    2011-11-01

    Micro-compression tests have demonstrated that plastic yielding in nanoscale pillars is the result of the fine interplay between the sample size (chiefly the diameter D) and the density of bulk dislocations ρ. The power-law scaling typical of the nanoscale stems from a source-limited regime, which depends on both these sample parameters. Based on the experimental and theoretical results available in the literature, this paper offers a perspective on the joint effect of D and ρ on the yield stress in any plastic regime, and also proposes a schematic graphical map of it. In the sample-size dependent regime, such dependence is cast mathematically into a first-order Weibull-type theory, where the power-law scaling exponent β and the modulus m of an approximate (unimodal) Weibull distribution of source strengths can be related by a simple inverse proportionality. As a corollary, the scaling exponent β may not be a universal number, as speculated in the literature. In this context, the discussion opens the alternative possibility of more general (multimodal) source-strength distributions, which could produce more complex and realistic strengthening patterns than the single power-law usually assumed. The paper re-examines our own experimental data, as well as results of Bei et al. (2008) on Mo-alloy pillars, especially for the sake of emphasizing the significance of a sudden increase in sample response scatter as a warning signal of an incipient source-limited regime.

  12. The design of high-temperature thermal conductivity measurements apparatus for thin sample size

    Directory of Open Access Journals (Sweden)

    Hadi Syamsul

    2017-01-01

    Full Text Available This study presents the design, construction and validation of a thermal conductivity apparatus using steady-state heat-transfer techniques with the capability of testing materials at high temperatures. The design is an improvement on the ASTM D5470 standard, in which meter-bars with equal cross-sectional area are used to extrapolate surface temperatures and measure heat transfer across a sample. The apparatus has two meter-bars, each fitted with three thermocouples. It uses a 1,000-watt heater and cooling water to reach a stable condition. A pressure of 3.4 MPa was applied over the 113.09 mm2 cross-sectional area of the meter-bar, and thermal grease was used to minimize interfacial thermal contact resistance. To determine performance, the apparatus was validated by comparing its results with thermal conductivities obtained with the LINSEIS THB 500. The tests showed thermal conductivities of 15.28 Wm-1K-1 for stainless steel and 38.01 Wm-1K-1 for bronze, differing from the THB 500 values by −2.55% and 2.49%, respectively. Furthermore, the apparatus can measure thermal conductivity up to 400°C, where the result for stainless steel is 19.21 Wm-1K-1 and the difference is 7.93%.
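
    A hedged sketch of the steady-state meter-bar principle behind this kind of apparatus is given below: heat flow is inferred from the temperature gradient along a reference bar of known conductivity, and the sample conductivity then follows from Fourier's law. All numerical values other than the cross-sectional area quoted in the abstract are illustrative assumptions.

```python
k_bar = 15.0                 # W m^-1 K^-1, assumed conductivity of the meter bars
A = 113.09e-6                # m^2, cross-sectional area quoted in the abstract
dT_bar, dx_bar = 6.0, 0.02   # K and m, temperature drop along a meter bar (assumed)
q = k_bar * A * dT_bar / dx_bar          # W, steady-state heat flow (Fourier's law)

t_sample = 0.002             # m, sample thickness (assumed)
dT_sample = 4.0              # K, extrapolated temperature drop across the sample
k_sample = q * t_sample / (A * dT_sample)
print(f"sample conductivity: {k_sample:.1f} W/(m K)")
```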

  13. Sample size affects 13C-18O clumping in CO2 derived from phosphoric acid digestion of carbonates

    Science.gov (United States)

    Wacker, U.; Fiebig, J.

    2011-12-01

    In the recent past, clumped isotope analysis of carbonates has become an important tool for terrestrial and marine paleoclimate reconstructions. For this purpose, 47/44 ratios of CO2 derived from phosphoric acid digestion of carbonates are measured. These values are compared to the corresponding stochastic 47/44 distribution ratios computed from the determined δ13C and δ18O values, with the deviation being finally expressed as Δ47. For carbonates precipitated in equilibrium with their parental water, the magnitude of Δ47 is a function of temperature only. This technique is based on the fact that the isotopic fractionation associated with phosphoric acid digestion of carbonates is kinetically controlled. In this way, the concentration of 13C-18O bonds in the evolved CO2 remains proportional to the number of corresponding bonds inside the carbonate lattice. A relationship between carbonate growth temperature and Δ47 has recently been determined experimentally by Ghosh et al. (2006), who performed the carbonate digestion with 103% H3PO4 at 25°C after precipitating the carbonates inorganically at temperatures ranging from 1-50°C. In order to investigate the kinetic parameters associated with the phosphoric acid digestion reaction at 25°C, we have analyzed several natural carbonates at varying sample sizes. Amongst these are NBS 19, internal Carrara marble, Arctica islandica and cold seep carbonates. Sample size was varied between 4 and 12 mg. All samples exhibit a systematic trend toward increasing Δ47 values with decreasing sample size, with absolute variations being restricted to ≤0.10%. Additional tests imply that this effect is related to the phosphoric acid digestion reaction. Most presumably, either the kinetic fractionation factor expressing the differences in 47/44 ratios between evolved CO2 and parental carbonate slightly depends on the concentration of the digested carbonate or traces of water exchange with C-O-bearing species inside the acid, similar to

  14. Assessment of minimum sample sizes required to adequately represent diversity reveals inadequacies in datasets of domestic dog mitochondrial DNA.

    Science.gov (United States)

    Webb, Kristen; Allard, Marc

    2010-02-01

    Evolutionary and forensic studies commonly choose the mitochondrial control region as the locus for which to evaluate the domestic dog. However, the number of dogs that need to be sampled in order to represent the control region variation present in the worldwide population is yet to be determined. Following the methods of Pereira et al. (2004), we have demonstrated the importance of surveying the complete control region rather than only the popular left domain. We have also evaluated sample saturation in terms of the haplotype number and the number of polymorphisms within the control region. Of the most commonly cited evolutionary research, only a single study has adequately surveyed the domestic dog population, while all forensic studies have failed to meet the minimum values. We recommend that future studies consider dataset size when designing experiments and ideally sample both domains of the control region in an appropriate number of domestic dogs.

  15. How taxonomic diversity, community structure, and sample size determine the reliability of higher taxon surrogates.

    Science.gov (United States)

    Neeson, Thomas M; Van Rijn, Itai; Mandelik, Yael

    2013-07-01

    Ecologists and paleontologists often rely on higher taxon surrogates instead of complete inventories of biological diversity. Despite their intrinsic appeal, the performance of these surrogates has been markedly inconsistent across empirical studies, to the extent that there is no consensus on appropriate taxonomic resolution (i.e., whether genus- or family-level categories are more appropriate) or their overall usefulness. A framework linking the reliability of higher taxon surrogates to biogeographic setting would allow for the interpretation of previously published work and provide some needed guidance regarding the actual application of these surrogates in biodiversity assessments, conservation planning, and the interpretation of the fossil record. We developed a mathematical model to show how taxonomic diversity, community structure, and sampling effort together affect three measures of higher taxon performance: the correlation between species and higher taxon richness, the relative shapes and asymptotes of species and higher taxon accumulation curves, and the efficiency of higher taxa in a complementarity-based reserve-selection algorithm. In our model, higher taxon surrogates performed well in communities in which a few common species were most abundant, and less well in communities with many equally abundant species. Furthermore, higher taxon surrogates performed well when there was a small mean and variance in the number of species per higher taxa. We also show that empirically measured species-higher-taxon correlations can be partly spurious (i.e., a mathematical artifact), except when the species accumulation curve has reached an asymptote. This particular result is of considerable practical interest given the widespread use of rapid survey methods in biodiversity assessment and the application of higher taxon methods to taxa in which species accumulation curves rarely reach an asymptote, e.g., insects.

  16. Low-Z polymer sample supports for fixed-target serial femtosecond X-ray crystallography

    Energy Technology Data Exchange (ETDEWEB)

    Feld, Geoffrey K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); National Institute of Environmental Health Science, Research Triangle Park, NC (United States); Heymann, Michael [Brandeis Univ., Waltham, MA (United States); Univ. of Hamburg and DESY, Hamburg (Germany); Benner, W. Henry [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pardini, Tommaso [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tsai, Ching -Ju [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Boutet, Sebastien [SLAC National Accelerator Lab., Menlo Park, CA (United States); Coleman, Matthew A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hunter, Mark S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Li, Xiaodan [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Messerschmidt, Marc [SLAC National Accelerator Lab., Menlo Park, CA (United States); BioXFEL Science and Technology Center, Buffalo, NY (United States); Opathalage, Achini [Brandeis Univ., Waltham, MA (United States); Pedrini, Bill [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Williams, Garth J. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Krantz, Bryan A. [Univ. of California, Berkeley, CA (United States); Fraden, Seth [Brandeis Univ., Waltham, MA (United States); Hau-Riege, Stefan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Evans, James E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Segelke, Brent W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Frank, Matthias [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-06-27

    X-ray free-electron lasers (XFELs) offer a new avenue to the structural probing of complex materials, including biomolecules. Delivery of precious sample to the XFEL beam is a key consideration, as the sample of interest must be serially replaced after each destructive pulse. The fixed-target approach to sample delivery involves depositing samples on a thin-film support and subsequent serial introduction via a translating stage. Some classes of biological materials, including two-dimensional protein crystals, must be introduced on fixed-target supports, as they require a flat surface to prevent sample wrinkling. A series of wafer and transmission electron microscopy (TEM)-style grid supports constructed of low-Z plastic have been custom-designed and produced. Aluminium TEM grid holders were engineered, capable of delivering up to 20 different conventional or plastic TEM grids using fixed-target stages available at the Linac Coherent Light Source (LCLS). As proof-of-principle, X-ray diffraction has been demonstrated from two-dimensional crystals of bacteriorhodopsin and three-dimensional crystals of anthrax toxin protective antigen mounted on these supports at the LCLS. In conclusion, the benefits and limitations of these low-Z fixed-target supports are discussed; it is the authors' belief that they represent a viable and efficient alternative to previously reported fixed-target supports for conducting diffraction studies with XFELs.

  17. Prediction errors in learning drug response from gene expression data - influence of labeling, sample size, and machine learning algorithm.

    Directory of Open Access Journals (Sweden)

    Immanuel Bayer

    Full Text Available Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment.

  18. Influence of pH, Temperature and Sample Size on Natural and Enforced Syneresis of Precipitated Silica

    Directory of Open Access Journals (Sweden)

    Sebastian Wilhelm

    2015-12-01

    Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Hereby, two open key aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30% in comparison to measured values of 20% for natural syneresis.

  19. Peer groups splitting in Croatian EQA scheme: a trade-off between homogeneity and sample size number.

    Science.gov (United States)

    Vlašić Tanasković, Jelena; Coucke, Wim; Leniček Krleža, Jasna; Vuković Rodriguez, Jadranka

    2017-03-01

    Laboratory evaluation through external quality assessment (EQA) schemes is often performed as 'peer group' comparison under the assumption that matrix effects influence the comparisons between results of different methods, for analytes where no commutable materials with reference value assignment are available. With EQA schemes that are not large but have many available instruments and reagent options for the same analyte, homogeneous peer groups must be created with an adequate number of results to enable satisfactory statistical evaluation. We proposed a multivariate analysis of variance (MANOVA)-based test to evaluate heterogeneity of peer groups within the Croatian EQA biochemistry scheme and identify groups where further splitting might improve laboratory evaluation. EQA biochemistry results were divided according to instruments used per analyte and the MANOVA test was used to verify statistically significant differences between subgroups. The number of samples was determined by sample size calculation ensuring a power of 90% and allowing the false flagging rate to increase by no more than 5%. When statistically significant differences between subgroups were found, clear improvement of laboratory evaluation was assessed before splitting groups. After evaluating 29 peer groups, we found strong evidence for further splitting of six groups. Overall, improvement was observed for 6% of reported results, with the percentage being as high as 27.4% for one particular method. Defining maximal allowable differences between subgroups based on flagging rate change, followed by sample size planning and MANOVA, identifies heterogeneous peer groups where further splitting improves laboratory evaluation and enables continuous monitoring for peer group heterogeneity within EQA schemes.

  20. Spatial Distribution and Minimum Sample Size for Overwintering Larvae of the Rice Stem Borer Chilo suppressalis (Walker) in Paddy Fields.

    Science.gov (United States)

    Arbab, A

    2014-10-01

    The rice stem borer, Chilo suppressalis (Walker), feeds almost exclusively in paddy fields in most regions of the world. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling procedures, and adopting precise agricultural techniques. Field experiments were conducted during 2011 and 2012 to estimate the spatial distribution pattern of the overwintering larvae. Data were analyzed using five distribution indices and two regression models (Taylor and Iwao). All of the indices and Taylor's model indicated a random spatial distribution pattern of the rice stem borer overwintering larvae. Iwao's patchiness regression was inappropriate for our data, as shown by the non-homogeneity of variance, whereas Taylor's power law fitted the data well. The coefficients of Taylor's power law for the combined 2 years of data were a = -0.1118, b = 0.9202 ± 0.02, and r2 = 96.81. Taylor's power law parameters were used to compute the minimum sample size needed to estimate populations at three fixed precision levels, 5, 10, and 25%, at the 0.05 probability level. Results based on these equation parameters suggest that the minimum sample sizes needed for a precision level of 0.25 are 74 and 20 rice stubbles when the average density is near 0.10 and 0.20 larvae per rice stubble, respectively.
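
    As a hedged illustration of how Taylor's power law coefficients feed into a minimum sample size, the sketch below applies the commonly used Green-type expression n = (t/D)^2 · a · m^(b-2) with the coefficients quoted above; this generic formula may differ from the authors' exact equation, so the printed values are illustrative rather than a reproduction of the 74 and 20 stubbles reported.

```python
log_a, b = -0.1118, 0.9202   # Taylor's power law coefficients from the abstract
a = 10 ** log_a              # back-transform the intercept
t, D = 1.96, 0.25            # normal deviate (alpha = 0.05) and fixed precision

for m in (0.10, 0.20):       # mean overwintering larvae per rice stubble
    n = (t / D) ** 2 * a * m ** (b - 2)
    print(f"mean = {m:.2f} larvae/stubble -> n = {n:.0f} stubbles")
```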

  1. Endocranial volume of Australopithecus africanus: new CT-based estimates and the effects of missing data and small sample size.

    Science.gov (United States)

    Neubauer, Simon; Gunz, Philipp; Weber, Gerhard W; Hublin, Jean-Jacques

    2012-04-01

    Estimation of endocranial volume in Australopithecus africanus is important in interpreting early hominin brain evolution. However, the number of individuals available for investigation is limited and most of these fossils are, to some degree, incomplete and/or distorted. Uncertainties of the required reconstruction ('missing data uncertainty') and the small sample size ('small sample uncertainty') both potentially bias estimates of the average and within-group variation of endocranial volume in A. africanus. We used CT scans, electronic preparation (segmentation), mirror-imaging and semilandmark-based geometric morphometrics to generate and reconstruct complete endocasts for Sts 5, Sts 60, Sts 71, StW 505, MLD 37/38, and Taung, and measured their endocranial volumes (EV). To get a sense of the reliability of these new EV estimates, we then used simulations based on samples of chimpanzees and humans to: (a) test the accuracy of our approach, (b) assess missing data uncertainty, and (c) appraise small sample uncertainty. Incorporating missing data uncertainty of the five adult individuals, A. africanus was found to have an average adult endocranial volume of 454-461 ml with a standard deviation of 66-75 ml. EV estimates for the juvenile Taung individual range from 402 to 407 ml. Our simulations show that missing data uncertainty is small given the missing portions of the investigated fossils, but that small sample sizes are problematic for estimating species average EV. It is important to take these uncertainties into account when different fossil groups are being compared. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Arecibo Radar Observation of Near-Earth Asteroids: Expanded Sample Size, Determination of Radar Albedos, and Measurements of Polarization Ratios

    Science.gov (United States)

    Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.

    2017-10-01

    The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, elongated, as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedos for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation. Otherwise we derive a geometric cross-section from a measured diameter. When radar imaging is available, the diameter was measured from the apparent range depth. However, when radar imaging was not available, we used the continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections. We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we

  3. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Science.gov (United States)

    Vasiliu, Daniel; Clamons, Samuel; McDonough, Molly; Rabe, Brian; Saha, Margaret

    2015-01-01

    Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  4. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  5. Hierarchical distance-sampling models to estimate population size and habitat-specific abundance of an island endemic.

    Science.gov (United States)

    Sillett, T Scott; Chandler, Richard B; Royle, J Andrew; Kery, Marc; Morrison, Scott A

    2012-10-01

    Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring respectively, considerably lower than a previously published but statistically problematic estimate of 12 500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural

  6. Reconstruction of enhancer-target networks in 935 samples of human primary cells, tissues and cell lines.

    Science.gov (United States)

    Cao, Qin; Anyansi, Christine; Hu, Xihao; Xu, Liangliang; Xiong, Lei; Tang, Wenshu; Mok, Myth T S; Cheng, Chao; Fan, Xiaodan; Gerstein, Mark; Cheng, Alfred S L; Yip, Kevin Y

    2017-10-01

    We propose a new method for determining the target genes of transcriptional enhancers in specific cells and tissues. It combines global trends across many samples and sample-specific information, and considers the joint effect of multiple enhancers. Our method outperforms existing methods when predicting the target genes of enhancers in unseen samples, as evaluated by independent experimental data. Requiring few types of input data, we are able to apply our method to reconstruct the enhancer-target networks in 935 samples of human primary cells, tissues and cell lines, which constitute by far the largest set of enhancer-target networks. The similarity of these networks from different samples closely follows their cell and tissue lineages. We discover three major co-regulation modes of enhancers and find defense-related genes often simultaneously regulated by multiple enhancers bound by different transcription factors. We also identify differentially methylated enhancers in hepatocellular carcinoma (HCC) and experimentally confirm their altered regulation of HCC-related genes.

  7. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Science.gov (United States)

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.

  8. Toxicity of CdTe QDs with different sizes targeted to HSA investigated by two electrochemical methods.

    Science.gov (United States)

    Xu, Zi-Qiang; Lai, Lu; Li, Dong-Wei; Li, Ran; Xiang, Chen; Jiang, Feng-Lei; Sun, Shao-Fa; Liu, Yi

    2013-02-01

    QDs have large-scale applications in many important areas, with the potential for unintentional exposure to the environment or organisms throughout the life cycle of a nanotechnology-containing product. In this paper, two classical electrochemical methods, cyclic voltammetry and electrochemical impedance spectroscopy, were applied to investigate the influence of the particle size of CdTe QDs on their toxicity toward human serum albumin (HSA) under simulated physiological conditions. The results show that the toxicity of yellow-emitting QDs (YQDs) toward HSA is slightly stronger than that of the green-emitting (GQDs) and red-emitting QDs (RQDs). We also compared these two classical electrochemical methods with traditional fluorescence spectroscopy using the above results. The electrochemical methods may be more accurate and comprehensive for investigating the toxicity of QDs at the biomacromolecular level under certain conditions, though fluorescence spectroscopy is simpler and more sensitive.

  9. Dosimetric verification by using the ArcCHECK system and 3DVH software for various target sizes.

    Science.gov (United States)

    Song, Jin Ho; Shin, Hun-Joo; Kay, Chul Seung; Son, Seok Hyun

    2015-01-01

    To investigate the usefulness of the 3DVH software with an ArcCHECK 3D diode array detector in newly designed plans with various target sizes. The isocenter dose was measured with an ion-chamber and was compared with the planned and 3DVH predicted doses. The 2D gamma passing rates were evaluated at the diode level by using the ArcCHECK detector. The 3D gamma passing rates for specific regions of interest (ROIs) were also evaluated by using the 3DVH software. Several dose-volume histograms (DVH)-based predicted metrics for all structures were also obtained by using the 3DVH software. The isocenter dose deviation was metrics especially in the ROI with high-dose gradients. Delivery quality assurance by using 3DVH and ArcCHECK can provide substantial information through a simple and easy approach, although the accuracy of this system should be judged cautiously.

  10. Strategies on Sample Size Determination and Qualitative and Quantitative Traits Integration to Construct Core Collection of Rice (Oryza sativa

    Directory of Open Access Journals (Sweden)

    Xiao-ling LI

    2011-03-01

    Full Text Available The development of a core collection could enhance the utilization of germplasm collections in crop improvement programs and simplify their management. Selection of an appropriate sampling strategy is an important prerequisite to construct a core collection with appropriate size in order to adequately represent the genetic spectrum and maximally capture the genetic diversity in available crop collections. The present study was initiated to construct nested core collections to determine the appropriate sample size to represent the genetic diversity of a rice landrace collection based on 15 quantitative traits and 34 qualitative traits of 2 262 rice accessions. The results showed that 50–225 nested core collections, whose sampling rate was 2.2%–9.9%, were sufficient to maintain the maximum genetic diversity of the initial collections. Of these, 150 accessions (6.6%) could capture the maximal genetic diversity of the initial collection. Three data types, i.e. qualitative traits (QT1), quantitative traits (QT2) and integrated qualitative and quantitative traits (QTT), were compared for their efficiency in constructing core collections based on the weighted pair-group average method combined with stepwise clustering and preferred sampling on adjusted Euclidean distances. Every combining scheme constructed eight rice core collections (225, 200, 175, 150, 125, 100, 75 and 50). The results showed that the QTT data was the best in constructing a core collection as indicated by the genetic diversity of core collections. A core collection constructed only on the information of QT1 could not represent the initial collection effectively. QTT should be used together to construct a productive core collection.

  11. Targeted sampling of cementum for recovery of nuclear DNA from human teeth and the impact of common decontamination measures

    OpenAIRE

    Higgins, Denice; Kaidonis, John; Townsend, Grant; Hughes, Toby; Austin, Jeremy J.

    2013-01-01

    Background Teeth are a valuable source of DNA for identification of fragmented and degraded human remains. While the value of dental pulp as a source of DNA is well established, the quantity and presentation of DNA in the hard dental tissues has not been extensively studied. Without this knowledge common decontamination, sampling and DNA extraction techniques may be suboptimal. Targeted sampling of specific dental tissues could maximise DNA profiling success, while minimising the need for lab...

  12. Target-of-rapamycin complex 1 (Torc1) signaling modulates cilia size and function through protein synthesis regulation

    Science.gov (United States)

    Yuan, Shiaulou; Li, Jade; Diener, Dennis R.; Choma, Michael A.; Rosenbaum, Joel L.; Sun, Zhaoxia

    2012-01-01

    The cilium serves as a cellular antenna by coordinating upstream environmental cues with numerous downstream signaling processes that are indispensable for the function of the cell. This role is supported by the revelation that defects of the cilium underlie an emerging class of human disorders, termed “ciliopathies.” Although mounting interest in the cilium has demonstrated the essential role that the organelle plays in vertebrate development, homeostasis, and disease pathogenesis, the mechanisms regulating cilia morphology and function remain unclear. Here, we show that the target-of-rapamycin (TOR) growth pathway modulates cilia size and function during zebrafish development. Knockdown of tuberous sclerosis complex 1a (tsc1a), which encodes an upstream inhibitor of TOR complex 1 (Torc1), increases cilia length. In contrast, treatment of embryos with rapamycin, an inhibitor of Torc1, shortens cilia length. Overexpression of ribosomal protein S6 kinase 1 (S6k1), which encodes a downstream substrate of Torc1, lengthens cilia. Furthermore, we provide evidence that TOR-mediated cilia assembly is evolutionarily conserved and that protein synthesis is essential for this regulation. Finally, we demonstrate that TOR signaling and cilia length are pivotal for a variety of downstream ciliary functions, such as cilia motility, fluid flow generation, and the establishment of left-right body asymmetry. Our findings reveal a unique role for the TOR pathway in regulating cilia size through protein synthesis and suggest that appropriate and defined lengths are necessary for proper function of the cilium. PMID:22308353

  13. Autofluorescence-Free Targeted Tumor Imaging Based on Luminous Nanoparticles with Composition-Dependent Size and Persistent Luminescence.

    Science.gov (United States)

    Wang, Jie; Ma, Qinqin; Hu, Xiao-Xiao; Liu, Haoyang; Zheng, Wei; Chen, Xueyuan; Yuan, Quan; Tan, Weihong

    2017-08-22

    Optical bioimaging is an indispensable tool in modern biology and medicine, but the technique is susceptible to autofluorescence interference. Persistent nanophosphors provide an easy-to-perform and highly efficient means to eliminate tissue autofluorescence. However, direct synthesis of persistent nanophosphors with tunable properties to meet different bioimaging requirements remains largely unexplored. In this work, zinc gallogermanate (Zn1+xGa2-2xGexO4:Cr, 0 ≤ x ≤ 0.5, ZGGO:Cr) persistent luminescence nanoparticles with composition-dependent size and persistent luminescence are reported. The size of the ZGGO:Cr nanoparticles gradually increases with the increase of x in the chemical formula. Moreover, the intensity and decay time of persistent luminescence in ZGGO:Cr nanoparticles can also be fine-tuned by simply changing x in the formula. In vivo bioimaging tests demonstrate that ZGGO:Cr nanoparticles can efficiently eliminate tissue autofluorescence, and the nanoparticles also show good promise in long-term bioimaging as they can be easily reactivated in vivo. Furthermore, an aptamer-guided ZGGO:Cr bioprobe is constructed, and it displays excellent tumor-specific accumulation. The ZGGO:Cr nanoparticles are ideal for autofluorescence-free targeted bioimaging, indicating their great potential in monitoring cellular networks and construction of guiding systems for surgery.

  14. Organic composition of size segregated atmospheric particulate matter, during summer and winter sampling campaigns at representative sites in Madrid, Spain

    Science.gov (United States)

    Mirante, Fátima; Alves, Célia; Pio, Casimiro; Pindado, Oscar; Perez, Rosa; Revuelta, M.a. Aranzazu; Artiñano, Begoña

    2013-10-01

    Madrid, the largest city of Spain, has some unique air pollution problems, such as emissions from residential coal burning, a huge vehicle fleet and frequent African dust outbreaks, along with a lack of industrial emissions. The chemical composition of particulate matter (PM) was studied during summer and winter sampling campaigns, conducted in order to obtain size-segregated information at two different urban sites (roadside and urban background). PM was sampled with high-volume cascade impactors with 4 stages (10-2.5, 2.5-1, 1-0.5 and <0.5 μm); alcohols and fatty acids were among the organic compounds chromatographically resolved. The PM1-2.5 was the fraction with the highest mass percentage of organics. Acids were the organic compounds that dominated all particle size fractions. Different organic compounds presented apparently different seasonal characteristics, reflecting distinct emission sources, such as vehicle exhausts and biogenic sources. The benzo[a]pyrene equivalent concentrations were lower than 1 ng m-3. The estimated carcinogenic risk is low.

  15. MRI derived brain atrophy in PSP and MSA-P. Determining sample size to detect treatment effects.

    Science.gov (United States)

    Paviour, Dominic C; Price, Shona L; Lees, Andrew J; Fox, Nick C

    2007-04-01

    Progressive supranuclear palsy (PSP) and multiple system atrophy (MSA) are associated with progressive brain atrophy. Serial MRI can be applied in order to measure this change in brain volume and to calculate atrophy rates. We evaluated MRI-derived whole brain and regional atrophy rates as potential markers of progression in PSP and the Parkinsonian variant of multiple system atrophy (MSA-P). Seventeen patients with PSP, 9 with MSA-P and 18 healthy controls underwent two MRI brain scans. MRI scans were registered, and brain and regional atrophy rates (midbrain, pons, cerebellum, third and lateral ventricles) measured. Sample sizes required to detect the effect of a proposed disease-modifying treatment were estimated. The effect of scan interval on the variance of the atrophy rates and sample size was assessed. Based on the calculated yearly rates of atrophy, for a drug effect equivalent to a 30% reduction in atrophy, fewer PSP subjects are required in each treatment arm when using midbrain rather than whole brain atrophy rates (183 cf. 499). Fewer MSA-P subjects are required using pontine/cerebellar rather than whole brain atrophy rates (164/129 cf. 794). A reduction in the variance of measured atrophy rates was observed with a longer scan interval. Regional rather than whole brain atrophy rates calculated from volumetric serial MRI brain scans in PSP and MSA-P provide a more practical and powerful means of monitoring disease progression in clinical trials.
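
    For orientation, a generic two-arm sample size calculation of the kind implied here is sketched below; the atrophy rate and its standard deviation are assumed placeholder values rather than the paper's measurements, and the paper's own rates and variances would need to be substituted to reproduce its numbers.

```python
import numpy as np
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # standard normal quantiles

rate, sd = 2.0, 1.5     # %/year mean atrophy rate and its SD (assumed placeholders)
delta = 0.30 * rate     # treatment effect: a 30% reduction in the atrophy rate
n_per_arm = 2 * (z * sd / delta) ** 2
print(int(np.ceil(n_per_arm)))                  # subjects required per treatment arm
```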

  16. The effects of particle size and molecular targeting on the intratumoral and subcellular distribution of polymeric nanoparticles.

    Science.gov (United States)

    Lee, Helen; Fonge, Humphrey; Hoang, Bryan; Reilly, Raymond M; Allen, Christine

    2010-08-02

    The current study describes the impact of particle size and/or molecular targeting (epidermal growth factor, EGF) on the in vivo transport of block copolymer micelles (BCMs) in athymic mice bearing human breast cancer xenografts that express differential levels of EGF receptors (EGFR). BCMs with diameters of 25 nm (BCM-25) and 60 nm (BCM-60) were labeled with indium-111 ((111)In) or a fluorescent probe to provide a quantitative and qualitative means of evaluating their whole body, intratumoral, and subcellular distributions. BCM-25 was found to clear rapidly from the plasma compared to BCM-60, leading to an almost 2-fold decrease in their total tumor accumulation. However, the tumoral clearance of BCM-25 was delayed through EGF functionalization, enabling the targeted BCM-25 (T-BCM-25) to achieve a comparable level of total tumor deposition as the nontargeted BCM-60 (NT-BCM-60). Confocal fluorescence microscopy combined with MATLAB analyses revealed that NT-BCM-25 diffuses further away from the blood vessels (D(mean) = 42 +/- 9 microm) following extravasation, compared to NT-BCM-60 which mainly remains in the perivascular regions (D(mean) = 23 +/- 4 microm). The introduction of molecular targeting imposes the "binding site barrier" effect, which retards the tumor penetration of T-BCM-25 (D(mean) = 29 +/- 7 microm, p < 0.05). The intrinsic nuclear translocation property of EGF/EGFR leads to a significant increase in the nuclear uptake of T-BCM-25 in vitro and in vivo via active transport. Overall, these results highlight the need to consider multiple design parameters in the development of nanosystems for delivery of anticancer agents.
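
    The D(mean) values above come from a MATLAB analysis of confocal images; as a rough illustration of how such a penetration metric can be computed, the sketch below takes a binary vessel mask and a nanoparticle-signal mask and averages a Euclidean distance transform. The masks and pixel size are toy assumptions, not the paper's data or pipeline.

```python
# Sketch: mean nanoparticle distance from the nearest blood vessel, a Python
# analogue of a D_mean-style analysis. Masks and pixel size are hypothetical.
import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_distance_um(vessel_mask, particle_mask, um_per_pixel=1.0):
    """vessel_mask, particle_mask: 2-D boolean arrays from a confocal section."""
    # distance (in pixels) from every pixel to the nearest vessel pixel
    dist = distance_transform_edt(~vessel_mask)
    return dist[particle_mask].mean() * um_per_pixel

rng = np.random.default_rng(0)
vessels = np.zeros((200, 200), bool)
vessels[:, 95:105] = True                       # a toy vessel running through the field
particles = rng.random((200, 200)) < 0.01       # toy nanoparticle signal
print(f"D_mean = {mean_distance_um(vessels, particles, 0.5):.1f} um")
```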

  17. Nintendo Wii Fit as an adjunct to physiotherapy following lower limb fractures: preliminary feasibility, safety and sample size considerations.

    Science.gov (United States)

    McPhail, S M; O'Hara, M; Gane, E; Tonks, P; Bullock-Saxton, J; Kuys, S S

    2016-06-01

    The Nintendo Wii Fit integrates virtual gaming with body movement, and may be suitable as an adjunct to conventional physiotherapy following lower limb fractures. This study examined the feasibility and safety of using the Wii Fit as an adjunct to outpatient physiotherapy following lower limb fractures, and reports sample size considerations for an appropriately powered randomised trial. Ambulatory patients receiving physiotherapy following a lower limb fracture participated in this study (n=18). All participants received usual care (individual physiotherapy). The first nine participants also used the Wii Fit under the supervision of their treating clinician as an adjunct to usual care. Adverse events, fracture malunion or exacerbation of symptoms were recorded. Pain, balance and patient-reported function were assessed at baseline and discharge from physiotherapy. No adverse events were attributed to either the usual care physiotherapy or Wii Fit intervention for any patient. Overall, 15 (83%) participants completed both assessments and interventions as scheduled. For 80% power in a clinical trial, the number of complete datasets required in each group to detect a small, medium or large effect of the Wii Fit at a post-intervention assessment was calculated at 175, 63 and 25, respectively. The Nintendo Wii Fit was safe and feasible as an adjunct to ambulatory physiotherapy in this sample. When considering a likely small effect size and the 17% dropout rate observed in this study, 211 participants would be required in each clinical trial group. A larger effect size or multiple repeated measures design would require fewer participants. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
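
    The abstract does not state which effect-size conventions were used; under a two-sample normal approximation, Cohen's d of roughly 0.3, 0.5 and 0.8 reproduces the reported 175, 63 and 25, and inflating the small-effect figure for the observed 17% dropout gives 211. The sketch below shows that arithmetic under those assumed values.

```python
# Sketch: reproducing the reported group sizes under a normal approximation.
# The effect sizes (d = 0.3, 0.5, 0.8) and the two-sample z formula are
# assumptions; the published protocol may have used a different method.
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / d) ** 2)

for label, d in [("small", 0.3), ("medium", 0.5), ("large", 0.8)]:
    print(label, n_per_group(d))                          # -> 175, 63, 25

dropout = 0.17
print("recruit per group:", ceil(n_per_group(0.3) / (1 - dropout)))   # -> 211
```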

  18. Reduction of sample size requirements by bilateral versus unilateral research designs in animal models for cartilage tissue engineering.

    Science.gov (United States)

    Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali; Madry, Henning

    2013-11-01

    Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering.
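
    As a rough illustration of how the reported coefficients of variation drive the difference in required joints, the sketch below sizes a two-group comparison with a normal approximation; the detectable difference (15% of the mean score) is a hypothetical value, not one used in the study, and the full animal-number calculation also depends on using both joints per animal.

```python
# Sketch: joints per group implied by the reported coefficients of variation
# (26% unilateral vs 14% bilateral). The detectable difference and the
# normal-approximation formula are illustrative assumptions.
from math import ceil
from scipy.stats import norm

def n_per_group(cv_percent, diff_percent, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (cv_percent / diff_percent) ** 2 * z ** 2)

n_uni = n_per_group(26, 15)
n_bi = n_per_group(14, 15)
print(n_uni, n_bi, f"ratio ~ {n_uni / n_bi:.1f}")
```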

  19. Comparison of three analytical methods to measure the size of silver nanoparticles in real environmental water and wastewater samples

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Ying-jie [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Shih, Yang-hsin, E-mail: yhs@ntu.edu.tw [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Su, Chiu-Hun [Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310, Taiwan (China); Ho, Han-Chen [Department of Anatomy, Tzu-Chi University, Hualien 970, Taiwan (China)

    2017-01-15

    Highlights: • Three emerging techniques to detect NPs in the aquatic environment were evaluated. • The pretreatment of centrifugation to decrease the interference was established. • Asymmetric flow field flow fractionation has a low recovery of NPs. • Hydrodynamic chromatography is recommended to be a low-cost screening tool. • Single particle ICPMS is recommended to accurately measure trace NPs in water. - Abstract: Due to the widespread application of engineered nanoparticles, their potential risk to ecosystems and human health is of growing concern. Silver nanoparticles (Ag NPs) are one of the most extensively produced NPs. Thus, this study aims to develop a method to detect Ag NPs in different aquatic systems. In complex media, three emerging techniques are compared, including hydrodynamic chromatography (HDC), asymmetric flow field flow fractionation (AF4) and single particle inductively coupled plasma-mass spectrometry (SP-ICP-MS). The pre-treatment procedure of centrifugation is evaluated. HDC can estimate the Ag NP sizes, which were consistent with the results obtained from DLS. AF4 can also determine the size of Ag NPs but with lower recoveries, which could result from the interactions between Ag NPs and the working membrane. For the SP-ICP-MS, both the particle size and concentrations can be determined with high Ag NP recoveries. The particle size resulting from SP-ICP-MS also corresponded to the transmission electron microscopy observation (p > 0.05). Therefore, HDC and SP-ICP-MS are recommended for environmental analysis of the samples after our established pre-treatment process. The findings of this study propose a preliminary technique to more accurately determine the Ag NPs in aquatic environments and to use this knowledge to evaluate the environmental impact of manufactured NPs.

  20. Porous iron pellets for AMS C-14 analysis of small samples down to ultra-microscale size (10-25 μgC)

    NARCIS (Netherlands)

    de Rooij, M.; van der Plicht, J.; Meijer, H. A. J.

    We developed the use of a porous iron pellet as a catalyst for AMS C-14 analysis of small samples down to ultra-microscale size (10-25 μgC). It resulted in increased and more stable beam currents through our HVEE 4130 C-14 AMS system, which depend smoothly on the sample size. We find that both the

  1. Sub-Nyquist sampling boosts targeted light transport through opaque scattering media

    CERN Document Server

    Shen, Yuecheng; Ma, Cheng; Wang, Lihong V

    2016-01-01

    Optical time-reversal techniques are being actively developed to focus light through or inside opaque scattering media. When applied to biological tissue, these techniques promise to revolutionize biophotonics by enabling deep-tissue non-invasive optical imaging, optogenetics, optical tweezers and photodynamic therapy. In all previous optical time-reversal experiments, the scattered light field was well-sampled during wavefront measurement and wavefront reconstruction, following the Nyquist sampling criterion. Here, we overturn this conventional practice by demonstrating that even when the scattered field is under-sampled, light can still be focused through or inside opaque media. Even more surprisingly, we show both theoretically and experimentally that the focus achieved by under-sampling is usually about one order of magnitude brighter than that achieved by conventional well-sampling conditions. Moreover, sub-Nyquist sampling improves the signal-to-noise ratio and the collection efficiency of the scattered...

  2. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  3. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    Science.gov (United States)

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. Simulation study. Simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that, when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved.
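
    The paper itself is a simulation study; as a hedged, closed-form counterpart, the sketch below sizes a one-sided, one-sample non-inferiority test of a failure proportion against an external benchmark. The benchmark failure, margin and assumed true failure values are hypothetical and the calculation ignores censoring and competing risks, which is one reason it returns far fewer procedures than the figures quoted above.

```python
# Sketch: normal-approximation sample size for a one-sample non-inferiority test
# of a device failure proportion against an external benchmark. The benchmark
# (5%), margin (2%) and assumed true failure (5%) are hypothetical values.
from math import ceil
from scipy.stats import norm

def n_non_inferiority(p_true, p_benchmark, margin, alpha=0.05, power=0.80):
    # H0: p >= p_benchmark + margin   vs   H1: p < p_benchmark + margin (one-sided)
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(p_true * (1 - p_true) * (z / (p_benchmark + margin - p_true)) ** 2)

print(n_non_inferiority(0.05, 0.05, 0.02))   # procedures needed at the chosen margin
```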

  4. Sub-Nyquist sampling boosts targeted light transport through opaque scattering media.

    Science.gov (United States)

    Shen, Yuecheng; Liu, Yan; Ma, Cheng; Wang, Lihong V

    2017-01-20

    Optical time-reversal techniques are being actively developed to focus light through or inside opaque scattering media. When applied to biological tissue, these techniques promise to revolutionize biophotonics by enabling deep-tissue non-invasive optical imaging, optogenetics, optical tweezing, and phototherapy. In all previous optical time-reversal experiments, the scattered light field was well-sampled during wavefront measurement and wavefront reconstruction, following the Nyquist sampling criterion. Here, we overturn this conventional practice by demonstrating that even when the scattered field is under-sampled, light can still be focused through or inside scattering media. Even more surprisingly, we show both theoretically and experimentally that the focus achieved by under-sampling can be one order of magnitude brighter than that achieved under the well-sampling conditions used in previous works, where 3×3 to 5×5 pixels were used to sample one speckle grain on average. Moreover, sub-Nyquist sampling improves the signal-to-noise ratio and the collection efficiency of the scattered light. We anticipate that this newly explored under-sampling scheme will transform the understanding of optical time reversal and boost the performance of optical imaging, manipulation, and communication through opaque scattering media.

  5. Monitoring the effective population size of a brown bear (Ursus arctos) population using new single-sample approaches.

    Science.gov (United States)

    Skrbinšek, Tomaž; Jelenčič, Maja; Waits, Lisette; Kos, Ivan; Jerina, Klemen; Trontelj, Peter

    2012-02-01

    The effective population size (N(e)) could be the ideal parameter for monitoring populations of conservation concern as it conveniently summarizes both the evolutionary potential of the population and its sensitivity to genetic stochasticity. However, tracing its change through time is difficult in natural populations. We applied four new methods for estimating N(e) from a single sample of genotypes to trace temporal change in N(e) for bears in the Northern Dinaric Mountains. We genotyped 510 bears using 20 microsatellite loci and determined their age. The samples were organized into cohorts with regard to the year when the animals were born and yearly samples with age categories for every year when they were alive. We used the Estimator by Parentage Assignment (EPA) to directly estimate both N(e) and generation interval for each yearly sample. For cohorts, we estimated the effective number of breeders (N(b)) using linkage disequilibrium, sibship assignment and approximate Bayesian computation methods and extrapolated these estimates to N(e) using the generation interval. The N(e) estimate by EPA is 276 (183-350 95% CI), meeting the inbreeding-avoidance criterion of N(e) > 50 but short of the long-term minimum viable population goal of N(e) > 500. The results obtained by the other methods are highly consistent with this result, and all indicate a rapid increase in N(e) probably in the late 1990s and early 2000s. The new single-sample approaches to the estimation of N(e) provide efficient means for including N(e) in monitoring frameworks and will be of great importance for future management and conservation. © 2012 Blackwell Publishing Ltd.

  6. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma that occurs during ensembling. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
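
    The abstract does not detail the tailored 0-1 knapsack formulation; as one plausible reading of the selection step, the sketch below shows a generic dynamic-programming 0/1 knapsack that picks base classifiers to maximize summed accuracy under a "redundancy" budget, with hypothetical values and costs.

```python
# Sketch: generic 0/1 knapsack selection of base classifiers. The "value" and
# "cost" of each classifier (accuracy vs. redundancy) and the budget are
# hypothetical; the paper's tailored formulation is not given in the abstract.
def knapsack(values, costs, budget):
    n = len(values)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if costs[i - 1] <= b:
                best[i][b] = max(best[i][b],
                                 best[i - 1][b - costs[i - 1]] + values[i - 1])
    # backtrack to recover the selected classifier indices
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

accuracy = [72, 68, 75, 70, 66]      # value of each base classifier (%)
redundancy = [3, 2, 4, 3, 1]         # cost against a diversity budget
print(knapsack(accuracy, redundancy, budget=7))
```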

  7. Translational Targeted Proteomics Profiling of Mitochondrial Energy Metabolic Pathways in Mouse and Human Samples

    NARCIS (Netherlands)

    Wolters, Justina C.; Ciapaite, Jolita; van Eunen, Karen; Niezen-Koning, Klary E.; Matton, Alix; Porte, Robert J.; Horvatovich, Peter; Bakker, Barbara M.; Bischoff, Rainer; Permentier, Hjalmar P.

    Absolute measurements of protein abundance are important in the understanding of biological processes and the precise computational modeling of biological pathways. We developed targeted LC-MS/MS assays in the selected reaction monitoring (SRM) mode to quantify over 50 mitochondrial proteins in a

  8. Microbial profiling of cpn60 universal target sequences in artificial mixtures of vaginal bacteria sampled by nylon swabs or self-sampling devices under different storage conditions.

    Science.gov (United States)

    Schellenberg, John J; Oh, Angela Yena; Hill, Janet E

    2017-05-01

    The vaginal microbiome is increasingly characterized by deep sequencing of universal genes. However, there are relatively few studies of how different specimen collection and sample storage and processing influence these molecular profiles. Here, we evaluate molecular microbial community profiles of samples collected using the HerSwab™ self-sampling device, compared to nylon swabs and under different storage conditions. In order to minimize technical variation, mixtures of 11 common vaginal bacteria in simulated vaginal fluid medium were sampled and DNA extracts prepared for massively parallel sequencing of the cpn60 universal target (UT). Three artificial mixtures imitating commonly observed vaginal microbiome profiles were easily distinguished and proportion of sequence reads correlated with the estimated proportion of the organism added to the artificial mixtures. Our results indicate that cpn60 UT amplicon sequencing quantifies the proportional abundance of member organisms in these artificial communities regardless of swab type or storage conditions, although some significant differences were observed between samples that were stored frozen and thawed prior to DNA extraction, compared to extractions from samples stored at room temperature for up to 7 days. Our results indicate that an on-the-market device developed for infectious disease diagnostics may be appropriate for vaginal microbiome profiling, an approach that is increasingly facilitated by rapidly dropping deep sequencing costs. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. The effects of composition, temperature and sample size on the sintering of chem-prep high field varistors.

    Energy Technology Data Exchange (ETDEWEB)

    Garino, Terry J.

    2007-09-01

    The sintering behavior of Sandia chem-prep high field varistor materials was studied using techniques including in situ shrinkage measurements, optical and scanning electron microscopy and x-ray diffraction. A thorough literature review of phase behavior, sintering and microstructure in Bi2O3-ZnO varistor systems is included. The effects of Bi2O3 content (from 0.25 to 0.56 mol%) and of sodium doping level (0 to 600 ppm) on the isothermal densification kinetics were determined between 650 and 825 C. At ≥750 C samples with ≥0.41 mol% Bi2O3 have very similar densification kinetics, whereas samples with ≤0.33 mol% begin to densify only after a period of hours at low temperatures. The effect of the sodium content was greatest at ~700 C for standard 0.56 mol% Bi2O3 and was greater in samples with 0.30 mol% Bi2O3 than for those with 0.56 mol%. Sintering experiments on samples of differing size and shape found that densification decreases and mass loss increases with increasing surface area to volume ratio. However, these two effects have different causes: the enhancement in densification as samples increase in size appears to be caused by a low oxygen internal atmosphere that develops, whereas the mass loss is due to the evaporation of bismuth oxide. In situ XRD experiments showed that the bismuth is initially present as an oxycarbonate that transforms to metastable β-Bi2O3 by 400 C. At ~650 C, coincident with the onset of densification, the cubic binary phase Bi38ZnO58 forms and remains stable to >800 C, indicating that a eutectic liquid does not form during normal varistor sintering (~730 C). Finally, the formation and morphology of bismuth oxide phase regions that form on the varistor surfaces during slow cooling were studied.

  10. Cone-beam CT-based delineation of stereotactic lung targets: the influence of image modality and target size on interobserver variability.

    Science.gov (United States)

    Altorjai, Gabriela; Fotina, Irina; Lütgendorf-Caucig, Carola; Stock, Markus; Pötter, Richard; Georg, Dietmar; Dieckmann, Karin

    2012-02-01

    It is generally agreed that the safe implementation of stereotactic body radiotherapy requires image guidance. The aim of this work was to assess interobserver variability in the delineation of lung lesions on cone-beam CT (CBCT) images compared with CT-based contouring for adaptive stereotactic body radiotherapy. The influence of target size was also evaluated. Eight radiation oncologists delineated gross tumor volumes in 12 patient cases (non-small cell lung cancer I-II or solitary metastasis) on planning CTs and on CBCTs. Cases were divided into two groups with tumor diameters of less than (Group A) or more than 2 cm (Group B). Comparison of mean volumes delineated by all observers and range and coefficient of variation were reported for each case and image modality. Interobserver variability was assessed by means of standard error of measurement, conformity index (CI), and its generalized observer-independent approach. The variance between single observers on CT and CBCT images was measured via interobserver reliability coefficient. Interobserver variability on CT images was 17% with 0.79 reliability, compared with 21% variability on CBCT and 0.76 reliability. On both image modalities, values of the intraobserver reliability coefficient (0.99 for CT and 0.97 for CBCT) indicated high reproducibility of results. In general, lower interobserver agreement was observed for small lesions (CI(genA) = 0.62 ± 0.06 vs. CI(genB) = 0.70 ± 0.03, p < 0.05). The analysis of single patient cases revealed that presence of spicules, diffuse infiltrations, proximity of the tumors to the vessels and thoracic wall, and respiration motion artifacts presented the main sources of the variability. Interobserver variability for Stage I-II non-small cell lung cancer and lung metastasis was slightly higher on CBCT compared with CT. Absence of significant differences in interobserver variability suggests that CBCT imaging provides an effective tool for tumor localization, and image
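
    As an illustration of the conformity measures mentioned above, the sketch below computes a generalized conformity index over several observers' binary delineation masks as the sum of pairwise intersections divided by the sum of pairwise unions; the masks are toy data, and the paper's exact CI(gen) definition may differ in detail.

```python
# Sketch: a generalized conformity index across multiple observers' delineations,
# computed as the sum of pairwise intersections over the sum of pairwise unions.
# The toy masks below stand in for voxelized GTV contours.
import numpy as np
from itertools import combinations

def conformity_index_gen(masks):
    inter = sum(np.logical_and(a, b).sum() for a, b in combinations(masks, 2))
    union = sum(np.logical_or(a, b).sum() for a, b in combinations(masks, 2))
    return inter / union

rng = np.random.default_rng(1)
base = np.zeros((50, 50), bool)
base[15:35, 15:35] = True                       # a toy "tumor" contour
observers = [np.roll(base, (int(rng.integers(-3, 4)), int(rng.integers(-3, 4))),
                     axis=(0, 1)) for _ in range(8)]
print(f"CI_gen = {conformity_index_gen(observers):.2f}")
```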

  11. Lot quality assurance sampling for monitoring coverage and quality of a targeted condom social marketing programme in traditional and non-traditional outlets in India

    Science.gov (United States)

    Piot, Bram; Navin, Deepa; Krishnan, Nattu; Bhardwaj, Ashish; Sharma, Vivek; Marjara, Pritpal

    2010-01-01

    Objectives This study reports on the results of a large-scale targeted condom social marketing campaign in and around areas where female sex workers are present. The paper also describes the method that was used for the routine monitoring of condom availability in these sites. Methods The lot quality assurance sampling (LQAS) method was used for the assessment of the geographical coverage and quality of coverage of condoms in target areas in four states and along selected national highways in India, as part of Avahan, the India AIDS initiative. Results A significant general increase in condom availability was observed in the intervention area between 2005 and 2008. High coverage rates were gradually achieved through an extensive network of pharmacies and particularly of non-traditional outlets, whereas traditional outlets were instrumental in providing large volumes of condoms. Conclusion LQAS is seen as a valuable tool for the routine monitoring of the geographical coverage and of the quality of delivery systems of condoms and of health products and services in general. With a relatively small sample size, easy data collection procedures and simple analytical methods, it was possible to inform decision-makers regularly on progress towards coverage targets. PMID:20167732
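
    LQAS classifies each lot (here, a supervision area of outlets) by counting how many of n sampled outlets meet the criterion and comparing the count with a decision rule d. The sketch below searches for a small (n, d) pair that keeps both misclassification risks under 10% for hypothetical upper and lower coverage thresholds; these thresholds and risk levels are assumptions, not the programme's actual design values.

```python
# Sketch: finding an LQAS sample size n and decision rule d that keep both
# misclassification risks below 10%, for hypothetical coverage thresholds
# (classify a lot as "good" if >= d of n sampled outlets stock condoms).
from scipy.stats import binom

def lqas_rule(p_upper=0.80, p_lower=0.50, alpha=0.10, beta=0.10, n_max=60):
    for n in range(5, n_max + 1):
        for d in range(1, n + 1):
            # alpha risk: a truly good lot (coverage p_upper) fails the rule
            a = binom.cdf(d - 1, n, p_upper)
            # beta risk: a truly poor lot (coverage p_lower) passes the rule
            b = 1 - binom.cdf(d - 1, n, p_lower)
            if a <= alpha and b <= beta:
                return n, d, a, b
    return None

print(lqas_rule())   # smallest (n, d) with both risks controlled
```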

  12. Lot quality assurance sampling for monitoring coverage and quality of a targeted condom social marketing programme in traditional and non-traditional outlets in India.

    Science.gov (United States)

    Piot, Bram; Mukherjee, Amajit; Navin, Deepa; Krishnan, Nattu; Bhardwaj, Ashish; Sharma, Vivek; Marjara, Pritpal

    2010-02-01

    This study reports on the results of a large-scale targeted condom social marketing campaign in and around areas where female sex workers are present. The paper also describes the method that was used for the routine monitoring of condom availability in these sites. The lot quality assurance sampling (LQAS) method was used for the assessment of the geographical coverage and quality of coverage of condoms in target areas in four states and along selected national highways in India, as part of Avahan, the India AIDS initiative. A significant general increase in condom availability was observed in the intervention area between 2005 and 2008. High coverage rates were gradually achieved through an extensive network of pharmacies and particularly of non-traditional outlets, whereas traditional outlets were instrumental in providing large volumes of condoms. LQAS is seen as a valuable tool for the routine monitoring of the geographical coverage and of the quality of delivery systems of condoms and of health products and services in general. With a relatively small sample size, easy data collection procedures and simple analytical methods, it was possible to inform decision-makers regularly on progress towards coverage targets.

  13. Reversible phospholipid nanogels for deoxyribonucleic acid fragment size determinations up to 1500 base pairs and integrated sample stacking.

    Science.gov (United States)

    Durney, Brandon C; Bachert, Beth A; Sloane, Hillary S; Lukomski, Slawomir; Landers, James P; Holland, Lisa A

    2015-06-23

    Phospholipid additives are a cost-effective medium to separate deoxyribonucleic acid (DNA) fragments and possess a thermally-responsive viscosity. This provides a mechanism to easily create and replace a highly viscous nanogel in a narrow bore capillary with only a 10°C change in temperature. Preparations composed of dimyristoyl-sn-glycero-3-phosphocholine (DMPC) and 1,2-dihexanoyl-sn-glycero-3-phosphocholine (DHPC) self-assemble, forming structures such as nanodisks and wormlike micelles. Factors that influence the morphology of a particular DMPC-DHPC preparation include the concentration of lipid in solution, the temperature, and the ratio of DMPC and DHPC. It has previously been established that an aqueous solution containing 10% phospholipid with a ratio of [DMPC]/[DHPC]=2.5 separates DNA fragments with nearly single base resolution for DNA fragments up to 500 base pairs in length, but beyond this size the resolution decreases dramatically. A new DMPC-DHPC medium is developed to effectively separate and size DNA fragments up to 1500 base pairs by decreasing the total lipid concentration to 2.5%. A 2.5% phospholipid nanogel generates a resolution of 1% of the DNA fragment size up to 1500 base pairs. This increase in the upper size limit is accomplished using commercially available phospholipids at an even lower material cost than is achieved with the 10% preparation. The separation additive is used to evaluate size markers ranging between 200 and 1500 base pairs in order to distinguish invasive strains of Streptococcus pyogenes and Aspergillus species by harnessing differences in gene sequences of collagen-like proteins in these organisms. For the first time, a reversible stacking gel is integrated in a capillary sieving separation by utilizing the thermally-responsive viscosity of these self-assembled phospholipid preparations. A discontinuous matrix is created that is composed of a cartridge of highly viscous phospholipid assimilated into a separation matrix

  14. Transgender Population Size in the United States: a Meta-Regression of Population-Based Probability Samples.

    Science.gov (United States)

    Meerwijk, Esther L; Sevelius, Jae M

    2017-02-01

    Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as "gray" literature, through an Internet search. We limited the search to 2006 through 2016. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask about transgender identity may account for residual heterogeneity in our models. Public health implications. Under- or nonrepresentation
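
    As a sketch of the extrapolation step described above, the code below fits an inverse-variance weighted regression of prevalence on survey year and predicts the 2016 value; the survey estimates and standard errors are simulated placeholders, not the 12 surveys pooled in the paper, and the fixed-weight fit omits the random-effects component of a full meta-regression.

```python
# Sketch: inverse-variance weighted regression of transgender prevalence (%) on
# survey year, then extrapolation to 2016. Data points are simulated toy values.
import numpy as np

years = np.array([2007, 2009, 2011, 2012, 2013, 2014, 2015], float)
prev = np.array([0.19, 0.23, 0.28, 0.31, 0.33, 0.36, 0.38])   # % of adults (toy)
se = np.array([0.05, 0.05, 0.04, 0.04, 0.03, 0.03, 0.03])     # standard errors (toy)

X = np.column_stack([np.ones_like(years), years - 2007])       # centre the year
W = np.diag(1.0 / se ** 2)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ prev)            # weighted least squares
print(f"slope = {beta[1]:.3f} % per year")
print(f"extrapolated 2016 prevalence = {beta[0] + beta[1] * (2016 - 2007):.2f} %")
```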

  15. Transgender Population Size in the United States: a Meta-Regression of Population-Based Probability Samples

    Science.gov (United States)

    Sevelius, Jae M.

    2017-01-01

    Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask

  16. An iterative and targeted sampling design informed by habitat suitability models for detecting focal plant species over extensive areas.

    Science.gov (United States)

    Wang, Ophelia; Zachmann, Luke J; Sesnie, Steven E; Olsson, Aaryn D; Dickson, Brett G

    2014-01-01

    Prioritizing areas for management of non-native invasive plants is critical, as invasive plants can negatively impact plant community structure. Extensive and multi-jurisdictional inventories are essential to prioritize actions aimed at mitigating the impact of invasions and changes in disturbance regimes. However, previous work devoted little effort to devising sampling methods sufficient to assess the scope of multi-jurisdictional invasion over extensive areas. Here we describe a large-scale sampling design that used species occurrence data, habitat suitability models, and iterative and targeted sampling efforts to sample five species and satisfy two key management objectives: 1) detecting non-native invasive plants across previously unsampled gradients, and 2) characterizing the distribution of non-native invasive plants at landscape to regional scales. Habitat suitability models of five species were based on occurrence records and predictor variables derived from topography, precipitation, and remotely sensed data. We stratified and established field sampling locations according to predicted habitat suitability and phenological, substrate, and logistical constraints. Across previously unvisited areas, we detected at least one of our focal species on 77% of plots. In turn, we used detections from 2011 to improve habitat suitability models and sampling efforts in 2012, as well as additional spatial constraints to increase detections. These modifications resulted in a 96% detection rate at plots. The range of habitat suitability values that identified highly and less suitable habitats and their environmental conditions corresponded to field detections with mixed levels of agreement. Our study demonstrated that an iterative and targeted sampling framework can address sampling bias, reduce time costs, and increase detections. Other studies can extend the sampling framework to develop methods in other ecosystems to provide detection data. The sampling methods

  17. An iterative and targeted sampling design informed by habitat suitability models for detecting focal plant species over extensive areas.

    Directory of Open Access Journals (Sweden)

    Ophelia Wang

    Full Text Available Prioritizing areas for management of non-native invasive plants is critical, as invasive plants can negatively impact plant community structure. Extensive and multi-jurisdictional inventories are essential to prioritize actions aimed at mitigating the impact of invasions and changes in disturbance regimes. However, previous work devoted little effort to devising sampling methods sufficient to assess the scope of multi-jurisdictional invasion over extensive areas. Here we describe a large-scale sampling design that used species occurrence data, habitat suitability models, and iterative and targeted sampling efforts to sample five species and satisfy two key management objectives: (1) detecting non-native invasive plants across previously unsampled gradients, and (2) characterizing the distribution of non-native invasive plants at landscape to regional scales. Habitat suitability models of five species were based on occurrence records and predictor variables derived from topography, precipitation, and remotely sensed data. We stratified and established field sampling locations according to predicted habitat suitability and phenological, substrate, and logistical constraints. Across previously unvisited areas, we detected at least one of our focal species on 77% of plots. In turn, we used detections from 2011 to improve habitat suitability models and sampling efforts in 2012, as well as additional spatial constraints to increase detections. These modifications resulted in a 96% detection rate at plots. The range of habitat suitability values that identified highly and less suitable habitats and their environmental conditions corresponded to field detections with mixed levels of agreement. Our study demonstrated that an iterative and targeted sampling framework can address sampling bias, reduce time costs, and increase detections. Other studies can extend the sampling framework to develop methods in other ecosystems to provide detection data. The

  18. atpE gene as a new useful specific molecular target to quantify Mycobacterium in environmental samples

    Science.gov (United States)

    2013-01-01

    Background The environment is the likely source of many pathogenic mycobacterial species but detection of mycobacteria by bacteriological tools is generally difficult and time-consuming. Consequently, several molecular targets based on the sequences of housekeeping genes, non-functional RNA and structural ribosomal RNAs have been proposed for the detection and identification of mycobacteria in clinical or environmental samples. While certain of these targets were proposed as specific for this genus, most are prone to false positive results in complex environmental samples that include related, but distinct, bacterial genera. Nowadays the increased number of sequenced genomes and the availability of software for genomic comparison provide tools to develop novel, mycobacteria-specific targets, and the associated molecular probes and primers. Consequently, we conducted an in silico search for proteins exclusive to Mycobacterium spp. genomes in order to design sensitive and specific molecular targets. Results Among the 3989 predicted proteins from M. tuberculosis H37Rv, only 11 proteins showed 80% to 100% of similarity with Mycobacterium spp. genomes, and less than 50% of similarity with genomes of closely related Corynebacterium, Nocardia and Rhodococcus genera. Based on DNA sequence alignments, we designed primer pairs and a probe that specifically detect the atpE gene of mycobacteria, as verified by quantitative real-time PCR on a collection of mycobacteria and non-mycobacterial species. The real-time PCR method we developed was successfully used to detect mycobacteria in tap water and lake samples. Conclusions The results indicate that this real-time PCR method targeting the atpE gene can serve for highly specific detection and precise quantification of Mycobacterium spp. in environmental samples. PMID:24299240
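
    Quantification with such a real-time PCR assay typically runs through a standard curve of Cq against log10 target copies; the sketch below shows that arithmetic with hypothetical calibration and sample Cq values, not data from the study.

```python
# Sketch: absolute quantification from a qPCR standard curve (Cq vs log10 copies).
# Calibration Cq values and the unknowns are hypothetical toy data; only the
# standard-curve arithmetic is illustrated.
import numpy as np

log10_copies = np.array([6, 5, 4, 3, 2], float)            # dilution series standards
cq_standards = np.array([17.1, 20.5, 23.9, 27.3, 30.8])     # measured Cq (toy data)

slope, intercept = np.polyfit(log10_copies, cq_standards, 1)
efficiency = 10 ** (-1 / slope) - 1                          # amplification efficiency
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

cq_unknowns = np.array([22.4, 26.0])                         # environmental samples
copies = 10 ** ((cq_unknowns - intercept) / slope)
print("estimated atpE copies:", copies.round(0))
```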

  19. The Examination of Model Fit Indexes with Different Estimation Methods under Different Sample Sizes in Confirmatory Factor Analysis

    Directory of Open Access Journals (Sweden)

    Ayfer SAYIN

    2016-12-01

    Full Text Available In scale adaptation studies, and for establishing cross-validity during scale development, confirmatory factor analysis is conducted. Confirmatory factor analysis, a multivariate statistical technique, can be estimated with various parameter estimation methods and relies on several fit indexes to evaluate model fit. In this study, the model fit indexes used in confirmatory factor analysis are examined under different parameter estimation methods and different sample sizes. For this purpose, responses of 60, 100, 250, 500 and 1000 students who participated in the PISA 2012 program were drawn for the two-dimensional "thoughts on the importance of mathematics" construct. Estimations were based on the maximum likelihood (ML), unweighted least squares (ULS) and generalized least squares (GLS) methods. As a result, it was found that the model fit indexes were affected by these conditions, although some fit indexes were affected less than others. Based on these analyses, suggestions were made for the choice of estimation method and fit indexes.

  20. Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.

    Science.gov (United States)

    Kasza, J; Hemming, K; Hooper, R; Matthews, Jns; Forbes, A B

    2017-01-01

    Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken account of in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
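
    A simplified way to see why the decay matters is to look at the variance of a contrast between two cluster-period means: the between-period covariance term shrinks as the decay and the period separation grow. The sketch below uses illustrative parameter values and does not reproduce the full stepped-wedge GLS variance used in the paper.

```python
# Sketch: variance of the difference between two cluster-period means when the
# between-period intra-cluster correlation decays as rho * r**|t - s|. Parameter
# values (total variance, ICC, cluster-period size) are illustrative only.
def var_period_contrast(sigma2=1.0, rho=0.05, m=30, decay=1.0, lag=1):
    var_mean = sigma2 * (1 + (m - 1) * rho) / m      # Var of one cluster-period mean
    cov_means = sigma2 * rho * decay ** lag           # Cov of means 'lag' periods apart
    return 2 * var_mean - 2 * cov_means

for decay in (1.0, 0.8, 0.5):                         # decay = 1.0 is constant correlation
    print(decay, round(var_period_contrast(decay=decay, lag=3), 4))
```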

  1. Delineamento experimental e tamanho de amostra para alface cultivada em hidroponia Experimental design and sample size for hydroponic lettuce crop

    Directory of Open Access Journals (Sweden)

    Valéria Schimitz Marodim

    2000-10-01

    Full Text Available This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa) grown under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Soilless/Hydroponic Crops of the Plant Science Department of the Federal University of Santa Maria and was based on plant weight data. The results showed that, for lettuce grown hydroponically on fibre-cement benches with six channels, the appropriate experimental design is randomised blocks when the experimental unit is a strip transversal to the bench channels, and completely randomised when the bench is the experimental unit; for plant weight, the sample size is 40 plants for a confidence-interval half-width equal to 5% of the mean (d = 5%) and 7 plants for d = 20%.
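
    The sample sizes quoted above follow from requiring the confidence-interval half-width to be at most d% of the mean; the sketch below searches for the smallest n meeting that criterion. The coefficient of variation (16%) is a hypothetical value, since the abstract does not report it, so the output only approximates the reported 40 and 7 plants.

```python
# Sketch: smallest sample size whose 95% CI half-width is at most d% of the mean,
# given a coefficient of variation. The CV of 16% is an assumed placeholder.
from math import sqrt
from scipy.stats import t

def n_for_halfwidth(cv_percent, d_percent, conf=0.95, n_max=1000):
    for n in range(2, n_max + 1):
        halfwidth = t.ppf(1 - (1 - conf) / 2, n - 1) * cv_percent / sqrt(n)
        if halfwidth <= d_percent:
            return n
    return None

print(n_for_halfwidth(16, 5))    # tens of plants for a tight interval
print(n_for_halfwidth(16, 20))   # only a handful for a loose interval
```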

  2. FTRIFS biosensor based on double layer porous silicon as a LC detector for target molecule screening from complex samples.

    Science.gov (United States)

    Shang, Yunling; Zhao, Weijie; Xu, Erchao; Tong, Changlun; Wu, Jianmin

    2010-01-15

    Post-column identification of target compounds in complex samples is one of the major tasks in drug screening and discovery. In this work, we demonstrated that double layer porous silicon (PSi) attached with affinity ligand could serve as a sensing element for post-column detection of target molecule by Fourier transformed reflectometric interference spectroscopy (FTRIFS), in which trypsin and its inhibitor were used as the model probe-target system. The double layer porous silicon was prepared by electrical etching with a current density of 500 mA/cm(2), followed by 167 mA/cm(2). Optical measurements indicated that trypsin could infiltrate into the outer porous layer (porosity 83.6%), but was excluded by the bottom layer (porosity 52%). The outer layer, attached with trypsin by standard amino-silane and glutaraldehyde chemistry, could specifically bind with the trypsin inhibitor, acting as a sample channel, while the bottom layer served as a reference signal channel. The binding event between the attached trypsin and trypsin inhibitor