WorldWideScience

Sample records for reduce sample size

  1. A Simulated Experiment for Sampling Soil Microarthropods to Reduce Sample Size

    OpenAIRE

    Tamura, Hiroshi

    1987-01-01

    An experiment was conducted to examine the possibility of reducing the necessary sample size in a quantitative survey of soil microarthropods, using soybeans instead of animals. An artificially created, intensely aggregated distribution pattern of soybeans was easily transformed into a random pattern by stirring the substrate (soil in a large cardboard box). This enabled the necessary sample size to be greatly reduced without sacrificing statistical reliability. A new practical met...

  2. Reduced Sampling Size with Nanopipette for Tapping-Mode Scanning Probe Electrospray Ionization Mass Spectrometry Imaging.

    Science.gov (United States)

    Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi

    2016-01-01

    Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing sampling size is an important goal for high resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. On the other hand, ionization efficiency increased with decreasing solvent flow rate. Our results indicate the compatibility between a reduced sampling area and the ionization efficiency using a nanopipette. MSI of micropatterns of ink on glass and polymer substrates was also demonstrated.

  3. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2017-10-03

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0 : ES = 0 versus alternative hypotheses H1 : ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^S. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.

  4. Sample size for beginners.

    OpenAIRE

    Florey, C D

    1993-01-01

    The common failure to include an estimation of sample size in grant proposals imposes a major handicap on applicants, particularly for those proposing work in any aspect of research in the health services. Members of research committees need evidence that a study is of adequate size for there to be a reasonable chance of a clear answer at the end. A simple illustrated explanation of the concepts in determining sample size should encourage the faint hearted to pay more attention to this increa...

  5. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  6. Measuring proteins with greater speed and resolution while reducing sample size

    OpenAIRE

    Hsieh, Vincent H.; Wyatt, Philip J.

    2017-01-01

    A multi-angle light scattering (MALS) system, combined with chromatographic separation, directly measures the absolute molar mass, size and concentration of the eluate species. The measurement of these crucial properties in solution is essential in basic macromolecular characterization and all research and production stages of bio-therapeutic products. We developed a new MALS methodology that has overcome the long-standing, stubborn barrier to microliter-scale peak volumes and achieved the hi...

  7. Determination of Sample Size

    OpenAIRE

    Naing, Nyi Nyi

    2003-01-01

    Determining the basic minimum required sample size ‘n’ needed to measure a particular characteristic of a particular population is of particular importance. This article highlights the determination of an appropriate sample size for estimating population parameters.

  8. Use of High-Frequency In-Home Monitoring Data May Reduce Sample Sizes Needed in Clinical Trials.

    Directory of Open Access Journals (Sweden)

    Hiroko H Dodge

    walking speed collected at baseline, 262 subjects are required. Similarly for computer use, 26 subjects are required.Individual-specific thresholds of low functional performance based on high-frequency in-home monitoring data distinguish trajectories of MCI from NC and could substantially reduce sample sizes needed in dementia prevention RCTs.

  9. Sample size for beginners.

    Science.gov (United States)

    Florey, C D

    1993-05-01

    The common failure to include an estimation of sample size in grant proposals imposes a major handicap on applicants, particularly for those proposing work in any aspect of research in the health services. Members of research committees need evidence that a study is of adequate size for there to be a reasonable chance of a clear answer at the end. A simple illustrated explanation of the concepts in determining sample size should encourage the faint hearted to pay more attention to this increasingly important aspect of grantsmanship.

  10. Ethics and sample size.

    Science.gov (United States)

    Bacchetti, Peter; Wolf, Leslie E; Segal, Mark R; McCulloch, Charles E

    2005-01-15

    The belief is widespread that studies are unethical if their sample size is not large enough to ensure adequate power. The authors examine how sample size influences the balance that determines the ethical acceptability of a study: the balance between the burdens that participants accept and the clinical or scientific value that a study can be expected to produce. The average projected burden per participant remains constant as the sample size increases, but the projected study value does not increase as rapidly as the sample size if it is assumed to be proportional to power or inversely proportional to confidence interval width. This implies that the value per participant declines as the sample size increases and that smaller studies therefore have more favorable ratios of projected value to participant burden. The ethical treatment of study participants therefore does not require consideration of whether study power is less than the conventional goal of 80% or 90%. Lower power does not make a study unethical. The analysis addresses only ethical acceptability, not optimality; large studies may be desirable for other than ethical reasons.

  11. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  12. Selectively Reduced Posterior Corpus Callosum Size in a Population-Based Sample of Young Adults Born with Low Birth Weight

    DEFF Research Database (Denmark)

    Aukland, S M; Westerhausen, R; Plessen, K J

    2011-01-01

    BACKGROUND AND PURPOSE: Several studies suggest that VLBW is associated with a reduced CC size later in life. We aimed to clarify this in a prospective, controlled study of 19-year-olds, hypothesizing that those with LBWs had smaller subregions of CC than the age-matched controls, even after...

  13. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Background: Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings: In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions: Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
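
    The reductions reported above can be approximated with a standard compound-symmetry argument: if the analysed outcome is the mean of k repeated measurements with common pairwise correlation ρ, the required sample size shrinks by the factor (1 + (k − 1)ρ)/k relative to a single measurement. The sketch below assumes that formula and an illustrative ρ of about 0.7; it is not the authors' own computation, but it lands close to the 15%, 21%, and 24% reductions quoted in the abstract.

```python
# Sketch (not the authors' computation): required n when the outcome is the
# mean of k repeated measurements with common pairwise correlation rho
# (compound symmetry). The variance of that mean, and hence the required
# sample size, shrinks by (1 + (k - 1) * rho) / k relative to one measurement.

def reduction_factor(k: int, rho: float) -> float:
    """Ratio n_k / n_1 for the mean of k equally correlated measurements."""
    return (1 + (k - 1) * rho) / k

rho = 0.7  # illustrative between-measurement correlation (assumed, not from the paper)
for k in (2, 3, 4, 5):
    f = reduction_factor(k, rho)
    print(f"k={k}: required n is {f:.3f} x n_1 ({(1 - f) * 100:.1f}% reduction)")
# With rho = 0.7 this prints reductions of 15.0%, 20.0%, 22.5% and 24.0%,
# in the same range as the 15%, 21% and 24% reported for 2-4 measures above.
```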

  14. How to calculate sample size and why.

    Science.gov (United States)

    Kim, Jeehyoung; Seo, Bong Soo

    2013-09-01

    Calculating the sample size is essential to reduce the cost of a study and to prove the hypothesis effectively. Referring to pilot studies and previous research studies, we can choose a proper hypothesis and simplify the studies by using a website or Microsoft Excel sheet that contains formulas for calculating sample size in the beginning stage of the study. There are numerous formulas for calculating the sample size for complicated statistics and studies, but most studies can use basic calculating methods for sample size calculation.
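
    Since this record stresses that most studies can rely on basic calculating methods, here is a minimal sketch of the standard normal-approximation formula for comparing two means; it is a generic textbook formula, not one taken from the cited article.

```python
# Generic normal-approximation sample size for comparing two independent means
# (textbook formula, not from the cited article):
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2,  d = (mu1 - mu2) / sigma
from math import ceil
from scipy.stats import norm

def n_per_group_two_means(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z ** 2 / d ** 2)

print(n_per_group_two_means(d=0.5))  # about 63 per group for a medium effect
```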

  15. Sample size determination for the fluctuation experiment.

    Science.gov (United States)

    Zheng, Qi

    2017-01-01

    The Luria-Delbrück fluctuation experiment protocol is increasingly employed to determine microbial mutation rates in the laboratory. An important question raised at the planning stage is "How many cultures are needed?" For over 70 years sample sizes have been determined either by intuition or by following published examples where sample sizes were chosen intuitively. This paper proposes a practical method for determining the sample size. The proposed method relies on existing algorithms for computing the expected Fisher information under two commonly used mutant distributions. The role of partial plating in reducing sample size is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  17. Conducting EQ-5D Valuation Studies in Resource-Constrained Countries: The Potential Use of Shrinkage Estimators to Reduce Sample Size.

    Science.gov (United States)

    Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M

    2018-01-01

    Resource-constrained countries have difficulty conducting large EQ-5D valuation studies, which limits their ability to conduct cost-utility analyses using a value set specific to their own population. When estimates of similar but related parameters are available, shrinkage estimators reduce uncertainty and yield estimators with smaller mean square error (MSE). We hypothesized that health utilities based on shrinkage estimators can reduce MSE and mean absolute error (MAE) when compared to country-specific health utilities. We conducted a simulation study (1,000 iterations) based on the observed means and standard deviations (or standard errors) of the EQ-5D-3L valuation studies from 14 countries. In each iteration, the simulated data were fitted with the model based on the country-specific functional form of the scoring algorithm to create country-specific health utilities ("naïve" estimators). Shrinkage estimators were calculated based on the empirical Bayes estimation methods. The performance of shrinkage estimators was compared with those of the naïve estimators over a range of different sample sizes based on MSE, MAE, mean bias, standard errors and the width of confidence intervals. The MSE of the shrinkage estimators was smaller than the MSE of the naïve estimators on average, as theoretically predicted. Importantly, the MAE of the shrinkage estimators was also smaller than the MAE of the naïve estimators on average. In addition, the reduction in MSE with the use of shrinkage estimators did not substantially increase bias. The degree of reduction in uncertainty by shrinkage estimators is most apparent in valuation studies with small sample size. Health utilities derived from shrinkage estimation allow valuation studies with small sample size to "borrow strength" from other valuation studies to reduce uncertainty.

  18. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…

  19. Machine Learning classification of MRI features of Alzheimer's disease and mild cognitive impairment subjects to reduce the sample size in clinical trials.

    Science.gov (United States)

    Escudero, Javier; Zajicek, John P; Ifeachor, Emmanuel

    2011-01-01

    There is a need for objective tools to help clinicians to diagnose Alzheimer's Disease (AD) early and accurately and to conduct Clinical Trials (CTs) with fewer patients. Magnetic Resonance Imaging (MRI) is a promising AD biomarker but no single MRI feature is optimal for all disease stages. Machine Learning classification can address these challenges. In this study, we have investigated the classification of MRI features from AD, Mild Cognitive Impairment (MCI), and control subjects from ADNI with four techniques. The highest accuracy rates for the classification of controls against ADs and MCIs were 89.2% and 72.7%, respectively. Moreover, we used the classifiers to select AD and MCI subjects who are most likely to decline for inclusion in hypothetical CTs. Using the hippocampal volume as an outcome measure, we found that the required group sizes for the CTs were reduced from 197 to 117 AD patients and from 366 to 215 MCI subjects.

  20. Additional Considerations in Determining Sample Size.

    Science.gov (United States)

    Levin, Joel R.; Subkoviak, Michael J.

    Levin's (1975) sample-size determination procedure for completely randomized analysis of variance designs is extended to designs in which information on antecedent or blocking variables is considered. In particular, a researcher's choice of designs is framed in terms of determining the respective sample sizes necessary to detect specified contrasts…

  1. Determining Sample Size for Research Activities

    Science.gov (United States)

    Krejcie, Robert V.; Morgan, Daryle W.

    1970-01-01

    A formula for determining sample size, which originally appeared in 1960, has lacked a table for easy reference. This article supplies a graph of the function and a table of values which permits easy determination of the size of sample needed to be representative of a given population. (DG)

  2. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning...

  3. Biostatistics Series Module 5: Determining Sample Size.

    Science.gov (United States)

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 - β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the
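
    As an illustration of the determinants discussed in this module, the sketch below recomputes a per-group sample size over a grid of α, power, and standardized effect size values, using a generic two-means normal-approximation formula (an assumption for illustration, not the module's own worked example).

```python
# Illustration only: how n responds to alpha, power and effect size,
# using the generic two-means normal-approximation formula (assumed here,
# not taken from the module itself).
from itertools import product
from math import ceil
from scipy.stats import norm

def n_two_means(d, alpha, power):
    return ceil(2 * (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / d ** 2)

for alpha, power, d in product((0.05, 0.01), (0.80, 0.90), (0.8, 0.5, 0.2)):
    print(f"alpha={alpha:<5} power={power:<4} d={d:<4} -> n per group = {n_two_means(d, alpha, power)}")
# Smaller alpha, higher power and smaller effect sizes each increase n,
# matching the qualitative statements in the abstract.
```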

  4. Basic Statistical Concepts for Sample Size Estimation

    Directory of Open Access Journals (Sweden)

    Vithal K Dhulkhed

    2008-01-01

    For grant proposals the investigator has to include an estimation of sample size. The sample should be large enough that there are sufficient data to reliably answer the research question being addressed by the study. The investigator has to involve the statistician at the very planning stage of the study, and to have a meaningful dialogue with the statistician every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations to help the researcher assimilate the fundamentals.

  5. Corticosteroid injections reduce size of rheumatoid nodules

    NARCIS (Netherlands)

    Baan, H.; Baan, H.; Haagsma, C.J.; van de Laar, Mart A F J

    2006-01-01

    Background: Symptomatic rheumatoid nodules are frequently surgically treated. Injection with steroids might be an alternative treatment. Patients and methods: To determine whether injection with triamcinolon acetonide reduces the size of rheumatoid nodules, we randomized twenty patients with

  6. Particle size distribution in ground biological samples.

    Science.gov (United States)

    Koglin, D; Backhaus, F; Schladot, J D

    1997-05-01

    Modern trace and retrospective analysis of Environmental Specimen Bank (ESB) samples requires surplus material prepared and characterized as reference materials. Before the biological samples can be analyzed and stored for long periods at cryogenic temperatures, the materials have to be pre-crushed. As a second step, a milling and homogenization procedure has to follow. For this preparation, a grinding device is cooled with liquid nitrogen to a temperature of -190 degrees C. An important condition for homogeneous samples is that at least 90% of the particles should be smaller than 200 microns. In the German ESB the particle size distribution of the processed material is determined by means of a laser particle sizer. The decrease in particle sizes of deer liver and bream muscle after different grinding procedures, as well as the consequences of ultrasonic treatment of the sample before particle size measurement, have been investigated.

  7. Determining sample size for tree utilization surveys

    Science.gov (United States)

    Stanley J. Zarnoch; James W. Bentley; Tony G. Johnson

    2004-01-01

    The U.S. Department of Agriculture Forest Service has conducted many studies to determine what proportion of the timber harvested in the South is actually utilized. This paper describes the statistical methods used to determine required sample sizes for estimating utilization ratios for a required level of precision. The data used are those for 515 hardwood and 1,557...

  8. Improving your Hypothesis Testing: Determining Sample Sizes.

    Science.gov (United States)

    Luftig, Jeffrey T.; Norton, Willis P.

    1982-01-01

    This article builds on an earlier discussion of the importance of the Type II error (beta) and power to the hypothesis testing process (CE 511 484), and illustrates the methods by which sample size calculations should be employed so as to improve the research process. (Author/CT)

  9. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p ...). Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
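
    The record above describes fitting an inverse power law to a small observed learning curve with nonlinear weighted least squares. A minimal sketch of that idea using scipy is shown below; the functional form acc(n) = a − b·n^(−c), the toy data, and the weighting are assumptions for illustration and are not the authors' code or data.

```python
# Sketch of weighted inverse-power-law learning-curve extrapolation.
# Assumed form acc(n) = a - b * n**(-c); toy data; not the authors' code.
import numpy as np
from scipy.optimize import curve_fit

def inv_power_law(n, a, b, c):
    return a - b * np.power(n, -c)

# toy learning-curve points: (training-set size, accuracy, std. dev. across folds)
sizes = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
acc = np.array([0.71, 0.76, 0.80, 0.83, 0.85])
sd = np.array([0.05, 0.04, 0.03, 0.02, 0.015])

params, _ = curve_fit(inv_power_law, sizes, acc, p0=[0.9, 1.0, 0.5],
                      sigma=sd, absolute_sigma=True)
a, b, c = params
print(f"fitted asymptote a={a:.3f}, b={b:.3f}, c={c:.3f}")
print("predicted accuracy at n=5000:", round(float(inv_power_law(5000, a, b, c)), 3))
```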

  10. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m², and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height, and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed by the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1 - α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods, and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for these traits (number of nodes, plant height, and stem diameter) in the different evaluation periods and between seasons.

  11. Determining sample size when assessing mean equivalence.

    Science.gov (United States)

    Asberg, Arne; Solem, Kristine B; Mikkelsen, Gustav

    2014-11-01

    When we want to assess whether two analytical methods are equivalent, we could test if the difference between the mean results is within the specification limits of 0 ± an acceptance criterion. Testing the null hypothesis of zero difference is less interesting, and so is the sample size estimation based on testing that hypothesis. Power function curves for equivalence testing experiments are not widely available. In this paper we present power function curves to help decide on the number of measurements when testing equivalence between the means of two analytical methods. Computer simulation was used to calculate the probability that the 90% confidence interval for the difference between the means of two analytical methods would exceed the specification limits of 0 ± 1, 0 ± 2 or 0 ± 3 analytical standard deviations (SDa), respectively. The probability of getting a nonequivalence alarm increases with increasing difference between the means when the difference is well within the specification limits. The probability increases with decreasing sample size and with smaller acceptance criteria. We may need at least 40-50 measurements with each analytical method when the specification limits are 0 ± 1 SDa, and 10-15 and 5-10 when the specification limits are 0 ± 2 and 0 ± 3 SDa, respectively. The power function curves provide information of the probability of false alarm, so that we can decide on the sample size under less uncertainty.
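
    The probabilities described in this abstract can be reproduced in outline with a small Monte Carlo simulation: draw n results from each analytical method, form the 90% confidence interval for the difference between the means, and count how often it crosses the specification limits of 0 ± k·SDa. The sketch below makes simplifying assumptions (normal data, known analytical SD, zero true bias) and is not the authors' simulation code.

```python
# Monte-Carlo sketch in the spirit of the abstract: probability that the 90% CI
# for the difference between two method means crosses 0 +/- k analytical SDs.
# Assumes normal data, known SD, equal n and zero true bias (not the authors' code).
import numpy as np

rng = np.random.default_rng(1)

def prob_nonequivalence_alarm(n, k, sd=1.0, true_bias=0.0, sims=20000):
    z90 = 1.645                              # two-sided 90% confidence interval
    half_width = z90 * sd * np.sqrt(2 / n)   # known-SD approximation
    alarms = 0
    for _ in range(sims):
        diff = rng.normal(true_bias, sd, n).mean() - rng.normal(0.0, sd, n).mean()
        if diff - half_width < -k * sd or diff + half_width > k * sd:
            alarms += 1
    return alarms / sims

for n in (10, 20, 40, 60):
    print(n, round(prob_nonequivalence_alarm(n, k=1), 3))
# The alarm probability falls as n grows; with limits of +/- 1 SDa it only
# becomes small once n reaches roughly 40-50, consistent with the abstract.
```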

  12. Sample size calculations for skewed distributions.

    Science.gov (United States)

    Cundill, Bonnie; Alexander, Neal D E

    2015-04-02

    Sample size calculations should correspond to the intended method of analysis. Nevertheless, for non-normal distributions, they are often done on the basis of normal approximations, even when the data are to be analysed using generalized linear models (GLMs). For the case of comparison of two means, we use GLM theory to derive sample size formulae, with particular cases being the negative binomial, Poisson, binomial, and gamma families. By simulation we estimate the performance of normal approximations, which, via the identity link, are special cases of our approach, and for common link functions such as the log. The negative binomial and gamma scenarios are motivated by examples in hookworm vaccine trials and insecticide-treated materials, respectively. Calculations on the link function (log) scale work well for the negative binomial and gamma scenarios examined and are often superior to the normal approximations. However, they have little advantage for the Poisson and binomial distributions. The proposed method is suitable for sample size calculations for comparisons of means of highly skewed outcome variables.
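
    A worked sketch of the link-scale idea for one of the families mentioned above: for negative binomial outcomes with means μ0 and μ1 and dispersion k, the delta method gives Var(log of a sample mean) ≈ (1/μ + 1/k)/n, leading to a per-arm sample size of (z_{1−α/2} + z_{1−β})²·(V0 + V1)/log(μ1/μ0)² with V_i = 1/μ_i + 1/k. This follows the general approach described in the abstract but is a reconstruction; the paper's exact formulae may differ.

```python
# Reconstruction (may differ in detail from the paper): log-link sample size
# for comparing two negative binomial means; Poisson is the limit 1/k -> 0.
#   n per arm = (z_{1-alpha/2} + z_{1-beta})^2 * (V0 + V1) / log(mu1/mu0)^2,
#   V_i = 1/mu_i + 1/k  (delta-method variance of log(sample mean)).
from math import ceil, log
from scipy.stats import norm

def n_negbin_log_link(mu0, mu1, k, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    v0, v1 = 1 / mu0 + 1 / k, 1 / mu1 + 1 / k
    return ceil(z ** 2 * (v0 + v1) / log(mu1 / mu0) ** 2)

# hookworm-style example (assumed numbers): control mean count 2.0,
# 50% reduction under treatment, dispersion k = 0.5
print(n_negbin_log_link(mu0=2.0, mu1=1.0, k=0.5))   # ~90 per arm
print(n_negbin_log_link(mu0=2.0, mu1=1.0, k=1e9))   # near-Poisson: ~25 per arm
```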

  13. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey.
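
    The core of the multiplier method is the estimator N̂ = M / P̂, where M is the known count of distributed objects or service users and P̂ is the survey proportion reporting receipt. A simplified delta-method sketch (treating M as known without error and inflating the variance of P̂ by a design effect for the respondent-driven sampling survey) is given below; the authors' full procedure also accounts for uncertainty in M, so this is an illustration rather than their method.

```python
# Simplified sketch of multiplier-method precision (illustration only; the
# authors' procedure also handles uncertainty in M). Assumes M known and
# Var(P_hat) = DE * P * (1 - P) / n for the RDS survey.
from math import ceil
from scipy.stats import norm

def survey_n_for_relative_precision(P, rel_precision, design_effect=2.0, conf=0.95):
    """n so that the CI half-width of N_hat = M / P_hat is about rel_precision * N.
    Delta method: CV(N_hat) ~ CV(P_hat) = sqrt(DE * (1 - P) / (n * P))."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil(design_effect * z ** 2 * (1 - P) / (P * rel_precision ** 2))

# e.g. 20% of surveyed women report receiving the distributed object and we
# want roughly +/-25% relative precision on the population size estimate
print(survey_n_for_relative_precision(P=0.20, rel_precision=0.25))  # ~492
```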

  14. Defining sample size and sampling strategy for dendrogeomorphic rockfall reconstructions

    Science.gov (United States)

    Morel, Pauline; Trappmann, Daniel; Corona, Christophe; Stoffel, Markus

    2015-05-01

    Optimized sampling strategies have been recently proposed for dendrogeomorphic reconstructions of mass movements with a large spatial footprint, such as landslides, snow avalanches, and debris flows. Such guidelines have, by contrast, been largely missing for rockfalls and cannot be transposed owing to the sporadic nature of this process and the occurrence of individual rocks and boulders. Based on a data set of 314 European larch (Larix decidua Mill.) trees (i.e., 64 trees/ha), growing on an active rockfall slope, this study bridges this gap and proposes an optimized sampling strategy for the spatial and temporal reconstruction of rockfall activity. Using random extractions of trees, iterative mapping, and a stratified sampling strategy based on an arbitrary selection of trees, we investigate subsets of the full tree-ring data set to define optimal sample size and sampling design for the development of frequency maps of rockfall activity. Spatially, our results demonstrate that the sampling of only 6 representative trees per ha can be sufficient to yield a reasonable mapping of the spatial distribution of rockfall frequencies on a slope, especially if the oldest and most heavily affected individuals are included in the analysis. At the same time, however, sampling such a low number of trees risks causing significant errors especially if nonrepresentative trees are chosen for analysis. An increased number of samples therefore improves the quality of the frequency maps in this case. Temporally, we demonstrate that at least 40 trees/ha are needed to obtain reliable rockfall chronologies. These results will facilitate the design of future studies, decrease the cost-benefit ratio of dendrogeomorphic studies and thus will permit production of reliable reconstructions with reasonable temporal efforts.

  15. Sample size estimation and sampling techniques for selecting a representative sample

    OpenAIRE

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect ...

  16. Sample size matters: investigating the effect of sample size on a logistic regression debris flow susceptibility model

    Science.gov (United States)

    Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

    2013-06-01

    Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial datasets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In view of these results, we

  17. Sample size matters: investigating the effect of sample size on a logistic regression susceptibility model for debris flows

    Science.gov (United States)

    Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

    2014-02-01

    Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable and reproducible results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and they approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial data sets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In

  18. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables)/standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied for health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques, because the results of the study can be generalized to the target population.

  19. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    Science.gov (United States)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size

  20. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
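
    The 44%, 56%, and 61% figures quoted above can be reproduced by a short calculation if one assumes compound symmetry: with common correlation ρ, an ANCOVA on the mean of k follow-up measures adjusted for baseline has variance factor (1 + (k − 1)ρ)/k − ρ² relative to a two-sample t-test on a single measure, and the conservative (worst-case) factor is the maximum of this expression over ρ. The sketch below uses that reconstruction; it is consistent with the abstract's numbers but is not necessarily the paper's exact derivation.

```python
# Reconstruction under compound symmetry (not necessarily the paper's exact
# derivation): worst-case variance factor for baseline-adjusted ANCOVA on the
# mean of k follow-up measures, f(rho) = (1 + (k - 1) * rho) / k - rho**2,
# maximised over the unknown correlation rho in [0, 1].
import numpy as np

rho = np.linspace(0.0, 1.0, 100001)
for k in (2, 3, 4):
    worst = ((1 + (k - 1) * rho) / k - rho ** 2).max()
    print(f"k={k}: worst-case factor {worst:.3f} -> at least "
          f"{(1 - worst) * 100:.0f}% reduction vs. the two-sample t-test")
# Prints factors of about 0.562, 0.444 and 0.391, i.e. reductions of roughly
# 44%, 56% and 61%, matching the figures quoted in the abstract.
```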

  1. Sample Size Growth with an Increasing Number of Comparisons

    Directory of Open Access Journals (Sweden)

    Chi-Hong Tseng

    2012-01-01

    An appropriate sample size is crucial for the success of many studies that involve a large number of comparisons. Sample size formulas for testing multiple hypotheses are provided in this paper. They can be used to determine the sample sizes required to provide adequate power while controlling familywise error rate or false discovery rate, to derive the growth rate of sample size with respect to an increasing number of comparisons or decrease in effect size, and to assess reliability of study designs. It is demonstrated that practical sample sizes can often be achieved even when adjustments for a large number of comparisons are made as in many genomewide studies.
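
    To see why adjusting for many comparisons need not blow up the sample size, the sketch below recomputes a per-group n with a Bonferroni-adjusted significance level α/m for m simultaneous comparisons, using the generic two-means formula. Bonferroni is used here only as the simplest familywise-error adjustment; the paper's own formulas for familywise error rate and false discovery rate control are more refined.

```python
# Illustration only: growth of per-group n with the number of comparisons m,
# using a Bonferroni-adjusted alpha (simpler than the paper's own formulas).
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha, power=0.80):
    return ceil(2 * (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / d ** 2)

d, alpha = 0.5, 0.05
for m in (1, 10, 100, 1000, 10000):
    print(f"m={m:>5}: n per group = {n_per_group(d, alpha / m)}")
# n rises from about 63 to roughly 234 as m goes from 1 to 10,000 -- roughly
# logarithmic growth, which is why genomewide adjustments can remain practical.
```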

  2. An expert system for the calculation of sample size.

    Science.gov (United States)

    Ebell, M H; Neale, A V; Hodgkins, B J

    1994-06-01

    Calculation of sample size is a useful technique for researchers who are designing a study, and for clinicians who wish to interpret research findings. The elements that must be specified to calculate the sample size include alpha, beta, Type I and Type II errors, 1- and 2-tail tests, confidence intervals, and confidence levels. A computer software program written by one of the authors (MHE), Sample Size Expert, facilitates sample size calculations. The program uses an expert system to help inexperienced users calculate sample sizes for analytic and descriptive studies. The software is available at no cost from the author or electronically via several on-line information services.

  3. Inventions on reducing keyboard size: A TRIZ based analysis

    OpenAIRE

    Mishra, Umakant

    2013-01-01

    A conventional computer keyboard consists of as many as 101 keys. The keyboard has several sections, such as a text entry section, a navigation section, and a numeric keypad, each containing several keys. The size of the keyboard is a major inconvenience for portable computers, as it prevents them from being carried easily. Thus there are certain circumstances which compel reducing the size of a keyboard. However, reducing the size of a keyboard leads to several problems. A reduced size keyboard ma...

  4. Optimal flexible sample size design with robust power.

    Science.gov (United States)

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Planning Educational Research: Determining the Necessary Sample Size.

    Science.gov (United States)

    Olejnik, Stephen F.

    1984-01-01

    This paper discusses the sample size problem and four factors affecting its solution: significance level, statistical power, analysis procedure, and effect size. The interrelationship between these factors is discussed and demonstrated by calculating minimal sample size requirements for a variety of research conditions. (Author)

  6. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  7. Sample size determination in medical and surgical research.

    Science.gov (United States)

    Flikkema, Robert M; Toledo-Pereyra, Luis H

    2012-02-01

    One of the most critical yet frequently misunderstood principles of research is sample size determination. Obtaining an inadequate sample is a serious problem that can invalidate an entire study. Without an extensive background in statistics, the seemingly simple question of selecting a sample size can become quite a daunting task. This article aims to give a researcher with no background in statistics the basic tools needed for sample size determination. After reading this article, the researcher will be aware of all the factors involved in a power analysis and will be able to work more effectively with the statistician when determining sample size. This work also reviews the power of a statistical hypothesis, as well as how to estimate the effect size of a research study. These are the two key components of sample size determination. Several examples will be considered throughout the text.

  8. A review of software for sample size determination.

    Science.gov (United States)

    Dattalo, Patrick

    2009-09-01

    The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities.

  9. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    The key components in planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation and explains that the calculation differs for different study designs. It describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
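
    To complement the continuous-outcome case, here is a minimal sketch of the usual normal-approximation formula for a two-arm randomized controlled trial whose primary outcome is a proportion; it is a generic textbook formula and is not taken from the article itself.

```python
# Generic per-arm sample size for comparing two proportions in a two-arm RCT
# (textbook normal approximation, not taken from the article):
#   n per arm = (z_{1-alpha/2} + z_{1-beta})^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2
from math import ceil
from scipy.stats import norm

def n_per_arm_two_proportions(p1, p2, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

# e.g. expecting response to improve from 40% under control to 60% under treatment
print(n_per_arm_two_proportions(0.40, 0.60))  # about 95 per arm with this formula
```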

  10. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    Science.gov (United States)

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches on how to use the tables are also discussed. PMID:27891446

  11. Determination of the optimal sample size for a clinical trial accounting for the population size

    Science.gov (United States)

    Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2016-01-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision‐theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two‐arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. PMID:27184938

  12. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  13. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.

  14. Estimating population size with correlated sampling unit estimates

    Science.gov (United States)

    David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

    2003-01-01

    Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...

  15. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    sample size for case–control association studies is discussed. Materials and methods: parameter settings. We consider a candidate locus with two alleles A and a, where A is putatively associated with the disease status (increasing ... Keywords: sample size; association tests; genotype relative risk; power; autism. Journal of ...

  16. Understanding Power and Rules of Thumb for Determining Sample Sizes

    OpenAIRE

    Betsy L. Morgan; Carmen R. Wilson Van Voorhis

    2007-01-01

    This article addresses the definition of power and its relationship to Type I and Type II errors. We discuss the relationship between sample size and power. Finally, we offer statistical rules of thumb guiding the selection of sample sizes large enough for sufficient power to detect differences, associations, chi-square, and factor analyses.

  17. Understanding Power and Rules of Thumb for Determining Sample Sizes

    Directory of Open Access Journals (Sweden)

    Betsy L. Morgan

    2007-09-01

    Full Text Available This article addresses the definition of power and its relationship to Type I and Type II errors. We discuss the relationship between sample size and power. Finally, we offer statistical rules of thumb guiding the selection of sample sizes large enough for sufficient power to detect differences, associations, chi-square, and factor analyses.

  18. Sample Size and Statistical Power Calculation in Genetic Association Studies

    Directory of Open Access Journals (Sweden)

    Eun Pyo Hong

    2012-06-01

    Full Text Available A sample size with sufficient statistical power is critical to the success of genetic association studies to detect causal genes of human complex diseases. Genome-wide association studies require much larger sample sizes to achieve an adequate statistical power. We estimated the statistical power with increasing numbers of markers analyzed and compared the sample sizes that were required in case-control studies and case-parent studies. We computed the effective sample size and statistical power using Genetic Power Calculator. An analysis using a larger number of markers requires a larger sample size. Testing a single-nucleotide polymorphism (SNP marker requires 248 cases, while testing 500,000 SNPs and 1 million markers requires 1,206 cases and 1,255 cases, respectively, under the assumption of an odds ratio of 2, 5% disease prevalence, 5% minor allele frequency, complete linkage disequilibrium (LD, 1:1 case/control ratio, and a 5% error rate in an allelic test. Under a dominant model, a smaller sample size is required to achieve 80% power than other genetic models. We found that a much lower sample size was required with a strong effect size, common SNP, and increased LD. In addition, studying a common disease in a case-control study of a 1:4 case-control ratio is one way to achieve higher statistical power. We also found that case-parent studies require more samples than case-control studies. Although we have not covered all plausible cases in study design, the estimates of sample size and statistical power computed under various assumptions in this study may be useful to determine the sample size in designing a population-based genetic association study.
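
    As a rough sketch of where figures like the 248 cases above come from, the code below converts a control minor allele frequency and an allelic odds ratio into case and control allele frequencies and applies a standard two-proportion formula. This simplified calculation, which ignores disease prevalence and treats the 2n alleles per subject as independent, is an assumption-laden approximation rather than the Genetic Power Calculator's exact model, and it lands in the same ballpark rather than reproducing the quoted numbers exactly.

      from math import ceil
      from scipy.stats import norm

      def cases_for_allelic_test(maf, odds_ratio, alpha=0.05, power=0.8,
                                 case_control_ratio=1.0):
          # Per-group case count for a 1-df allelic test, treating the 2n
          # alleles per group as independent Bernoulli trials (a simplification).
          p0 = maf                                             # control allele frequency
          p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # implied case allele frequency
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          var = p1 * (1 - p1) + p0 * (1 - p0) / case_control_ratio
          alleles = z ** 2 * var / (p1 - p0) ** 2
          return ceil(alleles / 2)                             # two alleles per subject

      # OR = 2, MAF = 5%, 1:1 case/control ratio, single marker at alpha = 0.05
      print(cases_for_allelic_test(0.05, 2.0))   # -> roughly 250-260 cases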

  19. Considerations in determining sample size for pilot studies.

    Science.gov (United States)

    Hertzog, Melody A

    2008-04-01

    There is little published guidance concerning how large a pilot study should be. General guidelines, for example using 10% of the sample required for a full study, may be inadequate for aims such as assessment of the adequacy of instrumentation or providing statistical estimates for a larger study. This article illustrates how confidence intervals constructed around a desired or anticipated value can help determine the sample size needed. Samples ranging in size from 10 to 40 per group are evaluated for their adequacy in providing estimates precise enough to meet a variety of possible aims. General sample size guidelines by type of aim are offered.
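
    To make the idea concrete, the sketch below (with an assumed outcome SD of 10 units, not a value from the paper) shows how the half-width of a 95% confidence interval around a pilot estimate of a group mean shrinks across the 10 to 40 per-group range evaluated in the article.

      from math import sqrt
      from scipy.stats import t

      s = 10.0                          # anticipated SD of the outcome (assumed)
      for n in (10, 20, 30, 40):
          half_width = t.ppf(0.975, n - 1) * s / sqrt(n)
          print(f"n = {n:2d} per group: 95% CI half-width = +/-{half_width:.1f}")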

  20. Determining the sample size required for a community radon survey.

    Science.gov (United States)

    Chen, Jing; Tracy, Bliss L; Zielinski, Jan M; Moir, Deborah

    2008-04-01

    Radon measurements in homes and other buildings have been included in various community health surveys, often dealing with only a few hundred randomly sampled households. It would be interesting to know whether such a small sample size can adequately represent the radon distribution in a large community. An analysis of radon measurement data obtained from the Winnipeg case-control study with randomly sampled subsets of different sizes has shown that a sample size of one to several hundred can serve the survey purpose well.

  1. Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.

    Science.gov (United States)

    Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald

    2017-04-01

    The Mercury Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury in thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (exposure to space weathering processes), and proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and five size fractions: 60%) causes a 'flattening' of the spectrum, with reduced reflectance in the Reststrahlen Band region (RB) as much as 30% in comparison to samples that are dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift in the Christiansen Feature (CF - the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra mostly closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each

  2. Post-stratified estimation: with-in strata and total sample size recommendations

    Science.gov (United States)

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  3. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  4. Methods for sample size determination in cluster randomized trials.

    Science.gov (United States)

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
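
    As a minimal sketch of the simplest approach described above, the individually randomized sample size is computed first and then inflated by the usual design effect 1 + (m - 1)ρ; the cluster size, intracluster correlation, and effect values below are assumed for illustration only.

      from math import ceil
      from scipy.stats import norm

      def n_individually_randomized(delta, sd, alpha=0.05, power=0.8):
          # Per-arm n for a two-sample comparison of means (normal approximation)
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return 2 * (z * sd / delta) ** 2

      # Inflate by the design effect to account for randomization by cluster
      cluster_size, icc = 20, 0.05                      # assumed values
      design_effect = 1 + (cluster_size - 1) * icc
      n_per_arm = ceil(n_individually_randomized(delta=3, sd=10) * design_effect)
      print(n_per_arm, ceil(n_per_arm / cluster_size))  # subjects and clusters per arm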

  5. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
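
    This translates into a standard one-sample power calculation: with a COV of 25% and an allowable error of ±15% of the mean, the standardized effect size is 0.15/0.25 = 0.6, and the sketch below (using statsmodels as an assumed but widely available implementation, not the authors' own software) recovers a required n of about 24.

      from math import ceil
      from statsmodels.stats.power import TTestPower

      # Allowable error of 15% of the mean ED50 against a COV of 25%
      effect_size = 0.15 / 0.25
      n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                                   power=0.8, alternative='two-sided')
      print(ceil(n))   # ~24 subjects, matching the figure quoted above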

  6. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^{1/2}) or O(N*^{1/2}). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Experimental Analysis of Reduced-Sized Coplanar Waveguide Transmission Lines

    Science.gov (United States)

    Ponchak, George E.

    2002-01-01

    An experimental investigation of the use of capacitive loading of coplanar waveguides to reduce their line length and, thus, the size of monolithic microwave integrated circuits is presented. The reduced-size coplanar waveguides are compared to unloaded transmission lines and to lumped-element transmission line segments. The phase bandwidth, defined by a 2 percent error in S(sub 21), and the return loss bandwidth, defined by a return loss greater than 15 dB, of coplanar waveguides reduced from 0 to 90 percent are compared, and the insertion loss as a function of the size reduction is presented.

  8. Determining the effective sample size of a parametric prior.

    Science.gov (United States)

    Morita, Satoshi; Thall, Peter F; Müller, Peter

    2008-06-01

    We present a definition for the effective sample size of a parametric prior distribution in a Bayesian model, and propose methods for computing the effective sample size in a variety of settings. Our approach first constructs a prior chosen to be vague in a suitable sense, and updates this prior to obtain a sequence of posteriors corresponding to each of a range of sample sizes. We then compute a distance between each posterior and the parametric prior, defined in terms of the curvature of the logarithm of each distribution, and the posterior minimizing the distance defines the effective sample size of the prior. For cases where the distance cannot be computed analytically, we provide a numerical approximation based on Monte Carlo simulation. We provide general guidelines for application, illustrate the method in several standard cases where the answer seems obvious, and then apply it to some nonstandard settings.
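
    A simplified sketch of the idea for a Beta prior on a binomial proportion is given below: the curvature (negative second derivative) of the log prior density at the prior mean is matched against the curvature of posteriors built from a vague prior updated with m mean-like observations. The vague-prior mass c, the grid of m values, and the plug-in "expected data" shortcut are illustrative assumptions; the paper's full method averages over predictive datasets rather than plugging in the mean. For this conjugate case the answer essentially recovers the familiar a + b pseudo-observation count.

      import numpy as np

      def curvature(a, b, theta):
          # Negative second derivative of the log Beta(a, b) density at theta
          return (a - 1) / theta ** 2 + (b - 1) / (1 - theta) ** 2

      def effective_sample_size(a, b, c=1.0, m_max=200):
          theta = a / (a + b)                        # prior mean
          target = curvature(a, b, theta)
          dists = []
          for m in range(m_max + 1):
              a_post = c * theta + m * theta         # vague prior + expected data
              b_post = c * (1 - theta) + m * (1 - theta)
              dists.append(abs(curvature(a_post, b_post, theta) - target))
          return int(np.argmin(dists))

      print(effective_sample_size(10, 40))           # 49, close to a + b = 50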

  9. Effects of Mesh Size on Sieved Samples of Corophium volutator

    Science.gov (United States)

    Crewe, Tara L.; Hamilton, Diana J.; Diamond, Antony W.

    2001-08-01

    Corophium volutator (Pallas), gammaridean amphipods found on intertidal mudflats, are frequently collected in mud samples sieved on mesh screens. However, mesh sizes used vary greatly among studies, raising the possibility that sampling methods bias results. The effect of using different mesh sizes on the resulting size-frequency distributions of Corophium was tested by collecting Corophium from mud samples with 0·5 and 0·25 mm sieves. More than 90% of Corophium less than 2 mm long passed through the larger sieve. A significantly smaller, but still substantial, proportion of 2-2·9 mm Corophium (30%) was also lost. Larger size classes were unaffected by mesh size. Mesh size significantly changed the observed size-frequency distribution of Corophium, and effects varied with sampling date. It is concluded that a 0·5 mm sieve is suitable for studies concentrating on adults, but to accurately estimate Corophium density and size-frequency distributions, a 0·25 mm sieve must be used.

  10. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  11. Planning Longitudinal Field Studies: Considerations in Determining Sample Size.

    Science.gov (United States)

    St.Pierre, Robert G.

    1980-01-01

    Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)

  12. Investigating the impact of sample size on cognate detection

    OpenAIRE

    List, Johann-Mattis

    2013-01-01

    In historical linguistics, the problem of cognate detection is traditionally approached within the framework of the comparative method. Since the method is usually carried out manually, it is very flexible regarding its input parameters. However, while the number of languages and the selection of comparanda is not important for the successful application of the method, the sample size of the comparanda is. In order to shed light on the impact of sample size on cognat...

  13. Sample size requirements for training high-dimensional risk predictors.

    Science.gov (United States)

    Dobbin, Kevin K; Song, Xiao

    2013-09-01

    A common objective of biomarker studies is to develop a predictor of patient survival outcome. Determining the number of samples required to train a predictor from survival data is important for designing such studies. Existing sample size methods for training studies use parametric models for the high-dimensional data and cannot handle a right-censored dependent variable. We present a new training sample size method that is non-parametric with respect to the high-dimensional vectors, and is developed for a right-censored response. The method can be applied to any prediction algorithm that satisfies a set of conditions. The sample size is chosen so that the expected performance of the predictor is within a user-defined tolerance of optimal. The central method is based on a pilot dataset. To quantify uncertainty, a method to construct a confidence interval for the tolerance is developed. Adequacy of the size of the pilot dataset is discussed. An alternative model-based version of our method for estimating the tolerance when no adequate pilot dataset is available is presented. The model-based method requires a covariance matrix be specified, but we show that the identity covariance matrix provides adequate sample size when the user specifies three key quantities. Application of the sample size method to two microarray datasets is discussed.

  14. Sample Size Requirements for Traditional and Regression-Based Norms.

    Science.gov (United States)

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-04-01

    Test norms enable determining the position of an individual test taker in the group. The most frequently used approach to obtain test norms is traditional norming. Regression-based norming may be more efficient than traditional norming and is rapidly growing in popularity, but little is known about its technical properties. A simulation study was conducted to compare the sample size requirements for traditional and regression-based norming by examining the 95% interpercentile ranges for percentile estimates as a function of sample size, norming method, size of covariate effects on the test score, test length, and number of answer categories in an item. Provided the assumptions of the linear regression model hold in the data, for a subdivision of the total group into eight equal-size subgroups, we found that regression-based norming requires samples 2.5 to 5.5 times smaller than traditional norming. Sample size requirements are presented for each norming method, test length, and number of answer categories. We emphasize that additional research is needed to establish sample size requirements when the assumptions of the linear regression model are violated. © The Author(s) 2015.

  15. Mini-batch stochastic gradient descent with dynamic sample sizes

    OpenAIRE

    Metel, Michael R.

    2017-01-01

    We focus on solving constrained convex optimization problems using mini-batch stochastic gradient descent. Dynamic sample size rules are presented which ensure a descent direction with high probability. Empirical results from two applications show superior convergence compared to fixed sample implementations.
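
    The sketch below illustrates one common style of dynamic sample-size rule, a variance ("norm") test that grows the mini-batch whenever the sampled gradients are too noisy to give a descent direction with high probability. The least-squares problem, the test constant, and the batch-doubling rule are illustrative assumptions and not necessarily the exact rule proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n, d = 5000, 10
      X = rng.normal(size=(n, d))
      w_true = rng.normal(size=d)
      y = X @ w_true + 0.1 * rng.normal(size=n)

      def grad_per_sample(w, idx):
          # Per-sample least-squares gradients for the rows in idx
          r = X[idx] @ w - y[idx]
          return 2 * r[:, None] * X[idx]

      w = np.zeros(d)
      batch, lr, theta = 8, 0.01, 1.0          # initial batch size, step, test constant
      for step in range(500):
          idx = rng.choice(n, size=batch, replace=False)
          g = grad_per_sample(w, idx)
          g_mean = g.mean(axis=0)
          w -= lr * g_mean
          # Norm test: if the sampled gradients are too noisy relative to the
          # mean gradient, the descent direction is unreliable -> grow the batch.
          var_of_mean = g.var(axis=0, ddof=1).sum() / batch
          if var_of_mean > theta * np.dot(g_mean, g_mean):
              batch = min(2 * batch, n)

      print(batch, 0.5 * np.mean((X @ w - y) ** 2))   # final batch size and loss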

  16. Sample size formulae for the Bayesian continual reassessment method.

    Science.gov (United States)

    Cheung, Ying Kuen

    2013-01-01

    In the planning of a dose finding study, a primary design objective is to maintain high accuracy in terms of the probability of selecting the maximum tolerated dose. While numerous dose finding methods have been proposed in the literature, concrete guidance on sample size determination is lacking. With a motivation to provide quick and easy calculations during trial planning, we present closed form formulae for sample size determination associated with the use of the Bayesian continual reassessment method (CRM). We examine the sampling distribution of a nonparametric optimal design and exploit it as a proxy to empirically derive an accuracy index of the CRM using linear regression. We apply the formulae to determine the sample size of a phase I trial of PTEN-long in pancreatic cancer patients and demonstrate that the formulae give results very similar to simulation. The formulae are implemented by an R function 'getn' in the package 'dfcrm'. The results are developed for the Bayesian CRM and should be validated by simulation when used for other dose finding methods. The analytical formulae we propose give quick and accurate approximation of the required sample size for the CRM. The approach used to derive the formulae can be applied to obtain sample size formulae for other dose finding methods.

  17. Uncertainty of the sample size reduction step in pesticide residue analysis of large-sized crops.

    Science.gov (United States)

    Omeroglu, P Yolci; Ambrus, Á; Boyacioglu, D; Majzik, E Solymosne

    2013-01-01

    To estimate the uncertainty of the sample size reduction step, each unit in laboratory samples of papaya and cucumber was cut into four segments in longitudinal directions and two opposite segments were selected for further homogenisation while the other two were discarded. Jackfruit was cut into six segments in longitudinal directions, and all segments were kept for further analysis. To determine the pesticide residue concentrations in each segment, they were individually homogenised and analysed by chromatographic methods. One segment from each unit of the laboratory sample was drawn randomly to obtain 50 theoretical sub-samples with an MS Office Excel macro. The residue concentrations in a sub-sample were calculated from the weight of segments and the corresponding residue concentration. The coefficient of variation calculated from the residue concentrations of 50 sub-samples gave the relative uncertainty resulting from the sample size reduction step. The sample size reduction step, which is performed by selecting one longitudinal segment from each unit of the laboratory sample, resulted in relative uncertainties of 17% and 21% for field-treated jackfruits and cucumber, respectively, and 7% for post-harvest treated papaya. The results demonstrated that sample size reduction is an inevitable source of uncertainty in pesticide residue analysis of large-sized crops. The post-harvest treatment resulted in a lower variability because the dipping process leads to a more uniform residue concentration on the surface of the crops than does the foliar application of pesticides.
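
    The resampling step described above is straightforward to emulate. The sketch below builds 50 theoretical sub-samples by drawing one segment per unit and computes the coefficient of variation of the resulting weight-weighted residue concentrations; the segment weights and concentrations are synthetic stand-ins, since the paper's measured values are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)
      n_units, n_segments = 10, 4                      # units per laboratory sample

      # Synthetic segment weights (g) and residue concentrations (mg/kg);
      # within-unit variation mimics uneven surface deposition.
      weights = rng.uniform(80, 120, size=(n_units, n_segments))
      unit_mean = rng.lognormal(mean=0.0, sigma=0.3, size=n_units)
      conc = unit_mean[:, None] * rng.lognormal(0.0, 0.4, size=(n_units, n_segments))

      subsample_conc = []
      for _ in range(50):                              # 50 theoretical sub-samples
          pick = rng.integers(0, n_segments, size=n_units)     # one segment per unit
          w = weights[np.arange(n_units), pick]
          c = conc[np.arange(n_units), pick]
          subsample_conc.append(np.sum(w * c) / np.sum(w))     # weight-weighted residue

      cv = np.std(subsample_conc, ddof=1) / np.mean(subsample_conc)
      print(f"relative uncertainty of the size-reduction step = {100 * cv:.0f}%")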

  18. Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty.

    Science.gov (United States)

    Anderson, Samantha F; Kelley, Ken; Maxwell, Scott E

    2017-11-01

    The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.

  19. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  20. Survival-time statistics for sample space reducing stochastic processes.

    Science.gov (United States)

    Yadav, Avinash Chand

    2016-04-01

    Stochastic processes wherein the size of the state space is changing as a function of time offer models for the emergence of scale-invariant features observed in complex systems. I consider such a sample-space reducing (SSR) stochastic process that results in a random sequence of strictly decreasing integers {x(t)}, 0 ≤ t ≤ τ, with boundary conditions x(0) = N and x(τ) = 1. This model is shown to be exactly solvable: P_N(τ), the probability that the process survives for time τ, is analytically evaluated. In the limit of large N, the asymptotic form of this probability distribution is Gaussian, with mean and variance both varying logarithmically with system size: ⟨τ⟩ ∼ ln N and σ_τ² ∼ ln N. Correspondence can be made between survival-time statistics in the SSR process and record statistics of independent and identically distributed random variables.
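
    The process is simple to simulate, and a quick Monte Carlo check (a sketch with assumed replication counts) illustrates the logarithmic growth of both the mean and the variance of the survival time:

      import numpy as np

      rng = np.random.default_rng(0)

      def survival_time(N):
          # Steps until the strictly decreasing SSR sequence x(0) = N reaches 1,
          # with each new state drawn uniformly from {1, ..., x(t) - 1}.
          x, t = N, 0
          while x > 1:
              x = rng.integers(1, x)
              t += 1
          return t

      for N in (100, 1000, 10000):
          taus = [survival_time(N) for _ in range(20000)]
          print(N, np.mean(taus), np.var(taus), np.log(N))   # both grow like ln N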

  1. Sample size considerations for clinical research studies in nuclear cardiology.

    Science.gov (United States)

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
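
    For the simplest of the cases listed, a two-group comparison of a continuous endpoint, the calculation reduces to a standard two-sample t-test power analysis. The sketch below uses statsmodels (an assumed but widely available implementation, not software discussed in the article) with an illustrative standardized effect of 0.5.

      from math import ceil
      from statsmodels.stats.power import TTestIndPower

      # An expected between-group difference of 5 units with an SD of 10
      # gives a standardized effect size of 0.5.
      n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                                power=0.8, ratio=1.0,
                                                alternative='two-sided')
      print(ceil(n_per_group))   # ~64 subjects per group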

  2. Sample size for collecting germplasms–a polyploid model with ...

    Indian Academy of Sciences (India)

    Numerous expressions/results developed for germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to ...

  3. Sample size for collecting germplasms – a polyploid model with ...

    Indian Academy of Sciences (India)

    Unknown

    germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate.

  4. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  5. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  6. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance

    OpenAIRE

    Timothy M Morgan; Case, L. Douglas

    2013-01-01

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time.

  7. Sample Size Determinations for the Two Rater Kappa Statistic.

    Science.gov (United States)

    Flack, Virginia F.; And Others

    1988-01-01

    A method is presented for determining sample size that will achieve a pre-specified bound on confidence interval width for the interrater agreement measure "kappa." The same results can be used when a pre-specified power is desired for testing hypotheses about the value of kappa. (Author/SLD)

  8. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    Science.gov (United States)

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-11-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate from 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation.

  9. Mongoloid-Caucasoid Differences in Brain Size from Military Samples.

    Science.gov (United States)

    Rushton, J. Philippe; And Others

    1991-01-01

    Calculation of cranial capacities for the means from 4 Mongoloid and 20 Caucasoid samples (raw data from 57,378 individuals in 1978) found larger brain size for Mongoloids, a finding discussed in evolutionary terms. The conclusion is disputed by L. Willerman but supported by J. P. Rushton. (SLD)

  10. Sample size and power calculation for molecular biology studies.

    Science.gov (United States)

    Jung, Sin-Ho

    2010-01-01

    Sample size calculation is a critical procedure when designing a new biological study. In this chapter, we consider molecular biology studies generating huge dimensional data. Microarray studies are typical examples, so that we state this chapter in terms of gene microarray data, but the discussed methods can be used for design and analysis of any molecular biology studies involving high-dimensional data. In this chapter, we discuss sample size calculation methods for molecular biology studies when the discovery of prognostic molecular markers is performed by accurately controlling false discovery rate (FDR) or family-wise error rate (FWER) in the final data analysis. We limit our discussion to the two-sample case.

  11. Aerosol Sampling Bias from Differential Electrostatic Charge and Particle Size

    Science.gov (United States)

    Jayjock, Michael Anthony

    Lack of reliable epidemiological data on long term health effects of aerosols is due in part to inadequacy of sampling procedures and the attendant doubt regarding the validity of the concentrations measured. Differential particle size has been widely accepted and studied as a major potential biasing effect in the sampling of such aerosols. However, relatively little has been done to study the effect of electrostatic particle charge on aerosol sampling. The objective of this research was to investigate the possible biasing effects of differential electrostatic charge, particle size and their interaction on the sampling accuracy of standard aerosol measuring methodologies. Field studies were first conducted to determine the levels and variability of aerosol particle size and charge at two manufacturing facilities making acrylic powder. The field work showed that the particle mass median aerodynamic diameter (MMAD) varied by almost an order of magnitude (4-34 microns) while the aerosol surface charge was relatively stable (0.6-0.9 micro coulombs/m('2)). The second part of this work was a series of laboratory experiments in which aerosol charge and MMAD were manipulated in a 2('n) factorial design with the percentage of sampling bias for various standard methodologies as the dependent variable. The experiments used the same friable acrylic powder studied in the field work plus two size populations of ground quartz as a nonfriable control. Despite some ill conditioning of the independent variables due to experimental difficulties, statistical analysis has shown aerosol charge (at levels comparable to those measured in workroom air) is capable of having a significant biasing effect. Physical models consistent with the sampling data indicate that the level and bipolarity of the aerosol charge are determining factors in the extent and direction of the bias.

  12. Effects of sample size on KERNEL home range estimates

    Science.gov (United States)

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.

  13. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

    Directory of Open Access Journals (Sweden)

    Wei Lin Teoh

    Full Text Available Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.

  14. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological variation...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...

  15. Sample size determination for longitudinal designs with binary response.

    Science.gov (United States)

    Kapur, Kush; Bhaumik, Runa; Tang, X Charlene; Hur, Kwan; Reda, Domenic J; Bhaumik, Dulal K

    2014-09-28

    In this article, we develop appropriate statistical methods for determining the required sample size while comparing the efficacy of an intervention to a control with repeated binary response outcomes. Our proposed methodology incorporates the complexity of the hierarchical nature of underlying designs and provides solutions when varying attrition rates are present over time. We explore how the between-subject variability and attrition rates jointly influence the computation of sample size formula. Our procedure also shows how efficient estimation methods play a crucial role in power analysis. A practical guideline is provided when information regarding individual variance component is unavailable. The validity of our methods is established by extensive simulation studies. Results are illustrated with the help of two randomized clinical trials in the areas of contraception and insomnia. Copyright © 2014 John Wiley & Sons, Ltd.

  16. A power analysis for fidelity measurement sample size determination.

    Science.gov (United States)

    Stokes, Lynne; Allor, Jill H

    2016-03-01

    The importance of assessing fidelity has been emphasized recently with increasingly sophisticated definitions, assessment procedures, and integration of fidelity data into analyses of outcomes. Fidelity is often measured through observation and coding of instructional sessions either live or by video. However, little guidance has been provided about how to determine the number of observations needed to precisely measure fidelity. We propose a practical method for determining a reasonable sample size for fidelity data collection when fidelity assessment requires observation. The proposed methodology is based on consideration of the power of tests of the treatment effect on the outcome itself, as well as of the relationship between fidelity and outcome. It makes use of the methodology of probability sampling from a finite population, because the fidelity parameters of interest are estimated over a specific, limited time frame using a sample. For example, consider a fidelity measure defined as the number of minutes of exposure to a treatment curriculum during the 36 weeks of the study. In this case, the finite population is the 36 sessions, the parameter (number of minutes over the entire 36 sessions) is a total, and the sample is the observed sessions. Software for the sample size calculation is provided. (c) 2016 APA, all rights reserved.
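
    For the finite-population framing in the example above, a minimal sketch of the kind of calculation involved is shown below: a simple-random-sampling sample size with the finite population correction. The anticipated SD of per-session exposure and the desired precision are assumed values, and the paper's own procedure, which ties the sample size to the power of the fidelity-outcome tests, is more elaborate.

      from math import ceil
      from scipy.stats import norm

      def sessions_to_observe(N, sd, margin, conf=0.95):
          # Simple-random-sampling size for estimating the mean per-session
          # exposure to within +/- margin, then the finite population correction.
          z = norm.ppf(0.5 + conf / 2)
          n0 = (z * sd / margin) ** 2
          return ceil(n0 / (1 + (n0 - 1) / N))

      # 36 sessions in the population, anticipated SD of 8 minutes, +/- 3 minutes
      print(sessions_to_observe(N=36, sd=8, margin=3))   # -> 16 sessions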

  17. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    Effects of sample size on the second magnetization peak (SMP) in Bi2Sr2CaCuO8+δ crystals are ... a termination of the measured transition line at Tl, typically 17–20 K (see figure 1). The obscuring and eventual disappearance of the SMP with decreasing temperatures has been ...

  18. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    OpenAIRE

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height a...

  19. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
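
    A sketch of the "etiological" case is given below, built directly from the closed-form variance of an adjusted regression coefficient, Var(beta_hat) ≈ sigma² / (n · Var(X) · (1 − R²_X)), where R²_X measures how well the other covariates predict X. The numerical inputs are illustrative assumptions rather than values from the article.

      from math import ceil
      from scipy.stats import norm

      def n_for_coefficient(beta, resid_sd, x_sd, r2_x_other=0.0,
                            alpha=0.05, power=0.8):
          # n such that a coefficient of size `beta` is detectable, using
          # Var(beta_hat) ~ resid_sd^2 / (n * x_sd^2 * (1 - R^2 of X on the
          # other covariates in the model)).
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return ceil((z * resid_sd) ** 2
                      / (beta ** 2 * x_sd ** 2 * (1 - r2_x_other)))

      # Slope of 2 outcome units per unit of X, residual SD 10, SD(X) = 1,
      # X moderately collinear (R^2 = 0.3) with the adjustment covariates
      print(n_for_coefficient(beta=2, resid_sd=10, x_sd=1, r2_x_other=0.3))  # -> 281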

  20. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  1. MetSizeR: selecting the optimal sample size for metabolomic studies using an analysis based approach

    OpenAIRE

    Nyamundanda, Gift; Gormley, Isobel Claire; Fan, Yue; Gallagher, William M.; Brennan, Lorraine

    2013-01-01

    Background: Determining sample sizes for metabolomic experiments is important but due to the complexity of these experiments, there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, currently existing sample size estimation approaches which rely on pilot data can not be applied. Results: In this article, an analysis based approach called MetSizeR is developed to estimate sample size for metabolomic experime...

  2. Sample size reduction in groundwater surveys via sparse data assimilation

    KAUST Repository

    Hussain, Z.

    2013-04-01

    In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this, we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.

  3. It's in the Sample: The Effects of Sample Size and Sample Diversity on the Breadth of Inductive Generalization

    Science.gov (United States)

    Lawson, Chris A.; Fisher, Anna V.

    2011-01-01

    Developmental studies have provided mixed evidence with regard to the question of whether children consider sample size and sample diversity in their inductive generalizations. Results from four experiments with 105 undergraduates, 105 school-age children (M = 7.2 years), and 105 preschoolers (M = 4.9 years) showed that preschoolers made a higher…

  4. Reduced genome size of Helicobacter pylori originating from East Asia.

    Science.gov (United States)

    Dong, Quan-Jiang; Wang, Li-Li; Tian, Zi-Bing; Yu, Xin-Jun; Jia, Sheng-Jiao; Xuan, Shi-Ying

    2014-05-21

    Helicobacter pylori (H. pylori), a major pathogen colonizing the human stomach, shows great genetic variation. Comparative analysis of strains from different H. pylori populations revealed that the genome size of strains from East Asia decreased to 1.60 Mbp, which is significantly smaller than that of strains from Europe or Africa. In parallel with the genome reduction, the number of protein-coding genes decreased, and the guanine-cytosine content was lowered to 38.9%. Elimination of non-essential genes by mutations is likely to be a major cause of the genome reduction. Bacteria with a smaller genome expend less energy. Thus, H. pylori strains from East Asia may have proliferation and growth advantages over those from Western countries. This could result in an enhanced capacity for bacterial spread. Therefore, the reduced genome size potentially contributes to the high prevalence of H. pylori in East Asia.

  5. Red maca (Lepidium meyenii) reduced prostate size in rats

    Directory of Open Access Journals (Sweden)

    Rubio Julio

    2005-01-01

    Full Text Available Abstract Background Epidemiological studies have found that consumption of cruciferous vegetables is associated with a reduced risk of prostate cancer. This effect seems to be due to aromatic glucosinolate content. Glucosinolates are known to have both antiproliferative and proapoptotic actions. Maca is a cruciferous plant cultivated in the highlands of Peru. The absolute content of glucosinolates in Maca hypocotyls is relatively higher than that reported in other cruciferous crops. Therefore, Maca may have proapoptotic and anti-proliferative effects in the prostate. Methods Male rats treated with or without aqueous extracts of three ecotypes of Maca (Yellow, Black and Red) were analyzed to determine the effect on ventral prostate weight, epithelial height and duct luminal area. Effects on serum testosterone (T) and estradiol (E2) levels were also assessed. Besides, the effect of Red Maca on prostate was analyzed in rats treated with testosterone enanthate (TE). Results Red Maca, but neither Yellow nor Black Maca, significantly reduced ventral prostate size in rats. Serum T or E2 levels were not affected by any of the ecotypes of Maca assessed. Red Maca also prevented the prostate weight increase induced by TE treatment. Red Maca administered for 42 days reduced ventral prostatic epithelial height. TE increased ventral prostatic epithelial height and duct luminal area. These increases by TE were reduced after treatment with Red Maca for 42 days. Histology pictures in rats treated with Red Maca plus TE were similar to controls. Phytochemical screening showed that aqueous extract of Red Maca has alkaloids, steroids, tannins, saponins, and cardiotonic glycosides. The IR spectra of the three ecotypes of Maca in the 3800-650 cm^-1 region had 7 peaks representing 7 functional chemical groups. Highest peak values were observed for Red Maca, intermediate values for Yellow Maca and low values for Black Maca. These functional groups correspond among others to benzyl

  6. Red maca (Lepidium meyenii) reduced prostate size in rats

    Science.gov (United States)

    Gonzales, Gustavo F; Miranda, Sara; Nieto, Jessica; Fernández, Gilma; Yucra, Sandra; Rubio, Julio; Yi, Pedro; Gasco, Manuel

    2005-01-01

    Background Epidemiological studies have found that consumption of cruciferous vegetables is associated with a reduced risk of prostate cancer. This effect seems to be due to aromatic glucosinolate content. Glucosinolates are known to have both antiproliferative and proapoptotic actions. Maca is a cruciferous plant cultivated in the highlands of Peru. The absolute content of glucosinolates in Maca hypocotyls is relatively higher than that reported in other cruciferous crops. Therefore, Maca may have proapoptotic and anti-proliferative effects in the prostate. Methods Male rats treated with or without aqueous extracts of three ecotypes of Maca (Yellow, Black and Red) were analyzed to determine the effect on ventral prostate weight, epithelial height and duct luminal area. Effects on serum testosterone (T) and estradiol (E2) levels were also assessed. In addition, the effect of Red Maca on the prostate was analyzed in rats treated with testosterone enanthate (TE). Results Red Maca, but neither Yellow nor Black Maca, significantly reduced ventral prostate size in rats. Serum T or E2 levels were not affected by any of the ecotypes of Maca assessed. Red Maca also prevented the prostate weight increase induced by TE treatment. Red Maca administered for 42 days reduced ventral prostatic epithelial height. TE increased ventral prostatic epithelial height and duct luminal area. These increases by TE were reduced after treatment with Red Maca for 42 days. Histology pictures in rats treated with Red Maca plus TE were similar to controls. Phytochemical screening showed that the aqueous extract of Red Maca has alkaloids, steroids, tannins, saponins, and cardiotonic glycosides. The IR spectra of the three ecotypes of Maca in the 3800-650 cm(-1) region had 7 peaks representing 7 functional chemical groups. Highest peak values were observed for Red Maca, intermediate values for Yellow Maca and low values for Black Maca. These functional groups correspond among others to benzyl glucosinolate. Conclusions

  7. Variance estimation, design effects, and sample size calculations for respondent-driven sampling.

    Science.gov (United States)

    Salganik, Matthew J

    2006-11-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling.
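
    The closing recommendation, a design effect of roughly two relative to simple random sampling, translates directly into a calculation: size the study as if it were a simple random sample and then inflate by the design effect. A minimal Python sketch under that reading (the prevalence, margin and design-effect values below are illustrative assumptions, not figures from the paper):

      from math import ceil
      from scipy.stats import norm

      def srs_size_for_proportion(p, margin, conf=0.95):
          """Sample size to estimate a proportion p within +/- margin (normal approximation)."""
          z = norm.ppf(1 - (1 - conf) / 2)
          return ceil(z**2 * p * (1 - p) / margin**2)

      def rds_size(p, margin, design_effect=2.0, conf=0.95):
          """Inflate the simple-random-sampling size by the design effect suggested for RDS."""
          return ceil(design_effect * srs_size_for_proportion(p, margin, conf))

      # Example: estimate a 30% trait prevalence to within +/- 5 percentage points.
      print(srs_size_for_proportion(0.30, 0.05))  # 323 under simple random sampling
      print(rds_size(0.30, 0.05))                 # 646 with a design effect of 2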

  8. Sample size requirement in analytical studies for similarity assessment.

    Science.gov (United States)

    Chow, Shein-Chung; Song, Fuyu; Bai, He

    2017-01-01

    For the assessment of biosimilar products, the FDA recommends a stepwise approach for obtaining the totality-of-the-evidence for assessing biosimilarity between a proposed biosimilar product and its corresponding innovative biologic product. The stepwise approach starts with analytical studies for assessing similarity in critical quality attributes (CQAs), which are relevant to clinical outcomes at various stages of the manufacturing process. For CQAs that are the most relevant to clinical outcomes, the FDA requires an equivalence test be performed for similarity assessment based on an equivalence acceptance criterion (EAC) that is obtained using a single test value of some selected reference lots. In practice, we often have extremely imbalanced numbers of reference and test lots available for the establishment of EAC. In this case, to assist the sponsors, the FDA proposed an idea for determining the number of reference lots and the number of test lots required in order not to have imbalanced sample sizes when establishing EAC for the equivalence test based on extensive simulation studies. Along this line, this article not only provides statistical justification of Dong, Tsong, and Weng's proposal, but also proposes an alternative method for sample size requirement for the Tier 1 equivalence test.

  9. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate of the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that, at a low defect rate, the confidence in detecting out-of-specification units decreases and must be compensated for by an increase in sample size to enhance the confidence of the estimate. Based on the level of knowledge acquired during PPQ and the additional knowledge required to understand the process, the sample size for CPV was calculated using Bayesian statistics to achieve a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
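
    The reported sample sizes are consistent with the zero-failure success-run relationship n = ln(1 - C) / ln(R), where C is the confidence level and R the required reliability. A short Python check, assuming a 95% confidence level (an assumption, but one that reproduces the 299, 59 and 29 figures quoted above):

      import math

      def success_run_sample_size(reliability, confidence=0.95):
          """Zero-failure (success-run) sample size: smallest n with 1 - reliability**n >= confidence."""
          return math.ceil(math.log(1 - confidence) / math.log(reliability))

      for r in (0.99, 0.95, 0.90):
          print(r, success_run_sample_size(r))
      # 0.99 -> 299, 0.95 -> 59, 0.90 -> 29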

  10. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    Science.gov (United States)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by

  11. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is essential to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or specificity. MS Excel 2007 was used to derive the values required to plot the nomogram for varying absolute precision, known disease prevalence, and a 95% confidence level, based on the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by simply moving a ruler and can be reused without redoing the calculations. It can also be applied for reverse calculations. The nomogram is not applicable to hypothesis-testing designs and applies only when both the diagnostic test and the gold standard yield dichotomous results.
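
    Nomograms of this kind are typically built on the Buderer-type formula n = z^2 * Se * (1 - Se) / (d^2 * prevalence) for sensitivity, with (1 - prevalence) in the denominator for specificity. The Python sketch below is a generic reconstruction under that assumption, not the authors' exact derivation, and the example values are illustrative:

      from math import ceil
      from scipy.stats import norm

      def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
          """Total subjects needed to estimate sensitivity within +/- precision at a given prevalence."""
          z = norm.ppf(1 - (1 - conf) / 2)
          return ceil(z**2 * sens * (1 - sens) / (precision**2 * prevalence))

      def n_for_specificity(spec, precision, prevalence, conf=0.95):
          """Same idea for specificity; the denominator uses the proportion without disease."""
          z = norm.ppf(1 - (1 - conf) / 2)
          return ceil(z**2 * spec * (1 - spec) / (precision**2 * (1 - prevalence)))

      # Example: anticipated sensitivity 0.85, +/- 0.05 precision, 20% disease prevalence.
      print(n_for_sensitivity(0.85, 0.05, 0.20))  # 980 subjects in total

    The 0.70 and 1.75 multipliers quoted for the 90% and 99% confidence levels are consistent with the squared ratios of the corresponding normal quantiles, (1.645/1.96)^2 ≈ 0.70 and (2.576/1.96)^2 ≈ 1.73.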

  12. A simulation study provided sample size guidance for differential item functioning (DIF) studies using short scales.

    Science.gov (United States)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Gundy, Chad; Koller, Michael; Petersen, Morten A; Sprangers, Mirjam A G

    2009-03-01

    Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors including scale length affect analysis of DIF by ordinal logistic regression. Simulated data, representative of HRQoL scales with four-category items, were generated. The power and type I error rates of the DIF method were then investigated when, respectively, DIF was deliberately introduced and when no DIF was added. The sample size, scale length, floor effects (FEs) and significance level were varied. When there was no DIF, type I error rates were close to 5%. Detecting moderate uniform DIF in a two-item scale required a sample size of 300 per group for adequate (>80%) power. For longer scales, a sample size of 200 was adequate. Considerably larger sample sizes were required to detect nonuniform DIF, when there were extreme FEs or when a reduced type I error rate was required. The impact of the number of items in the scale was relatively small. Ordinal logistic regression successfully detects DIF for HRQoL instruments with short scales. Sample size guidelines are provided.

  13. Differential Responses of Nitrate Reducer Community Size, Structure, and Activity to Tillage Systems

    Science.gov (United States)

    Chèneby, D.; Brauman, A.; Rabary, B.; Philippot, L.

    2009-01-01

    The main objective of this study was to determine how the size, structure, and activity of the nitrate reducer community were affected by adoption of a conservative tillage system as an alternative to conventional tillage. The experimental field, established in Madagascar in 1991, consists of plots subjected to conventional tillage or direct-seeding mulch-based cropping systems (DM), both amended with three different fertilization regimes. Comparisons of size, structure, and activity of the nitrate reducer community in samples collected from the top layer in 2005 and 2006 revealed that all characteristics of this functional community were affected by the tillage system, with increased nitrate reduction activity and numbers of nitrate reducers under DM. Nitrate reduction activity was also stimulated by combined organic and mineral fertilization but not by organic fertilization alone. In contrast, both negative and positive effects of combined organic and mineral fertilization on the size of the nitrate reducer community were observed. The size of the nitrate reducer community was a significant predictor of the nitrate reduction rates except in one treatment, which highlighted the inherent complexities in understanding the relationships between the size, diversity, and structure of functional microbial communities along environmental gradients. PMID:19304827

  14. Sample Size of One: Operational Qualitative Analysis in the Classroom

    Directory of Open Access Journals (Sweden)

    John Hoven

    2015-10-01

    Full Text Available Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one. These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.

  15. Tunable Reduced Size Planar Folded Slot Antenna Utilizing Varactor Diodes

    Science.gov (United States)

    Scardelletti, Maximilian C.; Ponchak, George E.; Jordan, Jennifer L.; Jastram, Nathan; Mahaffey, Joshua V.

    2010-01-01

    A tunable folded slot antenna that utilizes varactor diodes is presented. The antenna is fabricated on Rogers 6006 Duroid with a dielectric constant and thickness of 6.15 and 635 μm, respectively. A copper cladding layer of 17 μm defines the antenna on the top side (no ground on the backside). The antenna is fed with a CPW 50 Ω feed line, has a center frequency of 3 GHz, and incorporates Micrometrics microwave hyper-abrupt 500MHV varactors to tune the resonant frequency. The varactors have a capacitance range from 2.52 pF at 0 V to 0.4 pF at 20 V; they are placed across the radiating slot of the antenna. The tunable 10 dB bandwidth of the 3 GHz antenna is 150 MHz. The varactors also reduce the size of the antenna by 30% by capacitively loading the resonating slot line. At the center frequency, 3 GHz, the antenna has a measured return loss of 44 dB and a gain of 1.6 dBi. Full-wave electromagnetic simulations using HFSS are presented that validate the measured data. Index terms: capacitive loading, Duroid, folded slot antenna, varactor.

  16. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    /CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study...... of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced...

  17. Determining optimal sample sizes for multi-stage randomized clinical trials using value of information methods.

    Science.gov (United States)

    Willan, Andrew; Kowgier, Matthew

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the value of the information gained from the results and the cost of doing the trial. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.

  18. Implications of sampling design and sample size for national carbon accounting systems.

    Science.gov (United States)

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

    Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error relative to total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.

  19. Sample size calculations for evaluating treatment policies in multi-stage designs.

    Science.gov (United States)

    Dawson, Ree; Lavori, Philip W

    2010-12-01

    Sequential multiple assignment randomized (SMAR) designs are used to evaluate treatment policies, also known as adaptive treatment strategies (ATS). The determination of SMAR sample sizes is challenging because of the sequential and adaptive nature of ATS, and the multi-stage randomized assignment used to evaluate them. We derive sample size formulae appropriate for the nested structure of successive SMAR randomizations. This nesting gives rise to ATS that have overlapping data, and hence between-strategy covariance. We focus on the case when covariance is substantial enough to reduce sample size through improved inferential efficiency. Our design calculations draw upon two distinct methodologies for SMAR trials, using the equality of the optimal semi-parametric and Bayesian predictive estimators of standard error. This 'hybrid' approach produces a generalization of the t-test power calculation that is carried out in terms of effect size and regression quantities familiar to the trialist. Simulation studies support the reasonableness of underlying assumptions as well as the adequacy of the approximation to between-strategy covariance when it is substantial. Investigation of the sensitivity of formulae to misspecification shows that the greatest influence is due to changes in effect size, which is an a priori clinical judgment on the part of the trialist. We have restricted simulation investigation to SMAR studies of two and three stages, although the methods are fully general in that they apply to 'K-stage' trials. Practical guidance is needed to allow the trialist to size a SMAR design using the derived methods. To this end, we define ATS to be 'distinct' when they differ by at least the (minimal) size of effect deemed to be clinically relevant. Simulation results suggest that the number of subjects needed to distinguish distinct strategies will be significantly reduced by adjustment for covariance only when small effects are of interest.

  20. Threshold-dependent sample sizes for selenium assessment with stream fish tissue.

    Science.gov (United States)

    Hitt, Nathaniel P; Smith, David R

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α=0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites
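
    The parametric-bootstrap power calculation described here is straightforward to prototype. A minimal Python sketch, assuming a gamma model parameterized by an invented coefficient of variation (not the authors' fitted mean-to-variance relationship) and a one-sample t-test against the threshold as the detection rule:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def bootstrap_power(true_mean, threshold, n_fish, cv=0.4, alpha=0.05, n_sim=2000):
          """Probability that a one-sided test declares the site mean above the threshold.

          Se concentrations are simulated from a gamma distribution with the given mean and
          coefficient of variation, an assumed stand-in for the paper's mean-variance model.
          """
          shape = 1 / cv**2             # gamma shape implied by the assumed CV
          scale = true_mean / shape     # gamma scale so that shape * scale equals true_mean
          hits = 0
          for _ in range(n_sim):
              sample = rng.gamma(shape, scale, size=n_fish)
              t, p = stats.ttest_1samp(sample, threshold, alternative='greater')
              hits += p < alpha
          return hits / n_sim

      # How often does an 8-fish sample detect a mean 1 mg/kg above a 4 mg/kg threshold?
      print(bootstrap_power(true_mean=5.0, threshold=4.0, n_fish=8))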

  1. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased

  2. 40 CFR 761.243 - Standard wipe sample method and size.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment; Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples; § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas...

  3. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
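
    For readers who want the flavour of the allocation argument, the classical normal-theory analogue is simple: minimizing total cost c1*n1 + c2*n2 for a fixed variance of the mean difference gives the ratio n1/n2 = (sd1/sd2) * sqrt(c2/c1). The Python sketch below applies that textbook result; it is not the authors' trimmed-mean derivation, only the standard starting point that the paper generalizes:

      from math import ceil, sqrt
      from scipy.stats import norm

      def optimal_allocation_ratio(sd1, sd2, cost1, cost2):
          """Cost-minimizing n1/n2 for comparing two means with unequal variances and costs."""
          return (sd1 / sd2) * sqrt(cost2 / cost1)

      def sample_sizes(delta, sd1, sd2, cost1, cost2, alpha=0.05, power=0.80):
          """Group sizes reaching the target power for a two-sided z-test under the optimal ratio."""
          r = optimal_allocation_ratio(sd1, sd2, cost1, cost2)   # r = n1 / n2
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          n2 = (z / delta) ** 2 * (sd1**2 / r + sd2**2)
          return ceil(r * n2), ceil(n2)

      # Expensive low-variance group 1 versus cheap high-variance group 2.
      print(sample_sizes(delta=0.5, sd1=1.0, sd2=2.0, cost1=4.0, cost2=1.0))  # (63, 252)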

  4. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a

  5. Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas

    2013-01-01

    This paper proposes a method that reduces the computational complexity of signal reconstruction in single-channel nonuniform sampling while acquiring frequency sparse multi-band signals. Generally, this compressed sensing based signal acquisition allows a decrease in the sampling rate of frequency sparse signals, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. This proposed method reduces the size of the dictionary matrix, the size of the measurement matrix and the number of iterations of the reconstruction algorithm in comparison to the direct single-channel approach. We consider an orthogonal matching pursuit reconstruction algorithm for single-channel sampling and its modification for multi-coset sampling. Theoretical as well as numerical...

  6. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
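
    The calculation at issue is the familiar two-sample formula with the pilot standard deviation plugged in for the unknown population value; the article's point is that this plug-in step adds uncertainty that the naive formula ignores. A Python sketch of the naive normal-approximation version (an assumed baseline, not the authors' corrected procedure):

      from math import ceil
      from scipy.stats import norm

      def n_per_group(effect, pilot_sd, alpha=0.05, power=0.80):
          """Naive per-group size for a two-sample comparison, treating the pilot SD as known."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return ceil(2 * (z * pilot_sd / effect) ** 2)

      # Detect a difference of 5 units when a pilot study suggests SD = 10.
      print(n_per_group(effect=5.0, pilot_sd=10.0))  # 63 per group before any correction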

  7. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  8. (Sample) size matters! An examination of sample size from the SPRINT trial study to prospectively evaluate reamed intramedullary nails in patients with tibial fractures

    NARCIS (Netherlands)

    Bhandari, Mohit; Tornetta, Paul; Rampersad, Shelly-Ann; Sprague, Sheila; Heels-Ansdell, Diane; Sanders, David W.; Schemitsch, Emil H.; Swiontkowski, Marc; Walter, Stephen; Guyatt, Gordon; Buckingham, Lisa; Leece, Pamela; Viveiros, Helena; Mignott, Tashay; Ansell, Natalie; Sidorkewicz, Natalie; Agel, Julie; Bombardier, Claire; Berlin, Jesse A.; Bosse, Michael; Browner, Bruce; Gillespie, Brenda; O'Brien, Peter; Poolman, Rudolf; Macleod, Mark D.; Carey, Timothy; Leitch, Kellie; Bailey, Stuart; Gurr, Kevin; Konito, Ken; Bartha, Charlene; Low, Isolina; MacBean, Leila V.; Ramu, Mala; Reiber, Susan; Strapp, Ruth; Tieszer, Christina; Kreder, Hans; Stephen, David J. G.; Axelrod, Terry S.; Yee, Albert J. M.; Richards, Robin R.; Finkelstein, Joel; Holtby, Richard M.; Cameron, Hugh; Cameron, John; Gofton, Wade; Murnaghan, John; Schatztker, Joseph; Bulmer, Beverly; Conlan, Lisa; Laflamme, Yves; Berry, Gregory; Beaumont, Pierre; Ranger, Pierre; Laflamme, Georges-Henri; Jodoin, Alain; Renaud, Eric; Gagnon, Sylvain; Maurais, Gilles; Malo, Michel; Fernandes, Julio; Latendresse, Kim; Poirier, Marie-France; Daigneault, Gina; McKee, Michael M.; Waddell, James P.; Bogoch, Earl R.; Daniels, Timothy R.; McBroom, Robert R.; Vicente, Milena R.; Storey, Wendy; Wild, Lisa M.; McCormack, Robert; Perey, Bertrand; Goetz, Thomas J.; Pate, Graham; Penner, Murray J.; Panagiotopoulos, Kostas; Pirani, Shafique; Dommisse, Ian G.; Loomer, Richard L.; Stone, Trevor; Moon, Karyn; Zomar, Mauri; Webb, Lawrence X.; Teasdall, Robert D.; Birkedal, John Peter; Martin, David Franklin; Ruch, David S.; Kilgus, Douglas J.; Pollock, David C.; Harris, Mitchel Brion; Wiesler, Ethan Ron; Ward, William G.; Shilt, Jeffrey Scott; Koman, Andrew L.; Poehling, Gary G.; Kulp, Brenda; Creevy, William R.; Stein, Andrew B.; Bono, Christopher T.; Einhorn, Thomas A.; Brown, T. Desmond; Pacicca, Donna; Sledge, John B.; Foster, Timothy E.; Voloshin, Ilva; Bolton, Jill; Carlisle, Hope; Shaughnessy, Lisa; Ombremsky, William T.; LeCroy, C. Michael; Meinberg, Eric G.; Messer, Terry M.; Craig, William L.; Dirschl, Douglas R.; Caudle, Robert; Harris, Tim; Elhert, Kurt; Hage, William; Jones, Robert; Piedrahita, Luis; Schricker, Paul O.; Driver, Robin; Godwin, Jean; Hansley, Gloria; Obremskey, William Todd; Kregor, Philip James; Tennent, Gregory; Truchan, Lisa M.; Sciadini, Marcus; Shuler, Franklin D.; Driver, Robin E.; Nading, Mary Alice; Neiderstadt, Jacky; Vap, Alexander R.; Vallier, Heather A.; Patterson, Brendan M.; Wilber, John H.; Wilber, Roger G.; Sontich, John K.; Moore, Timothy Alan; Brady, Drew; Cooperman, Daniel R.; Davis, John A.; Cureton, Beth Ann; Mandel, Scott; Orr, R. Douglas; Sadler, John T. S.; Hussain, Tousief; Rajaratnam, Krishan; Petrisor, Bradley; Drew, Brian; Bednar, Drew A.; Kwok, Desmond C. 
H.; Pettit, Shirley; Hancock, Jill; Cole, Peter A.; Smith, Joel J.; Brown, Gregory A.; Lange, Thomas A.; Stark, John G.; Levy, Bruce; Swiontkowski, Marc F.; Garaghty, Mary J.; Salzman, Joshua G.; Schutte, Carol A.; Tastad, Linda Toddie; Vang, Sandy; Seligson, David; Roberts, Craig S.; Malkani, Arthur L.; Sanders, Laura; Gregory, Sharon Allen; Dyer, Carmen; Heinsen, Jessica; Smith, Langan; Madanagopal, Sudhakar; Coupe, Kevin J.; Tucker, Jeffrey J.; Criswell, Allen R.; Buckle, Rosemary; Rechter, Alan Jeffrey; Sheth, Dhiren Shaskikant; Urquart, Brad; Trotscher, Thea; Anders, Mark J.; Kowalski, Joseph M.; Fineberg, Marc S.; Bone, Lawrence B.; Phillips, Matthew J.; Rohrbacher, Bernard; Stegemann, Philip; Mihalko, William M.; Buyea, Cathy; Augustine, Stephen J.; Jackson, William Thomas; Solis, Gregory; Ero, Sunday U.; Segina, Daniel N.; Berrey, Hudson B.; Agnew, Samuel G.; Fitzpatrick, Michael; Campbell, Lakina C.; Derting, Lynn; McAdams, June; Goslings, J. Carel; Ponsen, Kees Jan; Luitse, Jan; Kloen, Peter; Joosse, Pieter; Winkelhagen, Jasper; Duivenvoorden, Raphaël; Teague, David C.; Davey, Joseph; Sullivan, J. Andy; Ertl, William J. J.; Puckett, Timothy A.; Pasque, Charles B.; Tompkins, John F.; Gruel, Curtis R.; Kammerlocher, Paul; Lehman, Thomas P.; Puffinbarger, William R.; Carl, Kathy L.; Weber, Donald W.; Jomha, Nadr M.; Goplen, Gordon R.; Masson, Edward; Beaupre, Lauren A.; Greaves, Karen E.; Schaump, Lori N.; Jeray, Kyle J.; Goetz, David R.; Westberry, Davd E.; Broderick, J. Scott; Moon, Bryan S.; Tanner, Stephanie L.; Powell, James N.; Buckley, Richard E.; Elves, Leslie; Connolly, Stephen; Abraham, Edward P.; Eastwood, Donna; Steele, Trudy; Ellis, Thomas; Herzberg, Alex; Brown, George A.; Crawford, Dennis E.; Hart, Robert; Hayden, James; Orfaly, Robert M.; Vigland, Theodore; Vivekaraj, Maharani; Bundy, Gina L.; Miclau, Theodore; Matityahu, Amir; Coughlin, R. Richard; Kandemir, Utku; McClellan, R. Trigg; Lin, Cindy Hsin-Hua; Karges, David; Cramer, Kathryn; Watson, J. Tracy; Moed, Berton; Scott, Barbara; Beck, Dennis J.; Orth, Carolyn; Puskas, David; Clark, Russell; Jones, Jennifer; Egol, Kenneth A.; Paksima, Nader; France, Monet; Wai, Eugene K.; Johnson, Garth; Wilkinson, Ross; Gruszczynski, Adam T.; Vexler, Liisa

    2013-01-01

    Inadequate sample size and power in randomized trials can result in misleading findings. This study demonstrates the effect of sample size in a large clinical trial by evaluating the results of the Study to Prospectively evaluate Reamed Intramedullary Nails in Patients with Tibial fractures (SPRINT)

  9. Size variation in samples of fossil and recent murid teeth

    NARCIS (Netherlands)

    Freudenthal, M.; Martín Suárez, E.

    1990-01-01

    The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed and compared with those obtained for the Cricetidae.

  10. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    Science.gov (United States)

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
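
    The Monte Carlo idea is easy to illustrate: simulate data from the model of interest many times and record how often the coefficient of interest is declared significant. The sketch below does this for a single slope in a simple linear regression, written in Python rather than the article's R, with an entirely illustrative effect size and error model:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)

      def mc_power(n, beta=0.3, noise_sd=1.0, alpha=0.05, n_sim=1000):
          """Monte Carlo power for detecting one slope in a simple linear regression."""
          hits = 0
          for _ in range(n_sim):
              x = rng.normal(size=n)
              y = beta * x + rng.normal(scale=noise_sd, size=n)
              fit = sm.OLS(y, sm.add_constant(x)).fit()
              hits += fit.pvalues[1] < alpha
          return hits / n_sim

      # Estimated power at candidate sample sizes; pick the smallest n clearing the target.
      for n in (50, 100, 200):
          print(n, mc_power(n))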

  11. The effects of focused transducer geometry and sample size on the measurement of ultrasonic transmission properties

    Science.gov (United States)

    Atkins, T. J.; Humphrey, V. F.; Duck, F. A.; Tooley, M. A.

    2011-02-01

    The response of two coaxially aligned weakly focused ultrasonic transducers, typical of those employed for measuring the attenuation of small samples using the immersion method, has been investigated. The effects of the sample size on transmission measurements have been analyzed by integrating the sound pressure distribution functions of the radiator and receiver over different limits to determine the size of the region that contributes to the system response. The results enable the errors introduced into measurements of attenuation to be estimated as a function of sample size. A theoretical expression has been used to examine how the transducer separation affects the receiver output. The calculations are compared with an experimental study of the axial response of three unpaired transducers in water. The separation of each transducer pair giving the maximum response was determined, and compared with the field characteristics of the individual transducers. The optimum transducer separation, for accurate estimation of sample properties, was found to fall between the sum of the focal distances and the sum of the geometric focal lengths as this reduced diffraction errors.

  12. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    Science.gov (United States)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.

  13. Practical Approaches For Determination Of Sample Size In Paired Case-Control Studies

    OpenAIRE

    Demirel, Neslihan; Ozlem EGE ORUC; Gurler, Selma

    2016-01-01

    Objective: Cross-over designs or paired case-control studies used in clinical research are experimental designs that require dependent samples. Sample size determination is generally a difficult step in planning the statistical design. The aim of this study is to provide researchers with a practical approach for determining the sample size in paired case-control studies. Material and Methods: In this study, the determination of sample size is discussed in detail i...

  14. Limitations of mRNA amplification from small-size cell samples

    Directory of Open Access Journals (Sweden)

    Myklebost Ola

    2005-10-01

    Full Text Available Abstract Background Global mRNA amplification has become a widely used approach to obtain gene expression profiles from limited material. An important concern is the reliable reflection of the starting material in the results obtained. This is especially important with extremely low quantities of input RNA where stochastic effects due to template dilution may be present. This aspect remains under-documented in the literature, as quantitative measures of data reliability are most often lacking. To address this issue, we examined the sensitivity levels of each transcript in 3 different cell sample sizes. ANOVA was used to estimate the overall effects of reduced input RNA in our experimental design. In order to estimate the validity of decreasing sample sizes, we examined the sensitivity levels of each transcript by applying a novel model-based method, TransCount. Results From expression data, TransCount provided estimates of absolute transcript concentrations in each examined sample. The results from TransCount were used to calculate the Pearson correlation coefficient between transcript concentrations for different sample sizes. The correlations were clearly transcript copy number dependent. A critical level was observed where stochastic fluctuations became significant. The analysis allowed us to pinpoint the gene-specific number of transcript templates that defined the limit of reliability with respect to the number of cells from that particular source. In the sample amplified from 1000 cells, transcripts expressed with at least 121 transcripts/cell were statistically reliable and for 250 cells, the limit was 1806 transcripts/cell. Above these thresholds, correlation between our data sets was at acceptable values for reliable interpretation. Conclusion These results imply that the reliability of any amplification experiment must be validated empirically to justify that any gene exists in sufficient quantity in the input material. This

  15. Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !

    NARCIS (Netherlands)

    van Breukelen, Gerard J.P.; Candel, Math J.J.M.

    2012-01-01

    Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
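
    The quantity at the heart of such guidelines is the design effect 1 + (m - 1) * rho, where m is the cluster size and rho the intraclass correlation; the number of clusters then follows from an individually randomized sample size. The Python sketch below shows that standard relationship with illustrative numbers; it is not the authors' specific equation, which additionally handles unknown rho and varying cluster sizes:

      from math import ceil
      from scipy.stats import norm

      def n_individual(effect_size, alpha=0.05, power=0.80):
          """Per-arm size for a two-arm comparison of means under individual randomization."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return 2 * (z / effect_size) ** 2

      def clusters_per_arm(effect_size, cluster_size, icc, alpha=0.05, power=0.80):
          """Clusters per arm after inflating by the design effect 1 + (m - 1) * icc."""
          deff = 1 + (cluster_size - 1) * icc
          return ceil(n_individual(effect_size, alpha, power) * deff / cluster_size)

      # Standardized effect 0.3, 20 subjects per cluster, ICC = 0.05.
      print(clusters_per_arm(0.3, 20, 0.05))  # 18 clusters per arm under these assumptions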

  16. High Speed Gear Sized and Configured to Reduce Windage Loss

    Science.gov (United States)

    Kunz, Robert F. (Inventor); Medvitz, Richard B. (Inventor); Hill, Matthew John (Inventor)

    2013-01-01

    A gear, and a drive system utilizing the gear, include teeth. Each of the teeth has a first side and a second side opposite the first side that extends from a body of the gear. For each tooth of the gear, a first extended portion is attached to the first side of the tooth to divert flow of fluid adjacent to the body of the gear to reduce windage losses that occur when the gear rotates. The gear may be utilized in drive systems with high rotational speeds, such as speeds where the tip velocities are greater than or equal to about 68 m/s. Some embodiments of the gear may also utilize teeth that have second extended portions attached to the second sides of the teeth to divert flow of fluid adjacent to the body of the gear to reduce windage losses that occur when the gear rotates.

  17. How many is enough? Determining optimal sample sizes for normative studies in pediatric neuropsychology.

    Science.gov (United States)

    Bridges, Ana J; Holler, Karen A

    2007-11-01

    The purpose of this investigation was to determine how confidence intervals (CIs) for pediatric neuropsychological norms vary as a function of sample size, and to determine optimal sample sizes for normative studies. First, the authors calculated 95% CIs for a set of published pediatric norms for four commonly used neuropsychological instruments. Second, 95% CIs were calculated for varying sample size (from n = 5 to n = 500). Results suggest that some pediatric norms have unacceptably wide CIs, and normative studies ought optimally to use 50 to 75 participants per cell. Smaller sample sizes may lead to overpathologizing results, while the cost of obtaining larger samples may not be justifiable.

  18. Testing new submersible pumps for proper sizing and reduced costs

    Energy Technology Data Exchange (ETDEWEB)

    O' Toole, W.P.; O' Brien, J.B.

    1989-02-01

    This paper describes an ongoing program to improve overall submersible pump performance by Thums Long Beach Co., acting as contractor for the City of Long Beach, operator of the Long Beach Unit. Thums Long Beach Co. currently operates 700 submersible pump installations located on four manmade islands and one landfill pier location. The program began with spot testing of submersible pumps for Thums' use. It has evolved to 100% pump testing and the stipulation that only pumps with newly manufactured parts are acceptable. The primary goals of this program are to increase well production and to lower lifting costs. Critical to these goals is increasing the average length of run by using accurate pump-performance data to design equipment and by rejecting defective pumps before they are run. Increased production is realized from better designs. Lower lifting costs result from using more efficient pumps and a reduced frequency of pulling submersible equipment.

  19. Testing new submersible pumps for proper sizing and reduced costs

    Energy Technology Data Exchange (ETDEWEB)

    O' Toole, W.P.; O' Brien, J.B.

    1986-01-01

    This paper describes an ongoing program to improve overall submersible pump performance by Thums Long Beach Company, acting as Contractor of the City of Long Beach, Operator of the Long Beach Unit. Thums Long Beach Company currently operates 700 submersible pump installations located on four man-made islands and one land fill pier location. The program began with spot testing of submersible pumps for Thums' use. It has evolved to 100 percent pump testing and the stipulation that only pumps with newly manufactured parts are acceptable. The primary goals of this program are to increase well production and lower lifting costs. Critical to these goals is increasing the average length of run by using accurate pump performance data to design equipment and by rejecting defective pumps before they are run. Increased production is realized from better designs. Lower lifting costs result from utilizing higher efficiency pumps and a reduced frequency of pulling submersible equipment.

  20. Sample size and power determination when limited preliminary information is available

    Directory of Open Access Journals (Sweden)

    Christine E. McLaren

    2017-04-01

    Full Text Available Abstract Background We describe a novel strategy for power and sample size determination developed for studies utilizing investigational technologies with limited available preliminary data, specifically of imaging biomarkers. We evaluated diffuse optical spectroscopic imaging (DOSI), an experimental noninvasive imaging technique that may be capable of assessing changes in mammographic density. Because there is significant evidence that tamoxifen treatment is more effective at reducing breast cancer risk when accompanied by a reduction of breast density, we designed a study to assess the changes from baseline in DOSI imaging biomarkers that may reflect fluctuations in breast density in premenopausal women receiving tamoxifen. Method While preliminary data demonstrate that DOSI is sensitive to mammographic density in women about to receive neoadjuvant chemotherapy for breast cancer, there is no information on DOSI in tamoxifen treatment. Since the relationship between magnetic resonance imaging (MRI) and DOSI has been established in previous studies, we developed a statistical simulation approach utilizing information from an investigation of MRI assessment of breast density in 16 women before and after treatment with tamoxifen to estimate the changes in DOSI biomarkers due to tamoxifen. Results Three sets of 10,000 pairs of MRI breast density data with correlation coefficients of 0.5, 0.8 and 0.9 were generated and used to simulate a corresponding 5,000,000 pairs of DOSI values representing water, ctHHB, and lipid. Minimum sample sizes needed per group for specified clinically-relevant effect sizes were obtained. Conclusion The simulation techniques we describe can be applied in studies of other experimental technologies to obtain the important preliminary data to inform the power and sample size calculations.

  1. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    OpenAIRE

    Wolf, Erika J.; Harrington, Kelly M.; Shaunna L Clark; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb. This study used Monte Carlo data simulation techniques to evaluate sample size requirements for common applied SEMs. Across a series of simulations, we...

  2. Bayesian sample size determination for a clinical trial with correlated continuous and binary outcomes.

    Science.gov (United States)

    Stamey, James D; Natanegara, Fanni; Seaman, John W

    2013-01-01

    In clinical trials, multiple outcomes are often collected in order to simultaneously assess effectiveness and safety. We develop a Bayesian procedure for determining the required sample size in a regression model where a continuous efficacy variable and a binary safety variable are observed. The sample size determination procedure is simulation based. The model accounts for correlation between the two variables. Through examples we demonstrate that savings in total sample size are possible when the correlation between these two variables is sufficiently high.

  3. Issues of sample size in sensitivity and specificity analysis with special reference to oncology

    Directory of Open Access Journals (Sweden)

    Atul Juneja

    2015-01-01

    Full Text Available Sample size is one of the basic issues that medical researchers, including oncologists, face in any research program. The current communication discusses the computation of sample size when sensitivity and specificity are being evaluated. The article presents situations that the researcher can easily visualize for the appropriate use of sample size techniques for sensitivity and specificity when a screening method for the early detection of cancer is in question. Moreover, the researcher would be in a position to communicate efficiently with a statistician about sample size computation and, most importantly, the applicability of the results under the conditions of the negotiated precision.

  4. Issues of sample size in sensitivity and specificity analysis with special reference to oncology.

    Science.gov (United States)

    Juneja, Atul; Sharma, Shashi

    2015-01-01

    Sample size is one of the basic issues that medical researchers, including oncologists, face in any research program. The current communication discusses the computation of sample size when sensitivity and specificity are being evaluated. The article presents situations that the researcher can easily visualize for the appropriate use of sample size techniques for sensitivity and specificity when a screening method for the early detection of cancer is in question. Moreover, the researcher would be in a position to communicate efficiently with a statistician about sample size computation and, most importantly, the applicability of the results under the conditions of the negotiated precision.

  5. Sample Size for Measuring Grammaticality in Preschool Children from Picture-Elicited Language Samples

    Science.gov (United States)

    Eisenberg, Sarita L.; Guo, Ling-Yu

    2015-01-01

    Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…

  6. Randomized controlled trials 5: Determining the sample size and power for clinical trials and cohort studies.

    Science.gov (United States)

    Greene, Tom

    2015-01-01

    Performing well-powered randomized controlled trials is of fundamental importance in clinical research. The goal of sample size calculations is to assure that statistical power is acceptable while maintaining a small probability of a type I error. This chapter overviews the fundamentals of sample size calculation for standard types of outcomes for two-group studies. It considers (1) the problems of determining the size of the treatment effect that the studies will be designed to detect, (2) the modifications to sample size calculations to account for loss to follow-up and nonadherence, (3) the options when initial calculations indicate that the feasible sample size is insufficient to provide adequate power, and (4) the implication of using multiple primary endpoints. Sample size estimates for longitudinal cohort studies must take account of confounding by baseline factors.
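
    Point (2), adjusting for loss to follow-up and nonadherence, is often handled with two simple inflation factors: divide by (1 - loss rate) for participants lost before outcome assessment, and by (1 - drop-in - drop-out)^2 for the dilution of the treatment effect under intention-to-treat analysis. A Python sketch of that common rule of thumb, which may differ in detail from the chapter's own recommendations; all rates here are illustrative:

      from math import ceil
      from scipy.stats import norm

      def base_n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
          """Per-arm size for comparing two proportions (pooled normal approximation)."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          p_bar = (p_control + p_treatment) / 2
          return 2 * p_bar * (1 - p_bar) * (z / (p_control - p_treatment)) ** 2

      def adjusted_n_per_arm(p_control, p_treatment, loss=0.10, drop_in=0.05, drop_out=0.05,
                             alpha=0.05, power=0.80):
          """Inflate for loss to follow-up and for effect dilution due to nonadherence."""
          n = base_n_per_arm(p_control, p_treatment, alpha, power)
          n /= (1 - drop_in - drop_out) ** 2   # diluted treatment effect under intention-to-treat
          n /= (1 - loss)                      # participants lost before outcome assessment
          return ceil(n)

      print(adjusted_n_per_arm(p_control=0.30, p_treatment=0.20))  # 404 per arm, up from roughly 294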

  7. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample size ensuring CI95/med ≤ 10 %, ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times its actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
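
    The following sketch illustrates the resampling logic behind such a survey: for increasing sample sizes, repeatedly draw samples from a large dose collection and measure how wide the spread of the sample summary is relative to the full-collection median (CI95/med). The lognormal data are simulated placeholders, not the CTDIvol/DLP values from the study.

```python
# Hedged sketch of the resampling logic: how wide is the spread of a sample's dose
# summary relative to the full-collection median as the sample size grows?
# The lognormal values are simulated placeholders, not the study's CTDIvol/DLP data.
import numpy as np

rng = np.random.default_rng(0)
all_dlp = rng.lognormal(mean=6.0, sigma=0.5, size=20000)    # stand-in for a dose collection
true_median = np.median(all_dlp)

for n in (10, 20, 50, 100, 200, 500, 900):
    medians = [np.median(rng.choice(all_dlp, size=n, replace=False)) for _ in range(2000)]
    lo, hi = np.percentile(medians, [2.5, 97.5])
    ci95_over_med = 100 * (hi - lo) / true_median           # CI95 width as a percentage of the median
    print(f"n={n:4d}  CI95/med = {ci95_over_med:5.1f} %")
```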

  8. Relative power and sample size analysis on gene expression profiling data

    Directory of Open Access Journals (Sweden)

    den Dunnen JT

    2009-09-01

    Full Text Available Abstract Background With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample type, preparation and processing. Objective measures that help experimental design, given own pilot data, are thus fundamental. Results Relative power and sample size analysis were performed on two distinct data sets. The first set consisted of Affymetrix array data derived from a nutrigenomics experiment in which weak, intermediate and strong PPARα agonists were administered to wild-type and PPARα-null mice. Our analysis confirms the hierarchy of PPARα-activating compounds previously reported and the general idea that larger effect sizes positively contribute to the average power of the experiment. A simulation experiment was performed that mimicked the effect sizes seen in the first data set. The relative power was predicted but the estimates were slightly conservative. The second, more challenging, data set describes a microarray platform comparison study using hippocampal δC-doublecortin-like kinase transgenic mice that were compared to wild-type mice, which was combined with results from Solexa/Illumina deep sequencing runs. As expected, the choice of technology greatly influences the performance of the experiment. Solexa/Illumina deep sequencing has the highest overall power followed by the microarray platforms Agilent and Affymetrix. Interestingly, Solexa/Illumina deep sequencing displays comparable power across all intensity ranges, in contrast with microarray platforms that have decreased power in the low intensity range due to background noise. This means that deep sequencing technology is especially more powerful in detecting differences in the low intensity range, compared to microarray platforms. Conclusion Power and sample size analysis

  9. Relative power and sample size analysis on gene expression profiling data

    Science.gov (United States)

    van Iterson, M; 't Hoen, PAC; Pedotti, P; Hooiveld, GJEJ; den Dunnen, JT; van Ommen, GJB; Boer, JM; Menezes, RX

    2009-01-01

    Background With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample type, preparation and processing. Objective measures that help experimental design, given own pilot data, are thus fundamental. Results Relative power and sample size analysis were performed on two distinct data sets. The first set consisted of Affymetrix array data derived from a nutrigenomics experiment in which weak, intermediate and strong PPARα agonists were administered to wild-type and PPARα-null mice. Our analysis confirms the hierarchy of PPARα-activating compounds previously reported and the general idea that larger effect sizes positively contribute to the average power of the experiment. A simulation experiment was performed that mimicked the effect sizes seen in the first data set. The relative power was predicted but the estimates were slightly conservative. The second, more challenging, data set describes a microarray platform comparison study using hippocampal δC-doublecortin-like kinase transgenic mice that were compared to wild-type mice, which was combined with results from Solexa/Illumina deep sequencing runs. As expected, the choice of technology greatly influences the performance of the experiment. Solexa/Illumina deep sequencing has the highest overall power followed by the microarray platforms Agilent and Affymetrix. Interestingly, Solexa/Illumina deep sequencing displays comparable power across all intensity ranges, in contrast with microarray platforms that have decreased power in the low intensity range due to background noise. This means that deep sequencing technology is especially more powerful in detecting differences in the low intensity range, compared to microarray platforms. Conclusion Power and sample size analysis based on pilot data give

  10. Sizing Optimization and Strength Analysis for Spread-type Gear Reducers

    Directory of Open Access Journals (Sweden)

    Wei-Hsuan Hsu

    2014-08-01

    Full Text Available Reducers are now being developed towards customization and cost saving. In this study, a sizing program for the reducer has been developed in order to replace the manual sizing process. We optimize the total center distance of the gear reducer to reduce gear volume and weight, while checking constraints such as tooth root bending strength, tooth contact strength, gear shaft endangered cross-section, bearing life, gear shaft deflection, and torsion angle deformation, in order to ensure reliable drive strength. Comparisons of sizes and weights before and after optimization confirm that the purpose of reducing production cost is achieved.

  11. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Science.gov (United States)

    2010-10-01

    ... applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than... Using Finite Population Correction The FPC is not applied when the sample is drawn from a population of... 45 Public Welfare 4 2010-10-01 2010-10-01 false Calculating Sample Size for NYTD Follow-Up...

  12. Sample size calculations for pilot randomized trials: a confidence interval approach.

    Science.gov (United States)

    Cocks, Kim; Torgerson, David J

    2013-02-01

    To describe a method using confidence intervals (CIs) to estimate the sample size for a pilot randomized trial. Using one-sided CIs and the estimated effect size that would be sought in a large trial, we calculated the sample size needed for pilot trials. Using an 80% one-sided CI, we estimated that a pilot trial should have at least 9% of the sample size of the main planned trial. Using the estimated effect size difference for the main trial and using a one-sided CI, this allows us to calculate a sample size for a pilot trial, which will make its results more useful than at present. Copyright © 2013 Elsevier Inc. All rights reserved.
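
    The approximately 9% figure can be reproduced with a short back-of-the-envelope calculation: the pilot trial is sized so that an 80% one-sided confidence interval has the same relationship to the targeted effect as the main trial's alpha and power. The algebra below is one plausible reading of that argument, not code taken from the paper.

```python
# Back-of-the-envelope reproduction of the "about 9%" pilot-to-main ratio,
# assuming a two-sided alpha of 0.05 and 80% power for the main trial and an
# 80% one-sided confidence interval for the pilot.
from scipy.stats import norm

alpha, power, pilot_conf = 0.05, 0.80, 0.80
z_main = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # 1.96 + 0.84 for the main trial
z_pilot = norm.ppf(pilot_conf)                       # 0.84 for an 80% one-sided CI

ratio = (z_pilot / z_main) ** 2
print(f"pilot size as a fraction of the main trial: {ratio:.3f}")   # about 0.09
```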

  13. Particle size distribution and chemical composition of total mixed rations for dairy cattle: water addition and feed sampling effects.

    Science.gov (United States)

    Arzola-Alvarez, C; Bocanegra-Viezca, J A; Murphy, M R; Salinas-Chavira, J; Corral-Luna, A; Romanos, A; Ruíz-Barrera, O; Rodríguez-Muela, C

    2010-09-01

    Four dairy farms were used to determine the effects of water addition to diets and sample collection location on the particle size distribution and chemical composition of total mixed rations (TMR). Samples were collected weekly from the mixing wagon and from 3 locations in the feed bunk (top, middle, and bottom) for 5 mo (April, May, July, August, and October). Samples were partially dried to determine the effect of moisture on particle size distribution. Particle size distribution was measured using the Penn State Particle Size Separator. Crude protein, neutral detergent fiber, and acid detergent fiber contents were also analyzed. Particle fractions 19 to 8, 8 to 1.18, and 19 mm was greater than recommended for TMR, according to the guidelines of Cooperative Extension of Pennsylvania State University. The particle size distribution in April differed from that in October, but intermediate months (May, July, and August) had similar particle size distributions. Samples from the bottom of the feed bunk had the highest percentage of particles retained on the 19-mm sieve. Samples from the top and middle of the feed bunk were similar to that from the mixing wagon. Higher percentages of particles were retained on >19, 19 to 8, and 8 to 1.18 mm sieves for wet than dried samples. The reverse was found for particles passing the 1.18-mm sieve. Mean particle size was higher for wet than dried samples. The crude protein, neutral detergent fiber, and acid detergent fiber contents of TMR varied with month of sampling (18-21, 40-57, and 21-34%, respectively) but were within recommended ranges for high-yielding dairy cows. Analyses of TMR particle size distributions are useful for proper feed bunk management and formulation of diets that maintain rumen function and maximize milk production and quality. Water addition may help reduce dust associated with feeding TMR. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    Interestingly, the dislocation plasticity of single-crystal AlN strongly depends on specimen size. As shown in Fig. 5a and b, the large plastic… (ARL-RP-0528, US Army Research Laboratory, August 2015: Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride)

  15. Not too big, not too small: a goldilocks approach to sample size selection.

    Science.gov (United States)

    Broglio, Kristine R; Connor, Jason T; Berry, Scott M

    2014-01-01

    We present a Bayesian adaptive design for a confirmatory trial to select a trial's sample size based on accumulating data. During accrual, frequent sample size selection analyses are made and predictive probabilities are used to determine whether the current sample size is sufficient or whether continuing accrual would be futile. The algorithm explicitly accounts for complete follow-up of all patients before the primary analysis is conducted. We refer to this as a Goldilocks trial design, as it is constantly asking the question, "Is the sample size too big, too small, or just right?" We describe the adaptive sample size algorithm, describe how the design parameters should be chosen, and show examples for dichotomous and time-to-event endpoints.
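
    A minimal sketch of the predictive-probability machinery underlying such designs, reduced to a single-arm dichotomous endpoint: given interim data, compute the probability that the final analysis at the current sample size will clear its success bar. The prior, success threshold and interim numbers are illustrative assumptions rather than the design parameters described in the paper.

```python
# Illustrative predictive-probability calculation for a single-arm dichotomous
# endpoint; prior, success bar and interim numbers are assumptions, not the
# paper's design parameters.
import numpy as np
from scipy.special import betaln, gammaln
from scipy.stats import beta

def log_comb(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def predictive_probability(x, m, n_final, p0=0.30, bar=0.975, a=1.0, b=1.0):
    """Probability, given x responses in the first m patients, that the final
    analysis of n_final patients will show posterior P(p > p0) > bar."""
    n_rem = n_final - m
    a_post, b_post = a + x, b + m - x
    pp = 0.0
    for y in range(n_rem + 1):
        # beta-binomial probability of y further responses among the remaining patients
        log_prob = (log_comb(n_rem, y)
                    + betaln(a_post + y, b_post + n_rem - y)
                    - betaln(a_post, b_post))
        final_ok = (1 - beta.cdf(p0, a + x + y, b + n_final - x - y)) > bar
        pp += np.exp(log_prob) * final_ok
    return pp

# e.g. 14 responses among the first 30 patients, current sample size 50
print(round(predictive_probability(x=14, m=30, n_final=50), 3))
```

    If this probability is very high, accrual could stop at the current size; if it is very low, continuing would be futile, which is the "too big, too small, or just right" question the design keeps asking.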

  16. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    Science.gov (United States)

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

  17. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2017-09-27

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Strategies for informed sample size reduction in adaptive controlled clinical trials

    Science.gov (United States)

    Arandjelović, Ognjen

    2017-12-01

    Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which is an outstanding research problem, is to be found in the question of how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most of the previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, respectively, using (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with a consistently superior performance exhibited by the pair-wise sample compatibilities-based method.

  19. Clinical trials with nested subgroups: Analysis, sample size determination and internal pilot studies.

    Science.gov (United States)

    Placzek, Marius; Friede, Tim

    2017-01-01

    The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for the analysis and sample size determination. First, we consider the joint distribution of standardized test statistics that correspond to each (sub)population. We derive multivariate exact distributions where possible, providing approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainties about nuisance parameters which are needed for sample size calculations make the study prone to misspecifications. We discuss how a sample size review can be performed in order to make the study more robust. To this end, we implement an internal pilot study design where the variances and prevalences of the subgroups are reestimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not inflate the type I error significantly and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and attain a lower boundary for the size of the internal pilot study.

  20. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants and subsequent sizes obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap resampling with replacement. The sample size for each correlation coefficient was determined as the size at which the 95% confidence interval width was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation, and a larger sample size is accordingly necessary to estimate them. Linear relations involving variables dealing with size and number of fruits per plant are estimated with less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a 95% confidence interval width of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
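
    A simplified sketch of the bootstrap procedure described above: for each planned sample size, resample with replacement, compute the Pearson correlation, and return the smallest size whose 95% percentile interval is no wider than 0.4. The bivariate data below are simulated stand-ins for the cherry tomato measurements.

```python
# Simplified sketch of the bootstrap search: smallest planned size whose 95%
# percentile interval for the Pearson correlation is no wider than 0.4.
# The bivariate data are simulated stand-ins for the cherry tomato variables.
import numpy as np

rng = np.random.default_rng(42)

def required_n(x, y, sizes, max_width=0.4, n_boot=3000):
    for n in sizes:
        r_vals = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(x), size=n)            # bootstrap resample of n plants
            r_vals.append(np.corrcoef(x[idx], y[idx])[0, 1])
        lo, hi = np.percentile(r_vals, [2.5, 97.5])
        if hi - lo <= max_width:
            return n
    return None

# toy data with a moderate linear relation (correlation around 0.5)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(scale=0.9, size=500)
print(required_n(x, y, sizes=range(25, 401, 25)))
```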

  1. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous

  2. Determining sample size and a passing criterion for respirator fit-test panels.

    Science.gov (United States)

    Landsittel, D; Zhuang, Z; Newcomb, W; Berry Ann, R

    2014-01-01

    Few studies have proposed methods for sample size determination and specification of a passing criterion (e.g., number needed to pass from a given size panel) for respirator fit-tests. One approach is to account for between- and within-subject variability, and thus take full advantage of the multiple donning measurements within subject, using a random effects model. The corresponding sample size calculation, however, may be difficult to implement in practice, as it depends on the model-specific and test panel-specific variance estimates, and thus does not yield a single sample size or specific cutoff for the number needed to pass. A simple binomial approach is therefore proposed to simultaneously determine both the required sample size and the optimal cutoff for the number of subjects needed to achieve a passing result. The method essentially conducts a global search of the type I and type II errors under different null and alternative hypotheses, across the range of possible sample sizes, to find the lowest sample size which yields at least one cutoff satisfying, or approximately satisfying, all pre-determined limits for the different error rates. Benchmark testing of 98 respirators (conducted by the National Institute for Occupational Safety and Health) is used to illustrate the binomial approach and show how sample size estimates from the random effects model can vary substantially depending on estimated variance components. For the binomial approach, probability calculations show that a sample size of 35 to 40 yields acceptable error rates under different null and alternative hypotheses. For the random effects model, the required sample sizes are generally smaller, but can vary substantially based on the estimated variance components. Overall, despite some limitations, the binomial approach represents a highly practical approach with reasonable statistical properties.
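
    A minimal sketch of the kind of binomial search described above: scan panel sizes and passing cutoffs, computing the type I error (a poorly fitting respirator passes the panel) and type II error (a well fitting respirator fails) from binomial tail probabilities. The per-subject passing probabilities p0 and p1 and the error limits are illustrative assumptions, so the result is not expected to reproduce the 35 to 40 figure exactly.

```python
# Minimal sketch of the binomial search: smallest panel size and passing cutoff
# meeting both error limits. p0, p1 and the limits are illustrative assumptions.
from scipy.stats import binom

def find_panel(p0=0.60, p1=0.90, max_alpha=0.05, max_beta=0.10, n_max=60):
    """Return (n, cutoff, type I error, type II error) for the smallest panel size."""
    for n in range(5, n_max + 1):
        for c in range(1, n + 1):
            type1 = 1 - binom.cdf(c - 1, n, p0)   # P(at least c of n pass | poor-fit probability p0)
            type2 = binom.cdf(c - 1, n, p1)       # P(fewer than c pass | good-fit probability p1)
            if type1 <= max_alpha and type2 <= max_beta:
                return n, c, type1, type2
    return None

print(find_panel())
```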

  3. Sample size and power calculations based on generalized linear mixed models with correlated binary outcomes.

    Science.gov (United States)

    Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R

    2008-08-01

    The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.

  4. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    Science.gov (United States)

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ–Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
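
    The Jensen's Inequality point can be illustrated with a toy simulation: even when survival rates are estimated without bias from binomial counts, λ, being a nonlinear function of the vital rates, is biased at small sample sizes. The two-stage projection matrix and its rates below are invented for illustration and are unrelated to the study's data.

```python
# Toy illustration of the Jensen's Inequality effect: unbiased binomial estimates
# of survival still give biased estimates of lambda because lambda is a nonlinear
# function of the vital rates. The two-stage matrix and rates are invented.
import numpy as np

rng = np.random.default_rng(3)
s_juv, s_adult, fecundity = 0.5, 0.8, 1.2           # "true" vital rates

def growth_rate(sj, sa, f):
    A = np.array([[0.0, f], [sj, sa]])              # two-stage projection matrix
    return np.max(np.real(np.linalg.eigvals(A)))    # dominant eigenvalue = lambda

true_lambda = growth_rate(s_juv, s_adult, fecundity)
for n in (10, 25, 50, 100, 500):
    est = [growth_rate(rng.binomial(n, s_juv) / n, rng.binomial(n, s_adult) / n, fecundity)
           for _ in range(5000)]
    print(f"n={n:4d}  mean lambda-hat = {np.mean(est):.3f}   (true lambda = {true_lambda:.3f})")
```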

  5. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  6. New shooting algorithms for transition path sampling: centering moves and varied-perturbation sizes for improved sampling.

    Science.gov (United States)

    Rowley, Christopher N; Woo, Tom K

    2009-12-21

    Transition path sampling has been established as a powerful tool for studying the dynamics of rare events. The trajectory generation moves of this Monte Carlo procedure, shooting moves and shifting modes, were developed primarily for rate constant calculations, although this method has been more extensively used to study the dynamics of reactive processes. We have devised and implemented three alternative trajectory generation moves for use with transition path sampling. The centering-shooting move incorporates a shifting move into a shooting move, which centers the transition period in the middle of the trajectory, eliminating the need for shifting moves and generating an ensemble where the transition event consistently occurs near the middle of the trajectory. We have also developed varied-perturbation size shooting moves, wherein smaller perturbations are made if the shooting point is far from the transition event. The trajectories generated using these moves decorrelate significantly faster than with conventional, constant sized perturbations. This results in an increase in the statistical efficiency by a factor of 2.5-5 when compared to the conventional shooting algorithm. On the other hand, the new algorithm breaks detailed balance and introduces a small bias in the transition time distribution. We have developed a modification of this varied-perturbation size shooting algorithm that preserves detailed balance, albeit at the cost of decreased sampling efficiency. Both varied-perturbation size shooting algorithms are found to have improved sampling efficiency when compared to the original constant perturbation size shooting algorithm.

  7. Empirically determining the sample size for large-scale gene network inference algorithms.

    Science.gov (United States)

    Altay, G

    2012-04-01

    The performance of genome-wide gene regulatory network inference algorithms depends on the sample size. It is generally considered that the larger the sample size, the better the gene network inference performance. Nevertheless, there is not adequate information on determining the sample size for optimal performance. In this study, the author systematically demonstrates the effect of sample size on information-theory-based gene network inference algorithms with an ensemble approach. The empirical results showed that the inference performances of the considered algorithms tend to converge after a particular sample size region. As a specific example, a sample size region around ≃64 is sufficient to obtain most of the inference performance with respect to precision using the representative algorithm C3NET on synthetic steady-state data sets of Escherichia coli and a time-series data set of Homo sapiens subnetworks. The author verified the convergence result on a large, real data set of E. coli as well. The results give evidence to biologists to better design experiments to infer gene networks. Further, the effect of cutoff on inference performances over various sample sizes is considered. [Includes supplementary material].

  8. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  9. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land use/land cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is often defined by approximation with a binomial function, without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. In case of divergence between the estimated and the a priori accuracy, the sampling error will deviate from the expected error. Determining the sample size from a pilot sample (the theoretically correct procedure) is justified when no prior estimate of accuracy is available for the study area, given the intended use of the remote sensing product.

  10. A behavioural Bayes approach to the determination of sample size for clinical trials considering efficacy and safety: imbalanced sample size in treatment groups.

    Science.gov (United States)

    Kikuchi, Takashi; Gittins, John

    2011-08-01

    The behavioural Bayes approach to sample size determination for clinical trials assumes that the number of subsequent patients switching to a new drug from the current drug depends on the strength of the evidence for efficacy and safety that was observed in the clinical trials. The optimal sample size is the one which maximises the expected net benefit of the trial. The approach has been developed in a series of papers by Pezeshk and the present authors (Gittins JC, Pezeshk H. A behavioral Bayes method for determining the size of a clinical trial. Drug Information Journal 2000; 34: 355-63; Gittins JC, Pezeshk H. How Large should a clinical trial be? The Statistician 2000; 49(2): 177-87; Gittins JC, Pezeshk H. A decision theoretic approach to sample size determination in clinical trials. Journal of Biopharmaceutical Statistics 2002; 12(4): 535-51; Gittins JC, Pezeshk H. A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 2002; 36: 143-50; Kikuchi T, Pezeshk H, Gittins J. A Bayesian cost-benefit approach to the determination of sample size in clinical trials. Statistics in Medicine 2008; 27(1): 68-82; Kikuchi T, Gittins J. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety. Statistics in Medicine 2009; 28(18): 2293-306; Kikuchi T, Gittins J. A Bayesian procedure for cost-benefit evaluation of a new drug in multi-national clinical trials. Statistics in Medicine 2009 (Submitted)). The purpose of this article is to provide a rationale for experimental designs which allocate more patients to the new treatment than to the control group. The model uses a logistic weight function, including an interaction term linking efficacy and safety, which determines the number of patients choosing the new drug, and hence the resulting benefit. A Monte Carlo simulation is employed for the calculation. Having a larger group of patients on the new drug in general

  11. New method to estimate the sample size for calculation of a proportion assuming binomial distribution.

    Science.gov (United States)

    Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio

    2013-10-01

    Nowadays the formula to calculate the sample size for estimating a proportion (such as a prevalence) is based on the Normal distribution; however, it should be based on a Binomial distribution, whose confidence interval can be calculated using the Wilson Score method. Comparing the two formulae (Normal and Binomial distributions), the difference in the width of the confidence intervals is relevant in the tails and at the center of the curves. In order to calculate the needed sample size, we simulated an iterative sampling procedure, which shows an underestimation of the sample size for values of prevalence close to 0 or 1, and an overestimation for values close to 0.5. Based on these results, we propose an algorithm based on the Wilson Score method that provides sample size values similar to those obtained empirically by simulation. Copyright © 2013 Elsevier Ltd. All rights reserved.
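
    A hedged sketch of the underlying idea: search for the smallest n at which the Wilson score interval around the expected prevalence reaches the desired half-width, and compare it with the usual Normal-approximation formula. This is a direct search rather than the iterative sampling algorithm of the paper.

```python
# Hedged sketch: smallest n for which the Wilson score interval around an expected
# prevalence has the desired half-width, compared with the Normal-approximation n.
from math import sqrt, ceil
from scipy.stats import norm

def wilson_halfwidth(p, n, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))

def n_wilson(p, d=0.05, conf=0.95, n_max=100000):
    for n in range(4, n_max):
        if wilson_halfwidth(p, n, conf) <= d:
            return n
    return None

for p in (0.02, 0.10, 0.50):
    n_normal = ceil(norm.ppf(0.975) ** 2 * p * (1 - p) / 0.05 ** 2)
    print(f"p={p:.2f}  Normal-based n={n_normal:4d}  Wilson-based n={n_wilson(p):4d}")
```

    Running this shows the pattern reported above: near the tails the Normal-approximation formula under-sizes the study relative to the Wilson-based requirement, while around p = 0.5 it slightly over-sizes it.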

  12. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M=3.62years) examined the conditions that enable young children to use the sample size principle (SSP) of induction-the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Sample size calculations in clinical research should also be based on ethical principles.

    Science.gov (United States)

    Cesana, Bruno Mario; Antonelli, Paolo

    2016-03-18

    Sample size calculations based on too narrow a width, or with lower and upper confidence limits bounded by fixed cut-off points, not only increase power-based sample sizes to ethically unacceptable levels (thus making research practically unfeasible) but also greatly increase the costs and burdens of clinical trials. We propose an alternative method of combining the power of a statistical test and the probability of obtaining adequate precision (the power of the confidence interval) with an acceptable increase in power-based sample sizes.

  14. An Update on Using the Range to Estimate σ When Determining Sample Sizes.

    Science.gov (United States)

    Rhiel, George Steven; Markowski, Edward

    2017-04-01

    In this research, we develop a strategy for using a range estimator of σ when determining a sample size for estimating a mean. Previous research by Rhiel is extended to provide dn values for use in calculating a range estimate of σ when working with sampling frames up to size 1,000,000. This allows the use of the range estimator of σ with "big data." A strategy is presented for using the range estimator of σ for determining sample sizes based on the dn values developed in this study.
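
    An illustrative sketch of the strategy, under stated assumptions: estimate σ from the sample range via a d_n constant (approximated here by Monte Carlo as the expected range of a standard normal sample of the frame size), then plug that estimate into the usual sample size formula for a mean. The d_n values tabulated in the article are not reproduced here.

```python
# Illustrative sketch (not the article's d_n table): estimate sigma from the range
# via a Monte Carlo approximation of d_n, then size the sample for estimating a mean.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def d_n(n, reps=5000):
    """Expected range of a standard normal sample of size n (Monte Carlo approximation)."""
    draws = rng.standard_normal((reps, n))
    return float((draws.max(axis=1) - draws.min(axis=1)).mean())

def sample_size_from_range(data_range, frame_size, margin, conf=0.95):
    sigma_hat = data_range / d_n(frame_size)               # range estimate of sigma
    z = norm.ppf(1 - (1 - conf) / 2)
    return int(np.ceil((z * sigma_hat / margin) ** 2))     # n to estimate the mean within +/- margin

# e.g. values spanning 40 units in a sampling frame of 1,000 records, +/- 2 precision
print(sample_size_from_range(data_range=40, frame_size=1000, margin=2))
```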

  15. Size-fractionated measurement of coarse black carbon particles in deposition samples

    Science.gov (United States)

    Schultz, E.

    In a 1-year field study, particle deposition flux was measured by transparent collection plates. Particle concentration was simultaneously measured with a cascade impactor. Microscopic evaluation of deposition samples provided the discrimination of translucent (mineral or biological) and black carbon particles, i.e. soot agglomerates, fly-ash cenospheres and rubber fragments in the size range from 3 to 50 μm. The deposition samples were collected in two different sampling devices. A wind- and rain-shielded measurement was achieved in the Sigma-2 device. Dry deposition data from this device were used to calculate mass concentrations of the translucent and the black particle fraction separately, approximating particle deposition velocity by Stokes' settling velocity. In mass calculations an error up to 20% has to be considered due to assumed spherical shape and unit density for all particles. Within the limitations of these assumptions, deposition velocities of the distinguished coarse particles were calculated. The results for total particulate matter in this range are in good agreement with those from impactor measurement. The coarse black carbon fraction shows a reduced deposition velocity in comparison with translucent particles. The deviation depends on precipitation amount. Further measurements and structural investigations of black carbon particles are in preparation to verify these results.

  16. Publishing nutrition research: a review of sampling, sample size, statistical analysis, and other key elements of manuscript preparation, Part 2.

    Science.gov (United States)

    Boushey, Carol J; Harris, Jeffrey; Bruemmer, Barbara; Archer, Sujata L

    2008-04-01

    Members of the Board of Editors recognize the importance of providing a resource for researchers to ensure quality and accuracy of reporting in the Journal. This second monograph of a periodic series focuses on study sample selection, sample size, and common statistical procedures using parametric methods, and the presentation of statistical methods and results. Attention to sample selection and sample size is critical to avoid study bias. When outcome variables adhere to a normal distribution, then parametric procedures can be used for statistical inference. Documentation that clearly outlines the steps used in the research process will advance the science of evidence-based practice in nutrition and dietetics. Real examples from problem sets and published literature are provided, as well as reference to books and online resources.

  17. A multi-cyclone sampling array for the collection of size-segregated occupational aerosols.

    Science.gov (United States)

    Mischler, Steven E; Cauda, Emanuele G; Di Giuseppe, Michelangelo; Ortiz, Luis A

    2013-01-01

    In this study a serial multi-cyclone sampling array capable of simultaneously sampling particles of multiple size fractions, from an occupational environment, for use in in vivo and in vitro toxicity studies and physical/chemical characterization, was developed and tested. This method is an improvement over current methods used to size-segregate occupational aerosols for characterization, due to its simplicity and its ability to collect sufficient masses of nano- and ultrafine sized particles for analysis. This method was evaluated in a chamber providing a uniform atmosphere of dust concentrations using crystalline silica particles. The multi-cyclone sampling array was used to segregate crystalline silica particles into four size fractions, from a chamber concentration of 10 mg/m(3). The size distributions of the particles collected at each stage were confirmed, in the air, before and after each cyclone stage. Once collected, the particle size distribution of each size fraction was measured using light scattering techniques to further confirm the size distributions. As a final confirmation, scanning electron microscopy was used to collect images of each size fraction. The results presented here, using multiple measurement techniques, show that this multi-cyclone system was able to successfully collect distinct size-segregated particles at sufficient masses to perform toxicological evaluations and physical/chemical characterization.

  18. Mineralogical, optical, geochemical, and particle size properties of four sediment samples for optical physics research

    Science.gov (United States)

    Bice, K.; Clement, S. C.

    1981-01-01

    X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.

  19. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology.

    Science.gov (United States)

    Brown, Caleb Marshall; Vavrek, Matthew J

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes.

  20. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  1. Estimating effective population size from temporally spaced samples with a novel, efficient maximum-likelihood algorithm.

    Science.gov (United States)

    Hui, Tin-Yu J; Burt, Austin

    2015-05-01

    The effective population size Ne is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating Ne have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator of contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying Ne is large. This article tries to work around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of Ne to several million, hence allowing the estimation of larger Ne. Finally, we demonstrate how this algorithm can cope with nonconstant Ne scenarios and be used as a likelihood-ratio test to test for the equality of Ne throughout the sampling horizon. An R package "NB" is now available for download to implement the method described in this article. Copyright © 2015 by the Genetics Society of America.

  2. Sample Size Determination in a Chi-Squared Test Given Information from an Earlier Study.

    Science.gov (United States)

    Gillett, Raphael

    1996-01-01

    A rigorous method is outlined for using information from a previous study and explicitly taking into account the variability of an effect size estimate when determining sample size for a chi-squared test. This approach assures that the average power of all experiments in a discipline attains the desired level. (SLD)

  3. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

    Science.gov (United States)

    Schoeneberger, Jason A.

    2016-01-01

    The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

  4. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    ... in eight plant communities in the Nylsvley Nature Reserve. Illustrates with a table. Keywords: Botanical surveys; Grass density; Grasslands; Mixed Bushveld; Nylsvley Nature Reserve; Quadrat size species density; Small-quadrat method; Species density; Species richness; botany; sample size; method; survey; south africa

  5. Sample size calculations in clinical research should also be based on ethical principles

    OpenAIRE

    Cesana, Bruno Mario; Antonelli, Paolo

    2016-01-01

    Sample size calculations based on too narrow a width, or with lower and upper confidence limits bounded by fixed cut-off points, not only increase power-based sample sizes to ethically unacceptable levels (thus making research practically unfeasible) but also greatly increase the costs and burdens of clinical trials. We propose an alternative method of combining the power of a statistical test and the probability of obtaining adequate precision (the power of the confidence interval) with an a...

  6. OPTIMAL SAMPLE SIZE FOR STATISTICAL ANALYSIS OF WINTER WHEAT QUANTITATIVE TRAITS

    OpenAIRE

    Andrijana Eđed; Dražen Horvat; Zdenko Lončarić

    2009-01-01

    In the planning phase of every research project, particular attention should be dedicated to the estimation of the optimal sample size, aiming to obtain more precise and objective results of statistical analysis. The aim of this paper was to estimate the optimal sample size for wheat yield components (plant height, spike length, number of spikelets per spike, number of grains per spike, weight of grains per spike and 1000-grain weight) for the determination of statistically significant differences between two treatme…

  7. Evaluation of different sized blood sampling tubes for thromboelastometry, platelet function, and platelet count

    DEFF Research Database (Denmark)

    Andreasen, Jo Bønding; Pistor-Riebold, Thea Unger; Knudsen, Ingrid Hell

    2014-01-01

    Background: To minimise the volume of blood used for diagnostic procedures, especially in children, we investigated whether the size of sample tubes affected whole blood coagulation analyses. Methods: We included 20 healthy individuals for rotational thromboelastometry (RoTEM®) analyses and compa…

  8. Sample size for equivalence trials: a case study from a vaccine lot consistency trial.

    Science.gov (United States)

    Ganju, Jitendra; Izu, Allen; Anemona, Alessandra

    2008-08-30

    For some trials, simple but subtle assumptions can have a profound impact on the size of the trial. A case in point is a vaccine lot consistency (or equivalence) trial. Standard sample size formulas used for designing lot consistency trials rely on only one component of variation, namely, the variation in antibody titers within lots. The other component, the variation in the means of titers between lots, is assumed to be equal to zero. In reality, some amount of variation between lots, however small, will be present even under the best manufacturing practices. Using data from a published lot consistency trial, we demonstrate that when the between-lot variation is only 0.5 per cent of the total variation, the increase in the sample size is nearly 300 per cent when compared with the size assuming that the lots are identical. The increase in the sample size is so pronounced that in order to maintain power one is led to consider a less stringent criterion for demonstration of lot consistency. The appropriate sample size formula that is a function of both components of variation is provided. We also discuss the increase in the sample size due to correlated comparisons arising from three pairs of lots as a function of the between-lot variance.

  9. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  10. A margin based approach to determining sample sizes via tolerance bounds.

    Energy Technology Data Exchange (ETDEWEB)

    Newcomer, Justin T.; Freeland, Katherine Elizabeth

    2013-09-01

    This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
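
    A minimal sketch of the kind of calculation implied above (assumptions: normally distributed data, a one-sided tolerance bound, and a planning value for the standard deviation; the coverage, confidence and margin values are hypothetical, and this is an illustration rather than the report's own algorithm): find the smallest n for which the tolerance-bound k-factor times the planning sigma stays inside the margin.

    from math import sqrt
    from scipy.stats import norm, nct

    def k_factor(n, coverage=0.95, conf=0.95):
        # Exact one-sided normal tolerance factor via the noncentral t distribution.
        return nct.ppf(conf, df=n - 1, nc=norm.ppf(coverage) * sqrt(n)) / sqrt(n)

    def required_n(margin, sigma_plan, coverage=0.95, conf=0.95, n_max=1000):
        # Smallest n whose tolerance bound (k * sigma) is smaller than the available margin.
        for n in range(3, n_max + 1):
            if k_factor(n, coverage, conf) * sigma_plan < margin:
                return n
        return None

    print(required_n(margin=3.0, sigma_plan=1.0))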

  11. Sample size calculation for differential expression analysis of RNA-seq data under Poisson distribution.

    Science.gov (United States)

    Li, Chung-I; Su, Pei-Fang; Guo, Yan; Shyr, Yu

    2013-01-01

    Sample size determination is an important issue in the experimental design of biomedical research. Because of the complexity of RNA-seq experiments, however, the field currently lacks a sample size method widely applicable to differential expression studies utilising RNA-seq technology. In this report, we propose several methods for sample size calculation for single-gene differential expression analysis of RNA-seq data under Poisson distribution. These methods are then extended to multiple genes, with consideration for addressing the multiple testing problem by controlling false discovery rate. Moreover, most of the proposed methods allow for closed-form sample size formulas with specification of the desired minimum fold change and minimum average read count, and thus are not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size formulas are presented; the results indicate that our methods work well, with achievement of desired power. Finally, our sample size calculation methods are applied to three real RNA-seq data sets.
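
    As a hedged sketch of a single-gene calculation of this kind (a generic Wald-type approximation on the log fold change, not necessarily the exact formula proposed in the paper; mu0 is the average read count in the reference group and rho the fold change, with hypothetical values below):

    from math import log
    from scipy.stats import norm

    def n_per_group(mu0, rho, alpha=0.05, power=0.8):
        # Per-group sample size for comparing two Poisson means mu0 and rho*mu0
        # with a Wald test on the log fold change.
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z**2 * (1 / mu0 + 1 / (rho * mu0)) / log(rho) ** 2

    print(n_per_group(mu0=5, rho=1.5))              # single gene at the nominal alpha
    print(n_per_group(mu0=5, rho=1.5, alpha=1e-4))  # much stricter alpha, as when many genes are tested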

  12. Small portion sizes in worksite cafeterias: do they help consumers to reduce their food intake?

    NARCIS (Netherlands)

    Vermeer, W.M.; Steenhuis, I.H.M.; Leeuwis, F.H.; Heijmans, M.W.; Seidell, J.C.

    2011-01-01

    Background: Environmental interventions directed at portion size might help consumers to reduce their food intake. Objective: To assess whether offering a smaller hot meal, in addition to the existing size, stimulates people to replace their large meal with a smaller meal. Design: Longitudinal randomized

  13. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    Science.gov (United States)

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because the re-estimation procedure in general requires an overestimation of the variance, and thus of the sample size, to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Optimal sample sizes for Welch's test under various allocation and cost considerations.

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2011-12-01

    The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determinations in the context of Welch's (Biometrika, 29, 350-362, 1938) test of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power performance for a fixed cost and (2) to meet a designated power level for the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
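
    A brief sketch of the classical cost-constrained allocation that underlies such procedures (hedged: this uses a simple normal approximation to the power rather than the exact Welch-based techniques of the article; the budget, unit costs, standard deviations and effect below are hypothetical):

    from math import sqrt
    from scipy.stats import norm

    def power_for_budget(budget, delta, s1, s2, c1, c2, alpha=0.05):
        # Classical result: for a fixed budget, allocating n1/n2 = (s1/s2)*sqrt(c2/c1)
        # minimises the variance of the estimated mean difference.
        r = (s1 / s2) * sqrt(c2 / c1)   # optimal ratio n1/n2
        n2 = budget / (c1 * r + c2)     # spend the whole budget
        n1 = r * n2
        se = sqrt(s1**2 / n1 + s2**2 / n2)
        approx_power = norm.cdf(abs(delta) / se - norm.ppf(1 - alpha / 2))
        return approx_power, (n1, n2)

    print(power_for_budget(budget=3000, delta=0.5, s1=1.0, s2=2.0, c1=10, c2=20))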

  15. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    Science.gov (United States)

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  16. SMALL SAMPLE SIZE IN 2X2 CROSS OVER DESIGNS: CONDITIONS OF DETERMINATION

    Directory of Open Access Journals (Sweden)

    B SOLEYMANI

    2001-09-01

    Introduction: Determination of small sample sizes in some clinical trials is a matter of importance, and in cross-over studies, which are one type of clinical trial, the matter is even more significant. In this article, the conditions under which determination of a small sample size in cross-over studies is possible were considered, and the effect of deviation from normality on the matter is shown. Methods: The present study was done on 2x2 cross-over studies in which the variable of interest is quantitative and measurable on a ratio or interval scale. The method of consideration is based on the distributions of the variable and the sample mean, the central limit theorem, the method of sample size determination in two groups, and the cumulant or moment generating function. Results: For normal variables, or variables transformable to normal, there are no restricting factors other than the significance level and the power of the test for determination of sample size; but in the case of non-normal variables, the sample size should be determined large enough to guarantee the normality of the sample mean's distribution. Discussion: In cross-over studies where, because of an existing theoretical basis, few samples can be computed, one should not do so without taking the applied worth of the results into consideration. While determining sample size, in addition to the variance, it is necessary to consider the distribution of the variable, particularly through its skewness and kurtosis coefficients; the greater the deviation from normality, the greater the number of samples needed. Since in medical studies most continuous variables are close to normally distributed, a small number of samples often seems to be adequate for convergence of the sample mean to the normal distribution.

  17. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  18. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  19. n4Studies: Sample Size Calculation for an Epidemiological Study on a Smart Device

    Directory of Open Access Journals (Sweden)

    Chetta Ngamjarus

    2016-05-01

    Objective: The aim of this study was to develop a sample size application (called “n4Studies”) for free use on iPhone and Android devices and to compare sample size functions between n4Studies and other applications and software. Methods: The Objective-C programming language was used to create the application for the iPhone OS (operating system), while JavaScript, jQuery Mobile, PhoneGap and jStat were used to develop it for Android phones. Other sample size applications were searched from the Apple App Store and Google Play store. The applications' characteristics and sample size functions were collected. Spearman's rank correlation was used to investigate the relationship between the number of sample size functions and price. Results: “n4Studies” provides several functions for sample size and power calculations for various epidemiological study designs. It can be downloaded from the Apple App Store and Google Play store. Comparing n4Studies with other applications, it covers several more types of epidemiological study designs, and gives similar results to GRANMO for estimation of infinite/finite population means and infinite/finite proportions, to BioStats for comparing two independent means, and to the EpiCal application for comparing two independent proportions. When using the same parameters, n4Studies gives similar results to STATA, the epicalc package in R, PS, G*Power, and OpenEpi. Conclusion: “n4Studies” can be an alternative tool for calculating the sample size. It may be useful to students, lecturers and researchers in conducting their research projects.

  20. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  1. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the underlying population with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Projects of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002 and III-43007]

  2. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

    Science.gov (United States)

    Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

    2013-08-01

    Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
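
    The scaling stated in the abstract can be written down directly. A hedged sketch (an approximation consistent with that description rather than a transcription of the authors' formula; beta is the causal effect of the exposure on the outcome, rho_gx the instrument-exposure correlation, and the example values are hypothetical):

    from scipy.stats import norm

    def mr_sample_size(beta, rho_gx, var_x=1.0, var_res=1.0, alpha=0.05, power=0.8):
        # n grows with the residual outcome variance and shrinks with beta^2 and
        # with the squared instrument-exposure correlation.
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z**2 * var_res / (beta**2 * rho_gx**2 * var_x)

    # Causal effect of 0.2 SD per SD of exposure, instrument explaining 1% of exposure variance
    print(mr_sample_size(beta=0.2, rho_gx=0.1))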

  3. Sample size estimates for determining treatment effects in high-risk patients with early relapsing-remitting multiple sclerosis.

    Science.gov (United States)

    Scott, Thomas F; Schramke, Carol J; Cutter, Gary

    2003-06-01

    Risk factors for short-term progression in early relapsing-remitting MS have been identified recently. Previously we determined potential risk factors for rapid progression of early relapsing-remitting MS and identified three groups of high-risk patients. These non-mutually exclusive groups of patients were drawn from a consecutively studied sample of 98 patients with newly diagnosed MS. High-risk patients had a history of either poor recovery from initial attacks, more than two attacks in the first two years of disease, or a combination of at least four other risk factors. The objective was to determine differences in sample sizes required to show a meaningful treatment effect when using a high-risk sample versus a random sample of patients. Power analyses were used to calculate the different sample sizes needed for hypothetical treatment trials. We found that substantially smaller numbers of patients should be needed to show a significant treatment effect by employing these high-risk groups of patients as compared to a random population of MS patients (e.g., a 58% reduction in sample size in one model). The use of patients at higher risk of progression to perform drug treatment trials can be considered as a means to reduce the number of patients needed to show a significant treatment effect for patients with very early MS.

  4. [Explanation of samples sizes in current biomedical journals: an irrational requirement].

    Science.gov (United States)

    Silva Ayçaguer, Luis Carlos; Alonso Galbán, Patricia

    2013-01-01

    To discuss the theoretical relevance of current requirements for explanations of the sample sizes employed in published studies, and to assess the extent to which these requirements are currently met by authors and demanded by referees and editors. A literature review was conducted to gain insight into and critically discuss the possible rationale underlying the requirement of justifying sample sizes. A descriptive bibliometric study was then carried out based on the original studies published in the six journals with the highest impact factor in the field of health in 2009. All the arguments used to support the requirement of an explanation of sample sizes are feeble, and there are several reasons why they should not be endorsed. These instructions are neglected in most of the studies published in the current literature with the highest impact factor. In 56% (95%CI: 52-59) of the articles, the sample size used was not substantiated, and only 27% (95%CI: 23-30) met all the requirements contained in the guidelines adhered to by the journals studied. Based on this study, we conclude that there are no convincing arguments justifying the requirement for an explanation of how the sample size was reached in published articles. There is no sound basis for this requirement, which not only does not promote the transparency of research reports but rather contributes to undermining it. Copyright © 2011 SESPAS. Published by Elsevier Espana. All rights reserved.

  5. Sample Size for Assessing Agreement between Two Methods of Measurement by Bland-Altman Method.

    Science.gov (United States)

    Lu, Meng-Jie; Zhong, Wei-Hua; Liu, Yu-Xiu; Miao, Hua-Zhang; Li, Yong-Chang; Ji, Mu-Huo

    2016-11-01

    The Bland-Altman method has been widely used for assessing agreement between two methods of measurement. However, sample size estimation for this method has remained an unsolved issue. We propose a new method of sample size estimation for Bland-Altman agreement assessment. According to the Bland-Altman method, the conclusion on agreement is made based on the width of the confidence interval for the LOAs (limits of agreement) in comparison to a predefined clinical agreement limit. Under the theory of statistical inference, formulae for sample size estimation are derived, which depend on the pre-determined levels of α and β, the mean and the standard deviation of the differences between the two measurements, and the predefined limits. With this new method, sample sizes are calculated under different parameter settings which occur frequently in method comparison studies, and Monte Carlo simulation is used to obtain the corresponding powers. The results of the Monte Carlo simulation showed that the achieved powers coincide with the pre-determined level of power, thus validating the correctness of the method. The method of sample size estimation can be applied in the Bland-Altman method to assess agreement between two methods of measurement.
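
    A hedged Monte Carlo sketch of the logic described above (it searches for the smallest n at which the confidence limits of the LOAs fall inside a predefined clinical agreement limit with the desired probability; the SE formula is the usual Bland-Altman approximation and the numerical inputs are hypothetical, so this is an illustration rather than the authors' closed-form method):

    import numpy as np
    from scipy.stats import norm, t

    def ba_power(n, mu, sd, delta, alpha=0.05, sims=2000, seed=1):
        # Probability that both LOA confidence limits stay within +/- delta.
        rng = np.random.default_rng(seed)
        z = norm.ppf(0.975)
        d = rng.normal(mu, sd, size=(sims, n))
        m, s = d.mean(axis=1), d.std(axis=1, ddof=1)
        se_loa = s * np.sqrt(1 / n + z**2 / (2 * (n - 1)))   # approximate SE of a LOA
        tcrit = t.ppf(1 - alpha / 2, n - 1)
        inside = (m + z * s + tcrit * se_loa < delta) & (m - z * s - tcrit * se_loa > -delta)
        return inside.mean()

    def smallest_n(mu, sd, delta, target=0.8):
        return next(n for n in range(10, 500) if ba_power(n, mu, sd, delta) >= target)

    print(smallest_n(mu=0.0, sd=1.0, delta=2.5))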

  6. Exploratory factor analysis with small sample sizes: a comparison of three approaches.

    Science.gov (United States)

    Jung, Sunho

    2013-07-01

    Exploratory factor analysis (EFA) has emerged in the field of animal behavior as a useful tool for determining and assessing latent behavioral constructs. Because the small sample size problem often occurs in this field, a traditional approach, unweighted least squares, has been considered the most feasible choice for EFA. Two new approaches were recently introduced in the statistical literature as viable alternatives to EFA when sample size is small: regularized exploratory factor analysis and generalized exploratory factor analysis. A simulation study is conducted to evaluate the relative performance of these three approaches in terms of factor recovery under various experimental conditions of sample size, degree of overdetermination, and level of communality. In this study, overdetermination and sample size are the meaningful conditions in differentiating the performance of the three approaches in factor recovery. Specifically, when there are a relatively large number of factors, regularized exploratory factor analysis tends to recover the correct factor structure better than the other two approaches. Conversely, when few factors are retained, unweighted least squares tends to recover the factor structure better. Finally, generalized exploratory factor analysis exhibits very poor performance in factor recovery compared to the other approaches. This tendency is particularly prominent as sample size increases. Thus, generalized exploratory factor analysis may not be a good alternative to EFA. Regularized exploratory factor analysis is recommended over unweighted least squares unless small expected number of factors is ensured. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    Science.gov (United States)

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging from 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.

  8. Reduced body size and cub recruitment in polar bears associated with sea ice decline

    Science.gov (United States)

    Rode, K.D.; Amstrup, Steven C.; Regehr, E.V.

    2010-01-01

    Rates of reproduction and survival are dependent upon adequate body size and condition of individuals. Declines in size and condition have provided early indicators of population decline in polar bears (Ursus maritimus) near the southern extreme of their range. We tested whether patterns in body size, condition, and cub recruitment of polar bears in the southern Beaufort Sea of Alaska were related to the availability of preferred sea ice habitats and whether these measures and habitat availability exhibited trends over time, between 1982 and 2006. The mean skull size and body length of all polar bears over three years of age declined over time, corresponding with long-term declines in the spatial and temporal availability of sea ice habitat. Body size of young, growing bears declined over time and was smaller after years when sea ice availability was reduced. Reduced litter mass and numbers of yearlings per female following years with lower availability of optimal sea ice habitat, suggest reduced reproductive output and juvenile survival. These results, based on analysis of a long-term data set, suggest that declining sea ice is associated with nutritional limitations that reduced body size and reproduction in this population. © 2010 by the Ecological Society of America.

  9. Reduced body size and cub recruitment in polar bears associated with sea ice decline.

    Science.gov (United States)

    Rode, Karyn D; Amstrup, Steven C; Regehr, Eric V

    2010-04-01

    Rates of reproduction and survival are dependent upon adequate body size and condition of individuals. Declines in size and condition have provided early indicators of population decline in polar bears (Ursus maritimus) near the southern extreme of their range. We tested whether patterns in body size, condition, and cub recruitment of polar bears in the southern Beaufort Sea of Alaska were related to the availability of preferred sea ice habitats and whether these measures and habitat availability exhibited trends over time, between 1982 and 2006. The mean skull size and body length of all polar bears over three years of age declined over time, corresponding with long-term declines in the spatial and temporal availability of sea ice habitat. Body size of young, growing bears declined over time and was smaller after years when sea ice availability was reduced. Reduced litter mass and numbers of yearlings per female following years with lower availability of optimal sea ice habitat, suggest reduced reproductive output and juvenile survival. These results, based on analysis of a long-term data set, suggest that declining sea ice is associated with nutritional limitations that reduced body size and reproduction in this population.

  10. Monte Carlo approaches for determining power and sample size in low-prevalence applications.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D; Wagner, Bruce A

    2007-11-15

    The prevalence of disease in many populations is often low. For example, the prevalence of tuberculosis, brucellosis, and bovine spongiform encephalopathy range from 1 per 100,000 to less than 1 per 1,000,000 in many countries. When an outbreak occurs, epidemiological investigations often require comparing the prevalence in an exposed population with that of an unexposed population. To determine if the level of disease in the two populations is significantly different, the epidemiologist must consider the test to be used, the desired power of the test, and the appropriate sample size for both the exposed and unexposed populations. Commonly available software packages provide estimates of the required sample sizes for this application. This study shows that these estimated sample sizes can exceed the necessary number of samples by more than 35% when the prevalence is low. We provide a Monte Carlo-based solution and show that in low-prevalence applications this approach can lead to reductions in the total sample size of more than 10,000 samples.
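
    A hedged sketch of a Monte Carlo power calculation in this spirit (Fisher's exact test comparing case counts in an exposed and an unexposed population; the prevalences and sample size below are hypothetical and the study's exact procedure may differ):

    import numpy as np
    from scipy.stats import fisher_exact

    def mc_power(n, p_exposed, p_unexposed, alpha=0.05, sims=2000, seed=7):
        # Fraction of simulated studies in which the prevalence difference is detected.
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(sims):
            a = rng.binomial(n, p_exposed)
            b = rng.binomial(n, p_unexposed)
            if fisher_exact([[a, n - a], [b, n - b]])[1] < alpha:
                hits += 1
        return hits / sims

    # Power for 50,000 animals per population at 5 vs. 1 cases per 100,000
    print(mc_power(50_000, 5e-5, 1e-5))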

  11. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
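
    A hedged sketch of the kind of numerical simulation such a tool automates (the category probabilities, attributes per grid and minimum count below are hypothetical): estimate the probability that every category reaches a minimum number of attributes for a given number of grids.

    import numpy as np

    def prob_saturation(n_grids, attrs_per_grid, probs, min_count=3, sims=2000, seed=3):
        # Probability that every category collects at least min_count attributes.
        rng = np.random.default_rng(seed)
        probs = np.asarray(probs)
        draws = rng.multinomial(n_grids * attrs_per_grid, probs, size=sims)
        return np.mean(np.all(draws >= min_count, axis=1))

    category_probs = [0.30, 0.25, 0.20, 0.10, 0.08, 0.05, 0.02]   # hypothetical frequencies
    for n in (10, 20, 40, 80):
        print(n, prob_saturation(n, attrs_per_grid=8, probs=category_probs))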

  12. [On the impact of sample size calculation and power in clinical research].

    Science.gov (United States)

    Held, Ulrike

    2014-10-01

    The aim of a clinical trial is to judge the efficacy of a new therapy or drug. In the planning phase of the study, the calculation of the necessary sample size is crucial in order to obtain a meaningful result. The study design, the expected treatment effect in outcome and its variability, power and level of significance are factors which determine the sample size. It is often difficult to fix these parameters prior to the start of the study, but related papers from the literature can be helpful sources for the unknown quantities. For scientific as well as ethical reasons it is necessary to calculate the sample size in advance in order to be able to answer the study question.

  13. Species-genetic diversity correlations in habitat fragmentation can be biased by small sample sizes.

    Science.gov (United States)

    Nazareno, Alison G; Jump, Alistair S

    2012-06-01

    Predicted parallel impacts of habitat fragmentation on genes and species lie at the core of conservation biology, yet tests of this rule are rare. In a recent article in Ecology Letters, Struebig et al. (2011) report that declining genetic diversity accompanies declining species diversity in tropical forest fragments. However, this study estimates diversity in many populations through extrapolation from very small sample sizes. Using the data of this recent work, we show that results estimated from the smallest sample sizes drive the species-genetic diversity correlation (SGDC), owing to a false-positive association between habitat fragmentation and loss of genetic diversity. Small sample sizes are a persistent problem in habitat fragmentation studies, the results of which often do not fit simple theoretical models. It is essential, therefore, that data assessing the proposed SGDC are sufficient in order that conclusions be robust.

  14. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for this method is important for health workforce planners to know if they want to apply it to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for different numbers of GPs included in the dataset and for different frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but the gain was smaller for the same additional number of GPs. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
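
    A hedged sketch of the trade-off described above: the CI for mean weekly hours combines between-GP variation with within-GP measurement fluctuation, and the latter shrinks as the number of measurements per GP grows. Both variance components below are hypothetical, not the study's estimates.

    from math import sqrt
    from scipy.stats import norm

    def ci_halfwidth(n_gps, k_per_gp, var_between=40.0, var_within=900.0, conf=0.95):
        # Half-width of the CI for the mean weekly working hours.
        z = norm.ppf(1 - (1 - conf) / 2)
        return z * sqrt(var_between / n_gps + var_within / (n_gps * k_per_gp))

    # 56 measurements = one per 3-h slot in a week; 168 = one per hour
    for n_gps, k in [(100, 56), (300, 56), (100, 168), (300, 168)]:
        print(n_gps, k, round(ci_halfwidth(n_gps, k), 2))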

  15. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    Science.gov (United States)

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.

  16. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample sizes and, in part, effect sizes predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  17. The effect of noise and sampling size on vorticity measurements in rotating fluids

    Science.gov (United States)

    Wong, Kelvin K. L.; Kelso, Richard M.; Mazumdar, Jagannath; Abbott, Derek

    2008-11-01

    This paper describes a new technique for presenting information based on given flow images. Using a multistep first order differentiation technique, we are able to map in two dimensions, vorticity of fluid within a region of investigation. We can then present the distribution of this property in space by means of a color intensity map. In particular, the state of fluid rotation can be displayed using maps of vorticity flow values. The framework that is implemented can also be used to quantify the vortices using statistical properties which can be derived from such vorticity flow maps. To test our methodology, we have devised artificial vortical flow fields using an analytical formulation of a single vortex. Reliability of vorticity measurement from our results shows that the size of flow vector sampling and noise in flow field affect the generation of vorticity maps. Based on histograms of these maps, we are able to establish an optimised configuration that computes vorticity fields to approximate the ideal vortex statistically. The novel concept outlined in this study can be used to reduce fluctuations of noise in a vorticity calculation based on imperfect flow information without excessive loss of its features, and thereby improves the effectiveness of flow

  18. Optimal sample size determinations from an industry perspective based on the expected value of information.

    Science.gov (United States)

    Willan, Andrew R

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as type I and II errors. As an alternative, taking a societal perspective, and using the expected value of information based on Bayesian decision theory, a number of authors have recently shown how to determine the sample size that maximizes the expected net gain, i.e., the difference between the cost of the trial and the value of the information gained from the results. Other authors have proposed Bayesian methods to determine sample sizes from an industry perspective. The purpose of this article is to propose a Bayesian approach to sample size calculations from an industry perspective that attempts to determine the sample size that maximizes expected profit. A model is proposed for expected total profit that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, discount rate, and the relationship between the results and the probability of regulatory approval. The expected value of information provided by trial data is related to the increase in expected profit from increasing the probability of regulatory approval. The methods are applied to an example, including an examination of robustness. The model is extended to consider market share as a function of observed treatment effect. The use of methods based on the expected value of information can provide, from an industry perspective, robust sample size solutions that maximize the difference between the expected cost of the trial and the expected value of information gained from the results. The method is only as good as the model for expected total profit. Although the model probably has all the right elements, it assumes that market share, per-patient profit, and incidence are insensitive to trial results. The method relies on the central limit theorem which assumes that the sample sizes involved ensure that the relevant test statistics

  19. Max control chart with adaptive sample sizes for jointly monitoring process mean and standard deviation

    OpenAIRE

    Ching Chun Huang

    2014-01-01

    This paper develops the two-state and three-state adaptive sample size control schemes based on the Max chart to simultaneously monitor the process mean and standard deviation. Since the Max chart is a single chart in which only one plotting statistic is needed, the design and operation of adaptive sample size schemes for this chart will be simpler than those for the joint X̄ and S charts. Three types of processes including on-target initial, off-target initial and steady...

  20. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  1. Collecting a better water-quality sample: Reducing vertical stratification bias in open and closed channels

    Science.gov (United States)

    Selbig, William R.

    2017-01-01

    Collection of water-quality samples that accurately characterize average particle concentrations and distributions in channels can be complicated by large sources of variability. The U.S. Geological Survey (USGS) developed a fully automated Depth-Integrated Sample Arm (DISA) as a way to reduce bias and improve accuracy in water-quality concentration data. The DISA was designed to integrate with existing autosampler configurations commonly used for the collection of water-quality samples in vertical profile, thereby providing a better representation of average suspended sediment and sediment-associated pollutant concentrations and distributions than traditional fixed-point samplers. In controlled laboratory experiments, known concentrations of suspended sediment ranging from 596 to 1,189 mg/L were injected into a 3-foot-diameter closed channel (circular pipe) with regulated flows ranging from 1.4 to 27.8 ft³/s. Median suspended sediment concentrations in water-quality samples collected using the DISA were within 7 percent of the known, injected value, compared to 96 percent for traditional fixed-point samplers. Field evaluation of this technology in open channel fluvial systems showed median differences between paired DISA and fixed-point samples to be within 3 percent. The range of particle size measured in the open channel was generally that of clay and silt. Differences between the concentration and distribution measured between the two sampler configurations could potentially be much larger in open channels that transport larger particles, such as sand.

  2. A simulation-based sample size calculation method for pre-clinical tumor xenograft experiments.

    Science.gov (United States)

    Wu, Jianrong; Yang, Shengping

    2017-04-07

    Pre-clinical tumor xenograft experiments usually require a small sample size that is rarely greater than 20, and data generated from such experiments very often do not have censored observations. Many statistical tests can be used for analyzing such data, but most of them were developed based on large sample approximation. We demonstrate that the type-I error rates of these tests can substantially deviate from the designated rate, especially when the data to be analyzed has a skewed distribution. Consequently, the sample size calculated based on these tests can be erroneous. We propose a modified signed log-likelihood ratio test (MSLRT) to meet the type-I error rate requirement for analyzing pre-clinical tumor xenograft data. The MSLRT has a consistent and symmetric type-I error rate that is very close to the designated rate for a wide range of sample sizes. By simulation, we generated a series of sample size tables based on scenarios commonly expected in tumor xenograft experiments, and we expect that these tables can be used as guidelines for making decisions on the numbers of mice used in tumor xenograft experiments.
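
    A hedged sketch of a simulation-based sample size search of this kind for small, skewed (lognormal) tumor-volume data. The MSLRT proposed in the paper is not available in scipy, so a Wilcoxon rank-sum test is used as a stand-in, and the effect size and spread below are hypothetical:

    import numpy as np
    from scipy.stats import mannwhitneyu

    def power(n_per_group, log_effect, sigma=0.8, alpha=0.05, sims=2000, seed=11):
        # Fraction of simulated experiments in which the group difference is detected.
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(sims):
            control = rng.lognormal(mean=0.0, sigma=sigma, size=n_per_group)
            treated = rng.lognormal(mean=-log_effect, sigma=sigma, size=n_per_group)
            if mannwhitneyu(control, treated, alternative="two-sided").pvalue < alpha:
                hits += 1
        return hits / sims

    for n in (6, 8, 10, 12, 15):
        print(n, power(n, log_effect=1.0))   # about a 63% reduction in median tumor volume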

  3. Reducing size-dispersion in one-pot aqueous synthesis of maghemite nanoparticles.

    Science.gov (United States)

    Drummond, A L; Feitoza, N C; Duarte, G C; Sales, M J A; Silva, L P; Chaker, J A; Bakuzis, A F; Sousa, M H

    2012-10-01

    Nanosized maghemite-like particles with a reduced size distribution were obtained using a one-pot synthesis route in aqueous medium. Forced hydrolysis of iron ions in ammoniacal solution led to the formation of magnetite nanoparticles that were oxidized to maghemite in a hydrothermal digestion step that reduced the polydispersity of the nanograins. The prepared nanoparticles were characterized by chemical analysis, X-ray diffractometry, magnetization, Raman spectroscopy and transmission electron microscopy measurements. The data showed that 14 nm-sized particles with a polydispersity of about 0.14 were produced and, differently from other procedures, neither additional steps nor toxic reagents were needed to reduce size dispersion or to oxidize magnetite to maghemite. These features make such nanoparticles a good potential choice for biomedical applications.

  4. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
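
    A sketch of the kind of size-dependent estimators discussed above (hedged: these follow the general form of the proposed formulas, but the original article should be consulted for the exact recommended expressions; the summary statistics below are hypothetical):

    from scipy.stats import norm

    def est_from_min_median_max(a, m, b, n):
        # Scenario with minimum a, median m, maximum b and sample size n.
        mean = (a + 2 * m + b) / 4
        sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
        return mean, sd

    def est_from_quartiles(q1, m, q3, n):
        # Scenario with first quartile, median, third quartile and sample size n.
        mean = (q1 + m + q3) / 3
        sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
        return mean, sd

    print(est_from_min_median_max(a=10, m=25, b=45, n=40))
    print(est_from_quartiles(q1=18, m=25, q3=33, n=40))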

  5. Sample size bounding and context ranking as approaches to the HRA data problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, Bernhard

    2004-02-01

    This paper presents a technique denoted as sub sample size bounding (SSSB) useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications for human reliability analysis (HRA) are emphasized in the presentation of the technique. Exemplified by a sample of 180 abnormal event sequences, it is outlined how SSSB can provide viable input for the quantification of errors of commission (EOCs)

  6. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)

  7. SAMPLE SIZE DETERMINATION IN NON-RADOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA

    OpenAIRE

    Faghihzadeh, S.; M. Rahgozar

    2003-01-01

    Introduction: In survival analysis, determination of a sufficient sample size to achieve suitable statistical power is important. In both parametric and non-parametric methods of classic statistics, random selection of samples is a basic condition. Practically, in most clinical trials and health surveys, random allocation is impossible. Fixed-effect multiple linear regression analysis covers this need, and this feature could be extended to survival regression analysis. This paper is the resul...

  8. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  9. Sample size calculations for clinical trials targeting tauopathies: A new potential disease target

    Science.gov (United States)

    Whitwell, Jennifer L.; Duffy, Joseph R.; Strand, Edythe A.; Machulda, Mary M.; Tosakulwong, Nirubol; Weigand, Stephen D.; Senjem, Matthew L.; Spychalla, Anthony J.; Gunter, Jeffrey L.; Petersen, Ronald C.; Jack, Clifford R.; Josephs, Keith A.

    2015-01-01

    Disease-modifying therapies are being developed to target tau pathology, and should, therefore, be tested in primary tauopathies. We propose that progressive apraxia of speech should be considered one such target group. In this study, we investigate potential neuroimaging and clinical outcome measures for progressive apraxia of speech and determine sample size estimates for clinical trials. We prospectively recruited 24 patients with progressive apraxia of speech who underwent two serial MRI scans with an interval of approximately two years. Detailed speech and language assessments included the Apraxia of Speech Rating Scale (ASRS) and Motor Speech Disorders (MSD) severity scale. Rates of ventricular expansion and rates of whole brain, striatal and midbrain atrophy were calculated. Atrophy rates across 38 cortical regions were also calculated and the regions that best differentiated patients from controls were selected. Sample size estimates required to power placebo-controlled treatment trials were calculated. The smallest sample size estimates were obtained with rates of atrophy of the precentral gyrus and supplementary motor area, with both measures requiring fewer than 50 subjects per arm to detect a 25% treatment effect with 80% power. These measures outperformed the other regional and global MRI measures and the clinical scales. Regional rates of cortical atrophy therefore provide the best outcome measures in progressive apraxia of speech. The small sample size estimates demonstrate feasibility for including progressive apraxia of speech in future clinical treatment trials targeting tau. PMID:26076744
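
    A hedged sketch of the standard two-sample calculation behind such estimates, in which a treatment is assumed to slow the annualised atrophy rate by a fraction f (the rate and standard deviation below are hypothetical, not the values reported in the paper):

    from scipy.stats import norm

    def n_per_arm(rate, sd, f=0.25, alpha=0.05, power=0.8):
        # Subjects per arm to detect a fractional slowing f of the mean atrophy rate.
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (sd * z / (f * rate)) ** 2

    print(n_per_arm(rate=2.0, sd=1.0))   # e.g. 2%/yr atrophy with SD 1%/yr, 25% treatment effect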

  10. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  11. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  12. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  13. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  14. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    Science.gov (United States)

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  15. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    Science.gov (United States)

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
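
    As a complement to the abstract above, a minimal sketch (in Python, with scipy assumed available) of the basic width-only calculation that such articles build on: choose the smallest n for which the expected full width of a two-sided t interval for a mean, 2·t(n-1, 1-α/2)·s/√n, does not exceed a target. The planning standard deviation and target width below are illustrative assumptions, and the article's refinement (the probability that a narrow interval also covers the parameter) is not implemented here.

        import math
        from scipy import stats

        def n_for_ci_width(s_plan, target_width, alpha=0.05, n_max=100000):
            """Smallest n whose two-sided t interval for a mean has expected
            full width 2 * t_{n-1, 1-alpha/2} * s_plan / sqrt(n) <= target_width."""
            for n in range(2, n_max + 1):
                t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
                if 2 * t_crit * s_plan / math.sqrt(n) <= target_width:
                    return n
            raise ValueError("target width not reachable within n_max")

        # Example: planning SD of 10, 95% interval no wider than 5 units in total.
        print(n_for_ci_width(s_plan=10, target_width=5))  # roughly 64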

  16. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    Science.gov (United States)

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  17. Precise confidence intervals of regression-based reference limits: Method comparisons and sample size requirements.

    Science.gov (United States)

    Shieh, Gwowen

    2017-12-01

    Covariate-dependent reference limits have been extensively applied in biology and medicine for determining the substantial magnitude and relative importance of quantitative measurements. Confidence interval and sample size procedures are available for studying regression-based reference limits. However, the existing popular methods employ different technical simplifications and are applicable only in certain limited situations. This paper describes exact confidence intervals of regression-based reference limits and compares the exact approach with the approximate methods under a wide range of model configurations. Using the ratio between the widths of confidence interval and reference interval as the relative precision index, optimal sample size procedures are presented for precise interval estimation under expected ratio and tolerance probability considerations. Simulation results show that the approximate interval methods using normal distribution have inaccurate confidence limits. The exact confidence intervals dominate the approximate procedures in one- and two-sided coverage performance. Unlike the current simplifications, the proposed sample size procedures integrate all key factors including covariate features in the optimization process and are suitable for various regression-based reference limit studies with potentially diverse configurations. The exact interval estimation has theoretical and practical advantages over the approximate methods. The corresponding sample size procedures and computing algorithms are also presented to facilitate the data analysis and research design of regression-based reference limits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

  19. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and public have paid more attention to post-marketing research on Chinese medicine, post-marketing evaluation studies have begun, or are about to begin, for a number of traditional Chinese medicine products. In post-marketing evaluation design, sample size calculation plays a decisive role: it not only ensures the accuracy and reliability of the evaluation, but also assures that the intended trials will have the desired power to detect a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, starting from the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be useful to medical researchers and pharmaceutical scientists engaged in Chinese medicine research.
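
    For orientation, the kind of calculation such guidance typically starts from is the standard normal-approximation formula for a two-arm comparison of means, n per group = 2((z_{1-α/2} + z_{1-β})σ/δ)². The sketch below (Python, scipy assumed available) is generic and not specific to traditional Chinese medicine trials; the effect size and standard deviation are illustrative assumptions.

        import math
        from scipy.stats import norm

        def n_per_group(delta, sigma, alpha=0.05, power=0.80):
            """Per-group sample size for a two-sided two-sample z-test of a mean
            difference delta, assuming a common standard deviation sigma."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

        # Example: detect a 5-point difference on a scale with SD 12.
        print(n_per_group(delta=5, sigma=12))  # 91 per group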

  20. Influence of tree spatial pattern and sample plot type and size on inventory

    Science.gov (United States)

    John-Pascall Berrill; Kevin L. O' Hara

    2012-01-01

    Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

  1. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Vol. 49, No. 4 (2015), pp. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords: filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.953, year: 2015

  2. Estimating the Size of a Large Network and its Communities from a Random Sample.

    Science.gov (United States)

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
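
    The PULSE algorithm itself is not reproduced here, but a minimal sketch of the same observation model (an induced subgraph plus the total degree of each sampled vertex) with a naive method-of-moments estimate may help fix ideas: under uniform sampling, each edge of a sampled vertex lands inside the sample with probability (n-1)/(N-1), so N can be backed out from the ratio of total to within-sample degree. The SBM parameters below are arbitrary assumptions, and networkx is assumed available.

        import random
        import networkx as nx

        random.seed(1)
        # A stochastic block model with two communities (assumed parameters).
        sizes = [600, 400]
        probs = [[0.02, 0.002], [0.002, 0.03]]
        G = nx.stochastic_block_model(sizes, probs, seed=1)
        N_true = G.number_of_nodes()

        # Observed data: induced subgraph on a uniform sample W plus total degrees.
        n = 150
        W = random.sample(list(G.nodes()), n)
        GW = G.subgraph(W)
        d_tot = sum(G.degree(v) for v in W)   # total degrees of sampled vertices
        d_in = sum(GW.degree(v) for v in W)   # degrees observed within the sample

        # Method of moments: E[d_in] = d_tot * (n - 1) / (N - 1), solved for N.
        N_hat = 1 + (n - 1) * d_tot / d_in
        print(N_true, round(N_hat))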

  3. Efficacy of a 2% climbazole shampoo for reducing Malassezia population sizes on the skin of naturally infected dogs.

    Science.gov (United States)

    Cavana, P; Petit, J-Y; Perrot, S; Guechi, R; Marignac, G; Reynaud, K; Guillot, J

    2015-12-01

    Shampoo therapy is often recommended for the control of Malassezia overgrowth in dogs. The aim of this study was to evaluate the in vivo activity of a 2% climbazole shampoo against Malassezia pachydermatis yeasts in naturally infected dogs. Eleven research colony Beagles were used. The dogs were distributed randomly into two groups: group A (n=6) and group B (n=5). Group A dogs were washed with a 2% climbazole shampoo, while group B dogs were treated with a physiological shampoo base. The shampoos were applied once weekly for two weeks. The population size of Malassezia yeasts on skin was determined by fungal culture through modified Dixon's medium contact plates pressed on left concave pinna, axillae, groins, perianal area before and after shampoo application. Samples collected were compared by Wilcoxon rank sum test. Samples collected after 2% climbazole shampoo application showed a significant and rapid reduction of Malassezia population sizes. One hour after the first climbazole shampoo application, Malassezia reduction was already statistically significant and 15 days after the second climbazole shampoo, Malassezia population sizes were still significantly decreased. No significant reduction of Malassezia population sizes was observed in group B dogs. The application of a 2% climbazole shampoo significantly reduced Malassezia population sizes on the skin of naturally infected dogs. Application of 2% climbazole shampoo may be useful for the control of Malassezia overgrowth and it may be also proposed as prevention when recurrences are frequent. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  4. Percolating macropore networks in tilled topsoil: effects of sample size, minimum pore thickness and soil type

    Science.gov (United States)

    Jarvis, Nicholas; Larsbo, Mats; Koestel, John; Keck, Hannes

    2017-04-01

    The long-range connectivity of macropore networks may exert a strong control on near-saturated and saturated hydraulic conductivity and the occurrence of preferential flow through soil. It has been suggested that percolation concepts may provide a suitable theoretical framework to characterize and quantify macropore connectivity, although this idea has not yet been thoroughly investigated. We tested the applicability of percolation concepts to describe macropore networks quantified by X-ray scanning at a resolution of 0.24 mm in eighteen cylinders (20 cm diameter and height) sampled from the ploughed layer of four soils of contrasting texture in east-central Sweden. The analyses were performed for sample sizes ("regions of interest", ROI) varying between 3 and 12 cm in cube side-length and for minimum pore thicknesses ranging between image resolution and 1 mm. Finite sample size effects were clearly found for ROI's of cube side-length smaller than ca. 6 cm. For larger sample sizes, the results showed the relevance of percolation concepts to soil macropore networks, with a close relationship found between imaged porosity and the fraction of the pore space which percolated (i.e. was connected from top to bottom of the ROI). The percolating fraction increased rapidly as a function of porosity above a small percolation threshold (1-4%). This reflects the ordered nature of the pore networks. The percolation relationships were similar for all four soils. Although pores larger than 1 mm appeared to be somewhat better connected, only small effects of minimum pore thickness were noted across the range of tested pore sizes. The utility of percolation concepts to describe the connectivity of more anisotropic macropore networks (e.g. in subsoil horizons) should also be tested, although with current X-ray scanning equipment it may prove difficult in many cases to analyze sufficiently large samples that would avoid finite size effects.

  5. The Harmonic Minor Scale Provides an Optimum Way of Reducing Average Melodic Interval Size, Consistent with Sad Affect Cues

    Directory of Open Access Journals (Sweden)

    David Huron

    2013-08-01

    Full Text Available Small pitch movement is known to characterize sadness in speech prosody. Small melodic interval sizes have also been observed in nominally sad music—at least in the case of Western music. Starting with melodies in the major mode, a study is reported which examines the effect of different scale modifications on the average interval size. Compared with all other possible scale modifications, lowering the third and sixth scale tones from the major scale is shown to provide an optimum or near optimum way of reducing the average melodic interval size for a large diverse sample of major-mode melodies. The results are consistent with the view that Western melodic organization and the major-minor polarity are co-adapted, and that the structure of the minor mode contributes to the evoking, expressing or representation of sadness for listeners enculturated to the major scale.

  6. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  7. Implantation of cocoa butter reduces egg and hatchling size in Salmo trutta

    NARCIS (Netherlands)

    Hoogenboom, M. O.; Armstrong, J. D.; Miles, M. S.; Burton, T.; Groothuis, T. G. G.; Metcalfe, N. B.

    This study demonstrated that, irrespective of hormone type or dose, administering cocoa butter implants during egg development affected the growth of female brown trout Salmo trutta and reduced the size of their offspring. Cortisol treatment also increased adult mortality. Caution is urged in the

  8. Performance of a reciprocal shaker in mechanical dispersion of soil samples for particle-size analysis

    Directory of Open Access Journals (Sweden)

    Thayse Aparecida Dourado

    2012-08-01

    Full Text Available The dispersion of the samples in soil particle-size analysis is a fundamental step, which is commonly achieved with a combination of chemical agents and mechanical agitation. The purpose of this study was to evaluate the efficiency of a low-speed reciprocal shaker for the mechanical dispersion of soil samples of different textural classes. The particle size of 61 soil samples was analyzed in four replications, using the pipette method to determine the clay fraction and sieving to determine coarse, fine and total sand fractions. The silt content was obtained by difference. To evaluate the performance, the results of the reciprocal shaker (RSh) were compared with data of the same soil samples available in reports of the Proficiency testing for Soil Analysis Laboratories of the Agronomic Institute of Campinas (Prolab/IAC). The accuracy was analyzed based on the maximum and minimum values defining the confidence intervals for the particle-size fractions of each soil sample. Graphical indicators were also used for data comparison, based on dispersion and linear adjustment. The descriptive statistics indicated predominantly low variability in more than 90 % of the results for sand, medium-textured and clay samples, and for 68 % of the results for heavy clay samples, indicating satisfactory repeatability of measurements with the RSh. Medium variability was frequently associated with silt, followed by the fine sand fraction. The sensitivity analyses indicated an accuracy of 100 % for the three main separates (total sand, silt and clay), in all 52 samples of the textural classes heavy clay, clay and medium. For the nine sand soil samples, the average accuracy was 85.2 %; highest deviations were observed for the silt fraction. In relation to the linear adjustments, the correlation coefficients of 0.93 (silt) or > 0.93 (total sand and clay), as well as the differences between the angular coefficients and the unit (< 0.16), indicated a high correlation between the

  9. Reduced particle size wheat bran is butyrogenic and lowers Salmonella colonization, when added to poultry feed.

    Science.gov (United States)

    Vermeulen, K; Verspreet, J; Courtin, C M; Haesebrouck, F; Ducatelle, R; Van Immerseel, F

    2017-01-01

    Feed additives, including prebiotics, are commonly used alternatives to antimicrobial growth promoters to improve gut health and performance in broilers. Wheat bran is a highly concentrated source of (in)soluble fiber which is partly degraded by the gut microbiota. The aim of the present study was to investigate the potential of wheat bran as such to reduce colonization of the cecum and shedding of Salmonella bacteria in vivo. Also, the effect of particle size was evaluated. Bran with an average reduced particle size of 280μm decreased levels of cecal Salmonella colonization and shedding shortly after infection when compared to control groups and groups receiving bran with larger particle sizes. In vitro fermentation experiments revealed that bran with smaller particle size was fermented more efficiently, with a significantly higher production of butyric and propionic acid, compared to the control fermentation and fermentation of a larger fraction. Fermentation products derived from bran with an average particle size of 280μm downregulated the expression of hilA, an important invasion-related gene of Salmonella. This downregulation was reflected in an actual lowered invasive potential when Salmonella bacteria were pretreated with the fermentation products derived from the smaller bran fraction. These data suggest that wheat bran with reduced particle size can be a suitable feed additive to help control Salmonella infections in broilers. The mechanism of action most probably relies on a more efficient fermentation of this bran fraction and the consequent increased production of short chain fatty acids (SCFA). Among these SCFA, butyric and propionic acid are known to reduce the invasion potential of Salmonella bacteria. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. B-Graph Sampling to Estimate the Size of a Hidden Population

    Directory of Open Access Journals (Sweden)

    Spreen Marinus

    2015-12-01

    Full Text Available Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is respondent-driven sampling in which no sampling frame is used. However, in some studies multiple but incomplete sampling frames are available. In this article, we introduce the B-graph design that can be used in such situations. In this design, all available incomplete sampling frames are joined and turned into one sampling frame, from which a random sample is drawn and selected respondents are asked to mention their contacts. By considering the population as a bipartite graph of a two-mode network (those from the sampling frame and those who are not on the frame), the number of respondents who are directly linked to the sampling frame members can be estimated using Chao’s and Zelterman’s estimators for sparse data. The B-graph sampling design is illustrated using the data of a social network study from Utrecht, the Netherlands.

  11. Planosol soil sample size for computerized tomography measurement of physical parameters

    Directory of Open Access Journals (Sweden)

    Pedrotti Alceu

    2003-01-01

    Full Text Available Computerized tomography (CT) is an important tool in Soil Science for noninvasive measurement of density and water content of soil samples. This work aims to describe the aspects of sample size adequacy for Planosol (Albaqualf) and to evaluate procedures for statistical analysis, using a CT scanner with a 241Am source. Density errors attributed to the equipment are 0.051 and 0.046 Mg m-3 for horizons A and B, respectively. The theoretical value for sample thickness for the Planosol, using this equipment, is 4.0 cm for horizons A and B. The ideal thickness of samples is approximately 6.0 cm, being smaller for samples from horizon B than from horizon A. Alternatives for improving the efficiency of the analysis and the reliability of the results obtained by CT are also discussed, and indicate good precision and adaptability of this technology in Planosol (Albaqualf) studies.

  12. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

  13. Sample size calculation for microarray experiments with blocked one-way design

    Directory of Open Access Journals (Sweden)

    Jung Sin-Ho

    2009-05-01

    Full Text Available Abstract Background One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess the treatment effects in microarray experiments. Results In this paper, we consider discovery of the genes that are differentially expressed among K (> 2) treatments when each set of K arrays consists of a block. In this case, the array data among K treatments tend to be correlated because of the block effect. We propose to use the blocked one-way ANOVA F-statistic to test if each gene is differentially expressed among K treatments. The marginal p-values are calculated using a permutation method accounting for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design. With the FDR level and effect sizes of genes specified, our formula provides a sample size for a given number of true discoveries. Conclusion The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.
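
    A hedged sketch of the kind of calculation involved: the common independence approximation FDR ≈ m0·α/(m0·α + m1·(1-β)) converts an FDR target into a per-gene significance level, after which each gene is sized as an ordinary two-group comparison. The blocked one-way F-statistic and permutation p-values of the paper are not reproduced here, and all numbers below (Python, scipy assumed) are illustrative assumptions.

        import math
        from scipy.stats import norm

        def per_gene_alpha(fdr, m, m1, power):
            """Marginal alpha solving the approximation
            fdr = m0*alpha / (m0*alpha + m1*power), with m0 = m - m1."""
            m0 = m - m1
            return fdr * m1 * power / (m0 * (1 - fdr))

        def n_per_group(delta, sigma, alpha, power):
            """Normal-approximation size for a two-sided two-group comparison per gene."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil(2 * ((z) * sigma / delta) ** 2)

        # Assumed scenario: 10,000 genes, 100 truly differential, 1-SD effects,
        # FDR controlled at 5%, aiming for 80% expected true discoveries.
        a = per_gene_alpha(fdr=0.05, m=10000, m1=100, power=0.80)
        print(a, n_per_group(delta=1.0, sigma=1.0, alpha=a, power=0.80))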

  14. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    Science.gov (United States)

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.

    Science.gov (United States)

    Power, Stephanie M; Matic, Damir B

    2013-03-01

    Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represent average outcomes. Secondary objectives are to determine if outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed to within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.

  16. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F,G,H,.... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges on [0, 1] has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.

  17. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    Science.gov (United States)

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
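
    As a pointer to what the MDES framework computes, a minimal sketch of its simplest case, individual random assignment with baseline covariates: MDES ≈ M_df·√((1-R²)/(P(1-P)n)), where M_df is the sum of the two t multipliers. This covers only one of the many designs the tool handles, exact degrees-of-freedom conventions vary, and the inputs below are illustrative assumptions.

        import math
        from scipy import stats

        def mdes_individual(n, p_treat=0.5, r2=0.0, alpha=0.05, power=0.80,
                            n_covariates=0):
            """Minimum detectable effect size (in SD units) for individual random
            assignment with a two-sided test and covariates explaining r2 of variance."""
            df = n - n_covariates - 2
            multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
            return multiplier * math.sqrt((1 - r2) / (p_treat * (1 - p_treat) * n))

        # Example: 400 subjects, balanced assignment, a pretest explaining half the variance.
        print(round(mdes_individual(n=400, r2=0.5, n_covariates=1), 3))  # about 0.20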

  18. The role of the upper sample size limit in two-stage bioequivalence designs.

    Science.gov (United States)

    Karalis, Vangelis

    2013-11-01

    Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD design where sample re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, different starting number of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the % probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD which are more pronounced for highly variable drugs. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Sample size allocation for food item radiation monitoring and safety inspection.

    Science.gov (United States)

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.

  20. A simple method for estimating genetic diversity in large populations from finite sample sizes

    Directory of Open Access Journals (Sweden)

    Rajora Om P

    2009-12-01

    Full Text Available Abstract Background Sample size is one of the critical factors affecting the accuracy of the estimation of population genetic diversity parameters. Small sample sizes often lead to significant errors in determining the allelic richness, which is one of the most important and commonly used estimators of genetic diversity in populations. Correct estimation of allelic richness in natural populations is challenging since they often do not conform to model assumptions. Here, we introduce a simple and robust approach to estimate the genetic diversity in large natural populations based on the empirical data for finite sample sizes. Results We developed a non-linear regression model to infer genetic diversity estimates in large natural populations from finite sample sizes. The allelic richness values predicted by our model were in good agreement with those observed in the simulated data sets and the true allelic richness observed in the source populations. The model has been validated using simulated population genetic data sets with different evolutionary scenarios implied in the simulated populations, as well as large microsatellite and allozyme experimental data sets for four conifer species with contrasting patterns of inherent genetic diversity and mating systems. Our model was a better predictor for allelic richness in natural populations than the widely-used Ewens sampling formula, coalescent approach, and rarefaction algorithm. Conclusions Our regression model was capable of accurately estimating allelic richness in natural populations regardless of the species and marker system. This regression modeling approach is free from assumptions and can be widely used for population genetic and conservation applications.
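
    The regression model proposed above is not reproduced here; as a reference point, the sketch below (Python standard library only) computes the hypergeometric rarefaction estimate of allelic richness (the expected number of distinct alleles in a subsample of g gene copies), one of the comparators mentioned in the abstract. The allele counts are made-up data.

        from math import comb

        def rarefied_allelic_richness(allele_counts, g):
            """Expected number of distinct alleles in a random subsample of g gene
            copies, given the observed count of each allele at a locus."""
            n_total = sum(allele_counts)
            if g > n_total:
                raise ValueError("subsample size exceeds the number of gene copies sampled")
            return sum(1 - comb(n_total - n_i, g) / comb(n_total, g)
                       for n_i in allele_counts)

        # Made-up locus: five alleles observed with these counts among 60 gene copies.
        counts = [30, 15, 10, 4, 1]
        print(round(rarefied_allelic_richness(counts, g=20), 2))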

  1. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    DEFF Research Database (Denmark)

    Haugbøl, Steven; Pinborg, Lars H; Arfan, Haroon M

    2006-01-01

    PURPOSE: To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects...... reproducibility and the required sample size. METHODS: For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects......% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high...

  2. Three-Dimensional Culture Reduces Cell Size By Increasing Vesicle Excretion.

    Science.gov (United States)

    Mo, Miaohua; Zhou, Ying; Li, Sen; Wu, Yaojiong

    2018-02-01

    Our previous study has shown that three-dimensional (3D) culture decreases mesenchymal stem cell (MSC) size, leading to enhanced trafficking ability and reduced lung vascular obstructions. However, the underlying mechanisms are unclear. In this study, we proposed that 3D culture reduces MSC size by increasing vesicle excretion. Scanning electron microscope showed that 3D culture markedly increased the amount of membrane-bound vesicles on the cell surface. In consistence, tunable resistive pulse sensing quantifying analysis of vesicles in the culture medium indicated that there were higher levels of vesicles in the 3D culture MSC medium. 3D culture significantly lowered the level of actin polymerization (F-actin), suggestive of lowering actin skeleton tension may facilitate vesicle excretion. Indeed, treatment of MSCs with Cytochalasin D or functional blockade of integrin β1 caused increased vesicle secretion and decreased cell sizes. Thus, our results suggest that 3D culture reduces MSC size by increasing vesicle excretion which is likely mediated by lowering cytoskeleton tension. Stem Cells 2018;36:286-292. © 2017 AlphaMed Press.

  3. Chronic administration of OB protein decreases food intake by selectively reducing meal size in female rats.

    Science.gov (United States)

    Eckel, L A; Langhans, W; Kahler, A; Campfield, L A; Smith, F J; Geary, N

    1998-07-01

    The mechanisms by which OB protein controls food intake and energy balance are unknown. Therefore, we investigated the effects of a novel modified human recombinant OB protein (Mod-OB) on spontaneous feeding patterns, body weight, running wheel activity, and ovarian cycling in female rats. Mod-OB or vehicle was injected (4 mg·kg⁻¹·day⁻¹ sc) for 2 ovarian cycles (8 days) using a within-subjects design. Observations were continued for five ovarian cycles after injections; treatments were then reversed. Mod-OB reduced food intake by approximately 20% from injection day 1 to postinjection day 2. Body weight was reduced from injection day 3 to postinjection day 15 (maximum decrease, 25 ± 4 g, postinjection days 3 and 4). Food intake was reduced due to decreases in nocturnal meal size, which appeared to be superimposed on the normal pattern of spontaneous feeding (i.e., reductions in meal size at estrus). Mod-OB did not significantly affect diurnal food intake or meal patterns, failed to alter wheel running, and did not disrupt the rats' ovarian cycles. We conclude that chronically administered Mod-OB reduces food intake in female rats by selectively affecting the mechanisms controlling meal size.

  4. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    scores in a number of estimators (the above-mentioned plus ICE, Chao2, Michaelis-Menten, Negative Exponential and Clench). The estimations from those four sample sizes were also highly correlated. 4.  Contrary to other studies, we conclude that most species richness estimators may be useful......Fifteen species richness estimators (three asymptotic based on species accumulation curves, 11 nonparametric, and one based in the species-area relationship) were compared by examining their performance in estimating the total species richness of epigean arthropods in the Azorean Laurisilva forests...... different sampling units on species richness estimations. 2.  Estimated species richness scores depended both on the estimator considered and on the grain size used to aggregate data. However, several estimators (ACE, Chao1, Jackknife1 and 2 and Bootstrap) were precise in spite of grain variations. Weibull...

  5. Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test

    Science.gov (United States)

    Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke

    Polymer insulators are used worldwide because of several properties that are superior to those of porcelain insulators: light weight, high mechanical strength, good hydrophobicity, etc. In this paper, the effect of sample size on aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz A/D board or a 100 MHz digital oscilloscope and separated into three components (conductive current, corona discharge current and dry band arc discharge current) using FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. As a result, when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and arc length of the dry band arc discharge increased remarkably with increasing gap length.

  6. Decision rules and associated sample size planning for regional approval utilizing multiregional clinical trials.

    Science.gov (United States)

    Chen, Xiaoyuan; Lu, Nelson; Nair, Rajesh; Xu, Yunling; Kang, Cailian; Huang, Qin; Li, Ning; Chen, Hongzhuan

    2012-09-01

    Multiregional clinical trials provide the potential to make safe and effective medical products simultaneously available to patients globally. As regulatory decisions are always made in a local context, this poses huge regulatory challenges. In this article we propose two conditional decision rules that can be used for medical product approval by local regulatory agencies based on the results of a multiregional clinical trial. We also illustrate sample size planning for such trials.

  7. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    OpenAIRE

    Mark Heckmann; Lukas Burk

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...

  8. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    OpenAIRE

    Bruno Giacomini Sari; Alessandro Dal’Col Lúcio; Cinthya Souza Santana; Dionatan Ketzer Krysczun; André Luís Tischler; Lucas Drebes

    2017-01-01

    ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix b...
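
    The abstract above estimates sample size from uniformity-trial data; for a rough closed-form orientation, the Fisher z approximation can be searched for the smallest n whose confidence interval around an anticipated correlation is no wider than a target. The planning correlation and target width below are assumptions, not values from the study, and scipy is assumed available.

        import math
        from scipy.stats import norm

        def ci_width(r_plan, n, conf=0.95):
            """Width of the Fisher-z confidence interval for a Pearson correlation."""
            z = norm.ppf(1 - (1 - conf) / 2)
            zr = math.atanh(r_plan)
            half = z / math.sqrt(n - 3)
            return math.tanh(zr + half) - math.tanh(zr - half)

        def n_for_correlation(r_plan, target_width, conf=0.95):
            """Smallest n whose Fisher-z interval is no wider than target_width."""
            n = 4
            while ci_width(r_plan, n, conf) > target_width:
                n += 1
            return n

        # Example: anticipated r = 0.5, 95% CI no wider than 0.20 in total.
        print(n_for_correlation(r_plan=0.5, target_width=0.20))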

  9. Epidemiological Studies Based on Small Sample Sizes – A Statistician's Point of View

    OpenAIRE

    Ersbøll Annette; Ersbøll Bjarne

    2003-01-01

    We consider 3 basic steps in a study, which have relevance for the statistical analysis. They are: study design, data quality, and statistical analysis. While statistical analysis is often considered an important issue in the literature and the choice of statistical method receives much attention, less emphasis seems to be put on study design and necessary sample sizes. Finally, a very important step, namely assessment and validation of the quality of the data collected seems to be completel...

  10. Sample size calculations for randomised trials including both independent and paired data.

    Science.gov (United States)

    Yelland, Lisa N; Sullivan, Thomas R; Price, David J; Lee, Katherine J

    2017-04-15

    Randomised trials including a mixture of independent and paired data arise in many areas of health research, yet methods for determining the sample size for such trials are lacking. We derive design effects algebraically assuming clustering because of paired data will be taken into account in the analysis using generalised estimating equations with either an independence or exchangeable working correlation structure. Continuous and binary outcomes are considered, along with three different methods of randomisation: cluster randomisation, individual randomisation and randomisation to opposite treatment groups. The design effect is shown to depend on the intracluster correlation coefficient, proportion of observations belonging to a pair, working correlation structure, type of outcome and method of randomisation. The derived design effects are validated through simulation and example calculations are presented to illustrate their use in sample size planning. These design effects will enable appropriate sample size calculations to be performed for future randomised trials including both independent and paired data. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
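
    The exact design effects derived in the paper depend on the randomisation method and working correlation structure; as a rough orientation only, the sketch below treats pairs as clusters of size two randomised as units, inflates the paired fraction of observations by the familiar 1 + (m-1)ρ factor, and leaves independent observations uninflated. The intracluster correlation and proportion paired are assumptions, and this is not the paper's formula.

        import math
        from scipy.stats import norm

        def n_all_independent(delta, sigma, alpha=0.05, power=0.80):
            """Per-group size for a two-sided two-sample z-test with independent data."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 2 * (z * sigma / delta) ** 2

        def n_with_pairs(delta, sigma, icc, prop_paired, alpha=0.05, power=0.80):
            """Rough inflation: pairs as clusters of size 2, so the paired share of
            observations carries a design effect of 1 + icc and the rest carries 1."""
            design_effect = prop_paired * (1 + icc) + (1 - prop_paired)
            return math.ceil(n_all_independent(delta, sigma, alpha, power) * design_effect)

        # Example: 40% of observations arrive in pairs (e.g. twins) with ICC 0.6.
        print(n_with_pairs(delta=0.4, sigma=1.0, icc=0.6, prop_paired=0.4))  # about 122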

  11. Sample size determinations for Welch's test in one-way heteroscedastic ANOVA.

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2014-02-01

    For one-way fixed effects ANOVA, it is well known that the conventional F test of the equality of means is not robust to unequal variances, and numerous methods have been proposed for dealing with heteroscedasticity. On the basis of extensive empirical evidence of Type I error control and power performance, Welch's procedure is frequently recommended as the major alternative to the ANOVA F test under variance heterogeneity. To enhance its practical usefulness, this paper considers an important aspect of Welch's method in determining the sample size necessary to achieve a given power. Simulation studies are conducted to compare two approximate power functions of Welch's test for their accuracy in sample size calculations over a wide variety of model configurations with heteroscedastic structures. The numerical investigations show that Levy's (1978a) approach is clearly more accurate than the formula of Luh and Guo (2011) for the range of model specifications considered here. Accordingly, computer programs are provided to implement the technique recommended by Levy for power calculation and sample size determination within the context of the one-way heteroscedastic ANOVA model. © 2013 The British Psychological Society.
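
    The approximate power functions compared in the paper are not reproduced here; as a formula-free cross-check, the sketch below estimates power for the two-group special case of Welch's procedure (the Welch t-test, via scipy's equal_var=False option) by Monte Carlo and steps up the common group size until a target power is reached. Means, standard deviations and the search grid are illustrative assumptions.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)

        def welch_power(n1, n2, mu1, mu2, sd1, sd2, alpha=0.05, n_sim=2000):
            """Monte Carlo power of the two-sided Welch t-test."""
            hits = 0
            for _ in range(n_sim):
                x = rng.normal(mu1, sd1, n1)
                y = rng.normal(mu2, sd2, n2)
                if ttest_ind(x, y, equal_var=False).pvalue < alpha:
                    hits += 1
            return hits / n_sim

        # Smallest equal group size reaching about 80% power under the assumed scenario.
        for n in range(10, 301, 5):
            if welch_power(n, n, mu1=0.0, mu2=0.5, sd1=1.0, sd2=2.0) >= 0.80:
                print(n)
                break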

  12. Estimating the Size of a Large Network and its Communities from a Random Sample

    CERN Document Server

    Chen, Lin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...

  13. A Bayesian adaptive blinded sample size adjustment method for risk differences.

    Science.gov (United States)

    Hartley, Andrew Montgomery

    2015-01-01

    Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Sample Size and Probability Threshold Considerations with the Tailored Data Method.

    Science.gov (United States)

    Wyse, Adam E

    This article discusses sample size and probability threshold considerations in the use of the tailored data method with the Rasch model. In the tailored data method, one performs an initial Rasch analysis and then reanalyzes the data after setting to missing those item responses that fall below a chosen probability threshold. A simple analytical formula is provided that can be used to check whether applying the tailored data method with a chosen probability threshold will leave enough remaining item responses for the Rasch calibration to meet minimum sample size requirements. The formula is illustrated using a real data example from a medical imaging licensure exam with several different probability thresholds. It is shown that as the probability threshold increased, more item responses were set to missing, and the parameter standard errors and item difficulty estimates also tended to increase. It is suggested that some consideration should be given to the chosen probability threshold and how it interacts with potential examinee sample sizes and the accuracy of parameter estimates when calibrating data with the tailored data method.

  15. Small portion sizes in worksite cafeterias: do they help consumers to reduce their food intake?

    Science.gov (United States)

    Vermeer, W M; Steenhuis, I H M; Leeuwis, F H; Heymans, M W; Seidell, J C

    2011-09-01

    Environmental interventions directed at portion size might help consumers to reduce their food intake. To assess whether offering a smaller hot meal, in addition to the existing size, stimulates people to replace their large meal with a smaller meal. Longitudinal randomized controlled trial assessing the impact of introducing small portion sizes and pricing strategies on consumer choices. In all, 25 worksite cafeterias and a panel consisting of 308 consumers (mean age=39.18 years, 50% women). A small portion size of hot meals was offered in addition to the existing size. The meals were either proportionally priced (that is, the price per gram was comparable regardless of the size) or value size pricing was employed. Daily sales of small meals and the total number of meals, consumers' self-reported compensation behavior and frequency of purchasing small meals. The ratio of small meal sales to large meal sales was 10.2%. No effect of proportional pricing was found (B=-0.11 (0.33), P=0.74, confidence interval (CI): -0.76 to 0.54). The consumer data indicated that 19.5% of the participants who had selected a small meal often-to-always purchased more products than usual in the worksite cafeteria. Small meal purchases were negatively related to being male (B=-0.85 (0.20), P=0.00, CI: -1.24 to -0.46, n=178). When a small meal was offered in addition to the existing size, a reasonable percentage of consumers were inclined to replace the large meal with the small meal. Proportional prices did not have an additional effect. The possible occurrence of compensation behavior is an issue that merits further attention.

  16. Reduced clot debris size using standing waves formed via high intensity focused ultrasound

    Science.gov (United States)

    Guo, Shifang; Du, Xuan; Wang, Xin; Lu, Shukuan; Shi, Aiwei; Xu, Shanshan; Bouakaz, Ayache; Wan, Mingxi

    2017-09-01

    The feasibility of utilizing high intensity focused ultrasound (HIFU) to induce thrombolysis has been demonstrated previously. However, clinical concerns still remain related to the clot debris produced via fragmentation of the original clot potentially being too large and hence occluding downstream vessels, causing hazardous emboli. This study investigates the use of standing wave fields formed via HIFU to disintegrate the thrombus while achieving a reduced clot debris size in vitro. The results showed that the average diameter of the clot debris calculated by volume percentage was smaller in the standing wave mode than in the travelling wave mode at identical ultrasound thrombolysis settings. Furthermore, the inertial cavitation dose was shown to be lower in the standing wave mode, while the estimated cavitation bubble size distribution was similar in both modes. These results show that a reduction of the clot debris size with standing waves may be attributed to the particle trapping of the acoustic potential well which contributed to particle fragmentation.

  17. SAMPLE SIZE DETERMINATION IN NON-RANDOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA

    Directory of Open Access Journals (Sweden)

    S FAGHIHZADEH

    2003-06-01

    Full Text Available Introduction: In survival analysis, determination of a sufficient sample size to achieve suitable statistical power is important. In both parametric and non-parametric methods of classic statistics, random selection of samples is a basic condition. In practice, in most clinical trials and health surveys random allocation is impossible. Fixed-effect multiple linear regression analysis covers this need, and this feature can be extended to survival regression analysis. This paper presents sample size determination for non-randomized survival analysis with censored and non-censored data. Methods: In non-randomized survival studies, linear regression with a fixed-effect variable can be used. In fact, such a regression is the conditional expectation of the dependent variable, conditioned on the independent variable. By constructing a likelihood function with an exponential hazard, considering a binary variable for allocation of each subject to one of the two comparison groups, and stating the variance of the coefficient of the fixed-effect independent variable in terms of the determination coefficient, sample size determination formulas are obtained for both censored and non-censored data. Estimation of sample size is therefore not based on the relation with a single independent variable alone; it can attain the required power for a test adjusted for the effects of the other explanatory covariates. Since the asymptotic distribution of the likelihood estimator of the parameter is normal, we obtained a formula for the variance of the regression coefficient estimator; then, by stating the variance of the regression coefficient of the fixed-effect variable in terms of the determination coefficient, we derived formulas for determination of sample size with both censored and non-censored data. Results: In non-randomized survival analysis, to compare hazard rates of two groups without censored data, we obtained an estimation of the determination coefficient, risk ratio and proportion of membership to each group and their variances from

  18. Concentration Effect of Reducing Agents on Green Synthesis of Gold Nanoparticles: Size, Morphology, and Growth Mechanism

    Science.gov (United States)

    Kim, Hyun-seok; Seo, Yu Seon; Kim, Kyeounghak; Han, Jeong Woo; Park, Youmie; Cho, Seonho

    2016-04-01

    Under various concentration conditions of reducing agents during the green synthesis of gold nanoparticles (AuNPs), we obtain various geometries (morphology and size) of AuNPs, which play a crucial role in their catalytic properties. Through both theoretical and experimental approaches, we studied the relationship between the concentration of the reducing agent (caffeic acid) and the geometry of the AuNPs. As the concentration of caffeic acid increased, the size of the AuNPs decreased due to the adsorption and stabilizing effect of oxidized caffeic acids (OXCAs). Thus, it turns out that an optimal concentration exists for a desired AuNP geometry. Furthermore, we investigated the growth mechanism of the green synthesis of AuNPs. As caffeic acid is added and adsorbed on the surface of the AuNPs, the aggregation mechanism and surface free energy change, which consequently results in AuNPs of various geometries.

  19. Sample size determination for a t test given a t value from a previous study: A FORTRAN 77 program.

    Science.gov (United States)

    Gillett, R

    2001-11-01

    When uncertain about the magnitude of an effect, researchers commonly substitute in the standard sample-size-determination formula an estimate of effect size derived from a previous experiment. A problem with this approach is that the traditional sample-size-determination formula was not designed to deal with the uncertainty inherent in an effect-size estimate. Consequently, estimate-substitution in the traditional sample-size-determination formula can lead to a substantial loss of power. A method of sample-size determination designed to handle uncertainty in effect-size estimates is described. The procedure uses the t value and sample size from a previous study, which might be a pilot study or a related study in the same area, to establish a distribution of probable effect sizes. The sample size to be employed in the new study is that which supplies an expected power of the desired amount over the distribution of probable effect sizes. A FORTRAN 77 program is presented that permits swift calculation of sample size for a variety of t tests, including independent t tests, related t tests, t tests of correlation coefficients, and t tests of multiple regression b coefficients.
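
    The FORTRAN 77 program itself is not reproduced in this record; the sketch below shows only the conventional estimate-substitution calculation that the paper argues is vulnerable to power loss: plug a d estimated from a pilot t value into a standard power routine (here statsmodels). The pilot values are assumptions.

```python
# Hedged sketch of the conventional "estimate-substitution" approach the paper
# critiques: derive d-hat from a pilot t value, then plug it into a standard
# sample-size calculation. (The paper's refinement, averaging power over a
# distribution of plausible effect sizes, is not reproduced here.)
import numpy as np
from statsmodels.stats.power import TTestIndPower

t_pilot, n1, n2 = 2.4, 20, 20                    # hypothetical pilot-study values
d_hat = t_pilot * np.sqrt(1 / n1 + 1 / n2)       # effect-size estimate from the pilot t

n_per_group = TTestIndPower().solve_power(effect_size=d_hat,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
print(f"d-hat = {d_hat:.2f}, required n per group = {np.ceil(n_per_group):.0f}")
```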

  20. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
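
    A minimal sketch of a pooled-resampling bootstrap test for a difference in two means, in the spirit of the approach described above (not necessarily the authors' exact algorithm); the data are made up.

```python
# Sketch of a pooled-resampling bootstrap test for a difference in means:
# resample both groups from the pooled data (the null of equal distributions)
# and compare the observed t statistic with the bootstrap distribution.
import numpy as np
from scipy import stats

def pooled_bootstrap_ttest(x, y, n_boot=10000, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = stats.ttest_ind(x, y, equal_var=False).statistic
    pooled = np.concatenate([x, y])
    t_boot = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        t_boot[i] = stats.ttest_ind(xb, yb, equal_var=False).statistic
    return t_obs, np.mean(np.abs(t_boot) >= abs(t_obs))   # two-sided bootstrap p-value

x = np.array([4.1, 5.3, 2.8, 6.0, 3.9, 5.5])
y = np.array([6.8, 7.1, 5.9, 8.2, 6.4])
t_obs, p_val = pooled_bootstrap_ttest(x, y)
print(f"t = {t_obs:.2f}, bootstrap p = {p_val:.3f}")
```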

  1. Prediction accuracy of a sample-size estimation method for ROC studies.

    Science.gov (United States)

    Chakraborty, Dev P

    2010-05-01

    Sample-size estimation is an important consideration when planning a receiver operating characteristic (ROC) study. The aim of this work was to assess the prediction accuracy of a sample-size estimation method using the Monte Carlo simulation method. Two ROC ratings simulators characterized by low reader and high case variabilities (LH) and high reader and low case variabilities (HL) were used to generate pilot data sets in two modalities. Dorfman-Berbaum-Metz multiple-reader multiple-case (DBM-MRMC) analysis of the ratings yielded estimates of the modality-reader, modality-case, and error variances. These were input to the Hillis-Berbaum (HB) sample-size estimation method, which predicted the number of cases needed to achieve 80% power for 10 readers and an effect size of 0.06 in the pivotal study. Predictions that generalized to readers and cases (random-all), to cases only (random-cases), and to readers only (random-readers) were generated. A prediction-accuracy index defined as the probability that any single prediction yields true power in the 75%-90% range was used to assess the HB method. For random-case generalization, the HB-method prediction-accuracy was reasonable, approximately 50% for five readers and 100 cases in the pilot study. Prediction-accuracy was generally higher under LH conditions than under HL conditions. Under ideal conditions (many readers in the pilot study) the DBM-MRMC-based HB method overestimated the number of cases. The overestimates could be explained by the larger modality-reader variance estimates when reader variability was large (HL). The largest benefit of increasing the number of readers in the pilot study was realized for LH, where 15 readers were enough to yield prediction accuracy >50% under all generalization conditions, but the benefit was lesser for HL where prediction accuracy was approximately 36% for 15 readers under random-all and random-reader conditions. The HB method tends to overestimate the number of cases

  2. Generalized sample size determination formulas for experimental research with hierarchical data.

    Science.gov (United States)

    Usami, Satoshi

    2014-06-01

    Hierarchical data sets arise when the data for lower units (e.g., individuals such as students, clients, and citizens) are nested within higher units (e.g., groups such as classes, hospitals, and regions). In data collection for experimental research, estimating the required sample size beforehand is a fundamental question for obtaining sufficient statistical power and precision of the focused parameters. The present research extends previous research from Heo and Leon (2008) and Usami (2011b), by deriving closed-form formulas for determining the required sample size to test effects in experimental research with hierarchical data, and by focusing on both multisite-randomized trials (MRTs) and cluster-randomized trials (CRTs). These formulas consider both statistical power and the width of the confidence interval of a standardized effect size, on the basis of estimates from a random-intercept model for three-level data that considers both balanced and unbalanced designs. These formulas also address some important results, such as the lower bounds of the needed units at the highest levels.
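
    The paper's closed-form three-level formulas are not reproduced here; as a rough cross-check of the same idea, the sketch below applies the widely used two-level design-effect adjustment for a cluster-randomized trial, with all inputs assumed.

```python
# Rough-check sketch (not the paper's three-level closed-form formulas):
# inflate an individually randomized sample size by the standard design effect
# 1 + (m - 1) * ICC for a two-level cluster-randomized trial.
import math
from statsmodels.stats.power import TTestIndPower

d, alpha, power = 0.40, 0.05, 0.80        # assumed standardized effect size and error rates
m, icc = 20, 0.05                         # assumed cluster size and intraclass correlation

n_individual = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)
design_effect = 1 + (m - 1) * icc         # inflation due to clustering
n_clustered = n_individual * design_effect
print(f"per arm: {math.ceil(n_clustered)} individuals "
      f"in about {math.ceil(n_clustered / m)} clusters of {m}")
```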

  3. Back to basics: explaining sample size in outcome trials, are statisticians doing a thorough job?

    Science.gov (United States)

    Carroll, Kevin J

    2009-01-01

    Time to event outcome trials in clinical research are typically large, expensive and high-profile affairs. Such trials are commonplace in oncology and cardiovascular therapeutic areas but are also seen in other areas such as respiratory in indications like chronic obstructive pulmonary disease. Their progress is closely monitored and results are often eagerly awaited. Once available, the top line result is often big news, at least within the therapeutic area in which it was conducted, and the data are subsequently fully scrutinized in a series of high-profile publications. In such circumstances, the statistician has a vital role to play in the design, conduct, analysis and reporting of the trial. In particular, in drug development it is incumbent on the statistician to ensure at the outset that the sizing of the trial is fully appreciated by their medical, and other non-statistical, drug development team colleagues and that the risk of delivering a statistically significant but clinically unpersuasive result is minimized. The statistician also has a key role in advising the team when, early in the life of an outcomes trial, a lower than anticipated event rate appears to be emerging. This paper highlights some of the important features relating to outcome trial sample sizing and makes a number of simple recommendations aimed at ensuring a better, common understanding of the interplay between sample size and power and the final result required to provide a statistically positive and clinically persuasive outcome. Copyright (c) 2009 John Wiley & Sons, Ltd.
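
    As a hedged illustration of the interplay between events, effect size and power discussed above, the sketch below uses the standard Schoenfeld approximation for a two-arm log-rank comparison (a generic formula, not one specific to this paper); the hazard ratio and error rates are assumptions.

```python
# Schoenfeld approximation: the number of *events* (not patients) required for a
# two-arm log-rank comparison at a target hazard ratio, alpha and power.
import math
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.80, allocation=0.5):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (allocation * (1 - allocation) * math.log(hr)**2)

events = required_events(hr=0.80)
print(f"events needed for HR 0.80 at 80% power: {math.ceil(events)}")
# Patients needed = events / expected event proportion, so a lower-than-anticipated
# event rate inflates the required enrolment or follow-up, as the paper emphasizes.
```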

  4. Complement factor 5 blockade reduces porcine myocardial infarction size and improves immediate cardiac function.

    Science.gov (United States)

    Pischke, Soeren E; Gustavsen, A; Orrem, H L; Egge, K H; Courivaud, F; Fontenelle, H; Despont, A; Bongoni, A K; Rieben, R; Tønnessen, T I; Nunn, M A; Scott, H; Skulstad, H; Barratt-Due, A; Mollnes, T E

    2017-05-01

    Inhibition of complement factor 5 (C5) reduced myocardial infarction in animal studies, while no benefit was found in clinical studies. Due to lack of cross-reactivity of clinically used C5 antibodies, different inhibitors were used in animal and clinical studies. Coversin (Ornithodoros moubata complement inhibitor, OmCI) blocks C5 cleavage and binds leukotriene B4 in humans and pigs. We hypothesized that inhibition of C5 before reperfusion will decrease infarct size and improve ventricular function in a porcine model of myocardial infarction. In pigs (Sus scrofa), the left anterior descending coronary artery was occluded (40 min) and reperfused (240 min). Coversin or placebo was infused 20 min after occlusion and throughout reperfusion in 16 blindly randomized pigs. Coversin significantly reduced myocardial infarction in the area at risk by 39% (p = 0.03, triphenyl tetrazolium chloride staining) and by 19% (p = 0.02) using magnetic resonance imaging. The two methods correlated significantly (R = 0.92), consistent with a reduced infarcted area at risk under coversin treatment. Coversin ablated plasma C5 activation throughout the reperfusion period and decreased myocardial C5b-9 deposition, while neither plasma nor myocardial LTB4 were significantly reduced. Coversin substantially reduced the size of infarction, improved ventricular function, and attenuated interleukin-1β and E-selectin in this porcine model by inhibiting C5. We conclude that inhibition of C5 in myocardial infarction should be reconsidered.

  5. Development of a depth-integrated sample arm (DISA) to reduce solids stratification bias in stormwater sampling

    Science.gov (United States)

    Selbig, William R.; ,; Roger T. Bannerman,

    2011-01-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogenously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.

  6. Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force

    Energy Technology Data Exchange (ETDEWEB)

    Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R

    2008-05-22

    We present a rapid and robust size-based separation method for high throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies including variations of the PZT driving frequencies and voltages as well as various particle sizes, compressibilities, and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2 with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.

  7. Morphine Reduces Myocardial Infarct Size via Heat Shock Protein 90 in Rodents

    Directory of Open Access Journals (Sweden)

    Bryce A. Small

    2015-01-01

    Full Text Available Opioids reduce injury from myocardial ischemia-reperfusion in humans. In experimental models, this mechanism involves GSK3β inhibition. HSP90 regulates mitochondrial protein import, with GSK3β inhibition increasing HSP90 mitochondrial content. Therefore, we determined whether morphine-induced cardioprotection is mediated by HSP90 and if the protective effect is downstream of GSK3β inhibition. Male Sprague-Dawley rats, aged 8–10 weeks, were subjected to an in vivo myocardial ischemia-reperfusion injury protocol involving 30 minutes of ischemia followed by 2 hours of reperfusion. Hemodynamics were continually monitored and myocardial infarct size determined. Rats received morphine (0.3 mg/kg), the GSK3β inhibitor SB216763 (0.6 mg/kg), or saline, 10 minutes prior to ischemia. Some rats received the selective HSP90 inhibitors radicicol (0.3 mg/kg) or deoxyspergualin (DSG, 0.6 mg/kg), alone or 5 minutes prior to morphine or SB216763. Morphine reduced myocardial infarct size when compared to control (42 ± 2% versus 60 ± 1%). This protection was abolished by prior treatment with radicicol or DSG (59 ± 1% and 56 ± 2%, respectively). GSK3β inhibition also reduced myocardial infarct size (41 ± 2%), with HSP90 inhibition by radicicol or DSG partially inhibiting the SB216763-induced infarct size reduction (54 ± 3% and 47 ± 1%, respectively). These data suggest that opioid-induced cardioprotection is mediated by HSP90. Part of this protection afforded by HSP90 is downstream of GSK3β, potentially via the HSP-TOM mitochondrial import pathway.

  8. Effect of sample size on the fluid flow through a single fractured granitoid

    Directory of Open Access Journals (Sweden)

    Kunal Kumar Singh

    2016-06-01

    Full Text Available Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, and multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures, along with the behavior of the fluid flow through them at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress–permeability relationship, and it has been demonstrated that the permeability of the fractured rock mass decreases with an increase in sample size. However, most researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameters ranging between 38 mm and 45 cm and a diameter-to-length ratio of 1:2, using different experimental methods. Also, the confining stress, σ3, has been considered to be less than 30 MPa, and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on “laboratory simulation of flow through single fractured granite” was conducted, in which consistent fluid flow experiments were performed on cylindrical samples of granitoids of two different sizes (38 mm and 54 mm in diameter), each containing a “rough walled single fracture”. These experiments were performed under varied confining pressure (σ3 = 5–40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff., and that Q decreases with an increase in σeff. Also, the effects of sample size and fracture roughness do not persist when σeff. ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory

  9. Reference calculation of light propagation between parallel planes of different sizes and sampling rates.

    Science.gov (United States)

    Lobaz, Petr

    2011-01-03

    The article deals with a method of calculation of off-axis light propagation between parallel planes using discretization of the Rayleigh-Sommerfeld integral and its implementation by fast convolution. It analyses zero-padding in case of different plane sizes. In case of memory restrictions, it suggests splitting the calculation into tiles and shows that splitting leads to a faster calculation when plane sizes are a lot different. Next, it suggests how to calculate propagation in case of different sampling rates by splitting planes into interleaved tiles and shows this to be faster than zero-padding and direct calculation. Neither the speedup nor memory-saving method decreases accuracy; the aim of the proposed method is to provide reference data that can be compared to the results of faster and less precise methods.
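
    A minimal sketch of the core operation described above, assuming a simplified Rayleigh-Sommerfeld kernel and equal plane sizes: discretize the kernel and apply a zero-padded (linear) fast convolution. The article's tiling and interleaved-tile refinements for unequal sizes and sampling rates are not reproduced; wavelength, sampling step and distance are assumptions.

```python
# Propagate a sampled source plane to a parallel target plane by discretizing a
# simplified Rayleigh-Sommerfeld impulse response and applying a zero-padded
# linear convolution via FFT (scipy's fftconvolve handles the padding).
import numpy as np
from scipy.signal import fftconvolve

wavelength = 633e-9                 # assumed wavelength [m]
dx = 10e-6                          # assumed sampling step [m]
z = 0.05                            # assumed propagation distance [m]
k = 2 * np.pi / wavelength

n = 256
u_src = np.zeros((n, n), dtype=complex)
u_src[n // 2 - 5:n // 2 + 5, n // 2 - 5:n // 2 + 5] = 1.0   # small square aperture

coords = (np.arange(-n, n) + 0.5) * dx       # kernel support covering the padded extent
X, Y = np.meshgrid(coords, coords)
r = np.sqrt(X**2 + Y**2 + z**2)
kernel = z * np.exp(1j * k * r) / (1j * wavelength * r**2)  # simplified RS-I kernel

u_dst = fftconvolve(u_src, kernel, mode="same") * dx * dx   # zero-padded linear convolution
print(np.abs(u_dst).max())
```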

  10. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and the nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
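
    The sketch below computes two of the effect sizes compared in the study: Cohen's d and a probability-of-superiority estimator in the spirit of the common-language measure; the robust d variants and the simulation design are not reproduced, and the data are made up.

```python
# Two of the effect sizes discussed above: Cohen's d (pooled SD) and a
# nonparametric probability-of-superiority estimate, P(X > Y) + 0.5 * P(X = Y).
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

def prob_superiority(x, y):
    diffs = np.subtract.outer(x, y)            # all pairwise differences
    return np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

x = np.array([5.1, 6.3, 4.8, 7.0, 5.9, 6.5])
y = np.array([4.0, 5.2, 3.9, 4.6, 5.0])
print(f"d = {cohens_d(x, y):.2f}, probability of superiority = {prob_superiority(x, y):.2f}")
```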

  11. Sample-size calculations for multi-group comparison in population pharmacokinetic experiments.

    Science.gov (United States)

    Ogungbenro, Kayode; Aarons, Leon

    2010-01-01

    This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparison detecting the difference in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models that involve only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one compartment intravenous bolus dose model that involved three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and nominal power. Copyright © 2009 John Wiley & Sons, Ltd.

  12. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for stochastic and unstable time series analysis with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, for the next step-ahead forecast, the data window rolls on by adding the most recent prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is efficient: it improves forecasting accuracy, is applicable to limited and unstable data situations, and requires little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
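
    A minimal numpy illustration of the rolling mechanism described above, assuming a plain least-squares AR(2) fit (the authors' specific AR formulation for nonstationary series is not reproduced); the settlement-like series is hypothetical.

```python
# Rolling 1-step-ahead AR forecasting: refit a small AR model on a fixed-length
# window at each step, append the newest prediction and drop the oldest value.
import numpy as np

def fit_ar(window, p=2):
    # ordinary least-squares fit of an AR(p) model with intercept
    X = np.column_stack([np.ones(len(window) - p)] +
                        [window[p - j - 1:len(window) - j - 1] for j in range(p)])
    y = window[p:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def rolling_forecast(series, horizon=5, p=2):
    window = list(series)
    preds = []
    for _ in range(horizon):
        coef = fit_ar(np.array(window), p)
        next_val = coef[0] + coef[1:] @ np.array(window[-1:-p - 1:-1])
        preds.append(float(next_val))
        window = window[1:] + [next_val]       # roll: drop oldest, append prediction
    return preds

settlement = [2.1, 2.4, 2.9, 3.1, 3.6, 3.8, 4.1, 4.3]   # hypothetical small sample
print(rolling_forecast(settlement))
```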

  13. Distance software: design and analysis of distance sampling surveys for estimating population size.

    Science.gov (United States)

    Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon Rb; Marques, Tiago A; Burnham, Kenneth P

    2010-02-01

    1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark-recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods

  14. Investigation of the InAs/GaAs Quantum Dots’ Size: Dependence on the Strain Reducing Layer’s Position

    Directory of Open Access Journals (Sweden)

    Manel Souaf

    2015-07-01

    Full Text Available This work reports on a theoretical and experimental investigation of the impact of the InAs quantum dots (QDs) position with respect to an InGaAs strain reducing layer (SRL). The investigated samples are grown by molecular beam epitaxy and characterized by photoluminescence spectroscopy (PL). The QDs' optical transition energies have been calculated by solving the three-dimensional Schrödinger equation using the finite element method and taking into account the strain induced by the lattice mismatch. We have considered lens-shaped InAs QDs in a pure GaAs matrix, either with an InGaAs strain reducing cap layer or with an underlying layer. The correlation between numerical calculations and PL measurements allowed us to track the evolution of the mean buried QD size with respect to the surrounding matrix composition. The simulations reveal that the realistic size of the buried QDs is less than that experimentally derived from atomic force microscopy observations. Furthermore, the average size is found to be slightly increased for InGaAs-capped QDs and dramatically decreased for QDs with an InGaAs under layer.

  15. Dealing with varying detection probability, unequal sample sizes and clumped distributions in count data.

    Directory of Open Access Journals (Sweden)

    D Johan Kotze

    Full Text Available Temporal variation in the detectability of a species can bias estimates of relative abundance if not handled correctly. For example, when effort varies in space and/or time it becomes necessary to take variation in detectability into account when data are analyzed. We demonstrate the importance of incorporating seasonality into the analysis of data with unequal sample sizes due to lost traps at a particular density of a species. A case study of count data was simulated using a spring-active carabid beetle. Traps were 'lost' randomly during high beetle activity in high abundance sites and during low beetle activity in low abundance sites. Five different models were fitted to datasets with different levels of loss. If sample sizes were unequal and a seasonality variable was not included in models that assumed the number of individuals was log-normally distributed, the models severely under- or overestimated the true effect size. Results did not improve when seasonality and number of trapping days were included in these models as offset terms; the models only performed well when the response variable was specified as following a negative binomial distribution. Finally, if seasonal variation of a species is unknown, which is often the case, seasonality can be added as a free factor, resulting in well-performing negative binomial models. Based on these results we recommend (a) add sampling effort (number of trapping days in our example) to the models as an offset term; (b) if precise information is available on seasonal variation in detectability of a study object, add seasonality to the models as an offset term; (c) if information on seasonal variation in detectability is inadequate, add seasonality as a free factor; and (d) specify the response variable of count data as following a negative binomial or over-dispersed Poisson distribution.
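
    A hedged sketch of recommendations (a)-(d), assuming hypothetical column names: a negative binomial model with trapping effort as a log offset and seasonality as a free factor, fitted with statsmodels on simulated counts.

```python
# Negative binomial GLM with effort as an offset and seasonality as a factor.
# All data and column names are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "count": rng.poisson(5, 60),                        # beetle counts per trap
    "days": rng.integers(7, 15, 60),                    # trapping effort (days)
    "season": np.tile(["early", "mid", "late"], 20),    # seasonality as a free factor
    "habitat": np.repeat(["open", "forest"], 30),       # effect of interest
})

model = smf.glm("count ~ habitat + C(season)", data=df,
                family=sm.families.NegativeBinomial(),
                offset=np.log(df["days"]))
print(model.fit().summary())
```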

  16. A case of liver hemangioma with markedly reduced tumor size after metformin treatment: a case report.

    Science.gov (United States)

    Ono, Minoru; Sawada, Koji; Okumura, Toshikatsu

    2017-02-01

    A 52-year-old man with a 9-year history of hepatic hemangioma was treated with the anti-diabetic drug metformin, resulting in complete remission of the tumor. In 2006, a hemangioma with diameter of 20 × 25 mm was detected incidentally in the liver. The results of imaging studies including ultrasound (US), computed tomography (CT) and magnetic resonance imaging (MRI) were all compatible with that of hepatic hemangioma. The patient consequently underwent imaging annually from 2006 to 2015. The tumor size increased slightly, to 30 × 35 mm in 2012; however, the general tumor characteristics in imaging were not changed. Beginning May 2012, metformin (750 mg/day) was administered because of an increase in blood sugar and hemoglobin A1c levels. After the start of metformin treatment, the tumor size on US gradually decreased. Finally, in October 2015, the tumor was no longer detected. Dynamic CT study also demonstrated markedly reduced tumor size, with a decrease of 2-3 mm in diameter. These results indicate that metformin treatment strongly suppressed cell proliferation in liver hemangioma. The anti-angiogenic effect of metformin was indicated as a possible cause of the reduction in tumor size.

  17. High-dimensional, massive sample-size Cox proportional hazards regression for survival analysis.

    Science.gov (United States)

    Mittal, Sushil; Madigan, David; Burd, Randall S; Suchard, Marc A

    2014-04-01

    Survival analysis endures as an old, yet active research field with applications that spread across many domains. Continuing improvements in data acquisition techniques pose constant challenges in applying existing survival analysis methods to these emerging data sets. In this paper, we present tools for fitting regularized Cox survival analysis models on high-dimensional, massive sample-size (HDMSS) data using a variant of the cyclic coordinate descent optimization technique tailored for the sparsity that HDMSS data often present. Experiments on two real data examples demonstrate that efficient analyses of HDMSS data using these tools result in improved predictive performance and calibration.
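
    The paper's own sparse cyclic coordinate descent implementation is not reproduced here; as a hedged sketch of the general idea, the snippet below fits an elastic-net-penalized Cox model on simulated data with the lifelines library, with all column names and penalty settings assumed.

```python
# Penalized Cox regression on simulated survival data (a stand-in for the
# HDMSS setting; lifelines is used here purely for illustration).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))
hazard = np.exp(0.5 * X[:, 0] - 0.4 * X[:, 1])   # two truly informative covariates
time = rng.exponential(1.0 / hazard)
event = rng.random(n) < 0.8                      # ~80% observed events

df = pd.DataFrame(X, columns=[f"x{i}" for i in range(p)])
df["time"], df["event"] = time, event

cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)   # elastic-net style penalty (assumed settings)
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```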

  18. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

    Energy Technology Data Exchange (ETDEWEB)

    Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)

    2010-01-15

    In this contribution superconducting specimens of YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic-size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated to the microstructure of the specimens and, in particular, to the superconducting intra- and intergranular critical current properties.

  19. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark

    National Research Council Canada - National Science Library

    Adrian Sayers; Michael J Crowther; Andrew Judge; Michael R Whitehouse; Ashley W Blom

    2017-01-01

    ... to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking...

  20. Power and sample size calculation for paired recurrent events data based on robust nonparametric tests.

    Science.gov (United States)

    Su, Pei-Fang; Chung, Chia-Hua; Wang, Yu-Wen; Chi, Yunchan; Chang, Ying-Ju

    2017-05-20

    The purpose of this paper is to develop a formula for calculating the required sample size for paired recurrent events data. The developed formula is based on robust non-parametric tests for comparing the marginal mean function of events between paired samples. This calculation can accommodate the associations among a sequence of paired recurrent event times with a specification of correlated gamma frailty variables for a proportional intensity model. We evaluate the performance of the proposed method with comprehensive simulations including the impacts of paired correlations, homogeneous or nonhomogeneous processes, marginal hazard rates, censoring rate, accrual and follow-up times, as well as the sensitivity analysis for the assumption of the frailty distribution. The use of the formula is also demonstrated using a premature infant study from the neonatal intensive care unit of a tertiary center in southern Taiwan. Copyright © 2017 John Wiley & Sons, Ltd.

  1. The Role of Social Norms in the Portion Size Effect: Reducing Normative Relevance Reduces the Effect of Portion Size on Consumption Decisions

    Science.gov (United States)

    Versluis, Iris; Papies, Esther K.

    2016-01-01

    People typically eat more from large portions of food than from small portions. An explanation that has often been given for this so-called portion size effect is that the portion size acts as a social norm and as such communicates how much is appropriate to eat. In this paper, we tested this explanation by examining whether manipulating the relevance of the portion size as a social norm changes the portion size effect, as assessed by prospective consumption decisions. We conducted one pilot experiment and one full experiment in which participants respectively indicated how much they would eat or serve themselves from a given amount of different foods. In the pilot (N = 63), we manipulated normative relevance by allegedly basing the portion size on the behavior of either students of the own university (in-group) or of another university (out-group). In the main experiment (N = 321), we told participants that either a minority or majority of people similar to them approved of the portion size. Results show that in both experiments, participants expected to serve themselves and to eat more from larger than from smaller portions. As expected, however, the portion size effect was less pronounced when the reference portions were allegedly based on the behavior of an out-group (pilot) or approved only by a minority (main experiment). These findings suggest that the portion size indeed provides normative information, because participants were less influenced by it if it communicated the behaviors or values of a less relevant social group. In addition, in the main experiment, the relation between portion size and the expected amount served was partially mediated by the amount that was considered appropriate, suggesting that concerns about eating an appropriate amount indeed play a role in the portion size effect. However, since the portion size effect was weakened but not eliminated by the normative relevance manipulations and since mediation was only partial, other

  2. Sample size for regression analyses of theory of planned behaviour studies: case of prescribing in general practice.

    Science.gov (United States)

    Rashidian, Arash; Miles, Jeremy; Russell, Daphne; Russell, Ian

    2006-11-01

    Interest has been growing in the use of the theory of planned behaviour (TPB) in health services research. Sample sizes in published TPB studies range from less than 50 to more than 750, without sample size calculations. We estimate the sample size for a multi-stage random survey of prescribing intention and actual prescribing for asthma in British general practice. To our knowledge, this is the first systematic attempt to determine sample size for a TPB survey. We use two different approaches: reported values of regression models' goodness-of-fit (the lambda method) and zero-order correlations (the variance inflation factor or VIF method). Intra-cluster correlation coefficient (ICC) is estimated and a socioeconomic variable is used for stratification. We perform sensitivity analysis to estimate the effects of our decisions on final sample size. The VIF method is more sensitive to the requirements of a TPB study. Given a correlation of .25 between intention and behaviour, and of .4 between intention and perceived behavioural control, the proposed sample size is 148. We estimate the ICC for asthma prescribing to be around 0.07. If 10 general practitioners were sampled per cluster, the sample size would be 242. It is feasible to perform sophisticated sample size calculations for a TPB study. The VIF is the appropriate method. Our approach can be used with adjustments in other settings and for other regression models.
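
    The clustering adjustment quoted in the abstract can be reproduced as a quick arithmetic check: inflating the VIF-based sample size of 148 by the design effect 1 + (m - 1) * ICC, with 10 practitioners per cluster and ICC = 0.07, gives 242.

```python
# Reproducing the clustering step quoted above (the VIF-based n of 148 is taken
# as given; its derivation from the correlations is not reproduced here).
import math

n_vif = 148        # sample size from the VIF method (from the abstract)
icc = 0.07         # estimated intra-cluster correlation for asthma prescribing
m = 10             # general practitioners sampled per cluster

design_effect = 1 + (m - 1) * icc
n_clustered = math.ceil(n_vif * design_effect)
print(design_effect, n_clustered)    # 1.63 -> 242
```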

  3. Phenomena of insulin peak fronting in size exclusion chromatography and strategies to reduce fronting.

    Science.gov (United States)

    Yu, Chi-Ming; Mun, Sungyong; Wang, Nien-Hwa Linda

    2008-05-23

    Insulin peak fronting in size exclusion chromatography (SEC) results in more than 10% yield loss in the production of insulin. The goal of this study is to understand the mechanisms of peak fronting and to develop strategies to reduce fronting and increase insulin yield. Chromatography experiments ruled out pressure surge, viscous fingering, and adsorption as the cause for peak fronting. Theoretical analysis based on a general rate model indicated that reversible dimerization is the major cause for peak fronting and reducing the dimerization equilibrium constant is the most effective method for reducing fronting. Two strategies were developed and tested to reduce the degree of dimer formation. The first strategy was to use 0.1N acetic acid as the presaturant and eluent. The second strategy was to use 0.8 or 2.8N acetic acid in 20vol.% denatured ethanol as the mobile phase. The experimental results showed that both strategies can reduce insulin peak fronting in SEC, maintain desired product purity, and increase insulin yield to higher than 98%.

  4. Can reduced size of metals induce hydrogen absorption: ZrAl{sub 2} case

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, I., E-mail: izi@bgu.ac.il [Department of Nuclear Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105 (Israel); Deledda, S. [Physics Department, Institute for Energy Technology, P.O. Box 40, NO-2027 Kjeller (Norway); Bereznitsky, M. [Department of Nuclear Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105 (Israel); Yeheskel, O. [Nuclear Research Center - Negev, P.O. Box 9001, Beer Sheva 84190 (Israel); Filipek, S.M. [Institute of Physical Chemistry, Polish Academy of Sciences, 01-224 Warsaw (Poland); Mogilyanski, D.; Kimmel, G. [Institute for Applied Research, P.O. Box 653, Ben-Gurion University of the Negev, Beer Sheva 84105 (Israel); Hauback, B.C. [Physics Department, Institute for Energy Technology, P.O. Box 40, NO-2027 Kjeller (Norway)

    2011-09-15

    Research highlights: > 15 nm particles of ZrAl{sub 2} and Zr(Al{sub 0.5}Co{sub 0.5}){sub 2} are obtained by attrition and cryomilling. > ZrAl{sub 2} nanoparticles remain inert to hydrogen absorption up to pressure of {approx}2 GPa. > Zr(Al{sub 0.5}Co{sub 0.5}){sub 2} nanoparticles exhibit reduced hydrogen absorption as compared to the corresponding bulk compounds. - Abstract: The hydrogen absorption ability of the non-absorbing Al-rich ZrAl{sub 2} compound was examined after reducing its particle size to the nanometer regime. The hydrogen abstinence of bulk ZrAl{sub 2} has been previously related to its excessive elastic shear stiffening. The particle size of ZrAl{sub 2} was reduced by attrition milling and cryomilling. The minimal average particle size was estimated from powder X-ray diffraction analysis to be in the range of 10-20 nm. The hydrogen absorption of the milled compounds was measured in different hydrogenation systems at hydrogen pressures between {approx}6 MPa and {approx}2 GPa. In all the cases the hydrogen absorption was negligible. In addition, there was a reduction of the hydrogen absorption capacity of nanosized Zr(Al{sub 0.5}Co{sub 0.5}){sub 2} as compared to the corresponding bulk compound at the same conditions. We suggest, in view of our and other results, that no significant improvement of the thermodynamics (unlike the kinetics) of the hydrogen absorption can be achieved via the nanoparticle avenue.

  5. Grain dissection as a grain size reducing mechanism during ice microdynamics

    Science.gov (United States)

    Steinbach, Florian; Kuiper, Ernst N.; Eichler, Jan; Bons, Paul D.; Drury, Martin R.; Griera, Albert; Pennock, Gill M.; Weikusat, Ilka

    2017-04-01

    Ice sheets are valuable paleo-climate archives, but can lose their integrity by ice flow. An understanding of the microdynamic mechanisms controlling the flow of ice is essential when assessing climatic and environmental developments related to ice sheets and glaciers. For instance, the development of a consistent mechanistic grain size law would support larger scale ice flow models. Recent research made significant progress in numerically modelling deformation and recrystallisation mechanisms in the polycrystalline ice and ice-air aggregate (Llorens et al., 2016a,b; Steinbach et al., 2016). The numerical setup assumed grain size reduction is achieved by the progressive transformation of subgrain boundaries into new high angle grain boundaries splitting an existing grain. This mechanism is usually termed polygonisation. Analogue experiments suggested, that strain induced grain boundary migration can cause bulges to migrate through the whole of a grain separating one region of the grain from another (Jessell, 1986; Urai, 1987). This mechanism of grain dissection could provide an alternative grain size reducing mechanism, but has not yet been observed during ice microdynamics. In this contribution, we present results using an updated numerical approach allowing for grain dissection. The approach is based on coupling the full field theory crystal visco-plasticity code (VPFFT) of Lebensohn (2001) to the multi-process modelling platform Elle (Bons et al., 2008). VPFFT predicts the mechanical fields resulting from short strain increments, dynamic recrystallisation process are implemented in Elle. The novel approach includes improvements to allow for grain dissection, which was topologically impossible during earlier simulations. The simulations are supported by microstructural observations from NEEM (North Greenland Eemian Ice Drilling) ice core. Mappings of c-axis orientations using the automatic fabric analyser and full crystallographic orientations using electron

  6. Alternaria and Fusarium in Norwegian grains of reduced quality - a matched pair sample study

    DEFF Research Database (Denmark)

    Kosiak, B.; Torp, M.; Skjerve, E.

    2004-01-01

    The occurrence and geographic distribution of species belonging to the genera Alternaria and Fusarium in grains of reduced and of acceptable quality were studied post-harvest in 1997 and 1998. A total of 260 grain samples of wheat, barley and oats was analysed. The distribution of Alternaria...... and Fusarium spp. varied significantly in samples of reduced quality compared with acceptable samples. Alternaria spp. dominated in the acceptable samples with A. infectoria group as the most frequently isolated and most abundant species group of this genus while Fusarium spp. dominated in samples of reduced...... of reduced quality. The results indicated a negative interaction between F. graminearum and Alternaria spp. as well as between F. graminearum and other Fusarium spp....

  7. Reducing Data Size Inequality during Finite Element Model Separation into Superelements

    Directory of Open Access Journals (Sweden)

    Yu. V. Berchun

    2015-01-01

    Full Text Available The work considers two methods of automatic separation of a finite element model into superelements to decrease the computing resources demanded when solving linear-elastic problems of solid mechanics. The first method is an algorithm to separate a finite element grid into simply connected sub-regions according to a set specific number of nodes in the superelement. The second method is based on generating a superelement with a set specific data size of the coefficient matrix of the system of balance equations for the internal nodes, which are eliminated during the superelement transformation. Both methods are based on graph theory. The data size of the coefficient matrix is assessed on the assumption that the further solution of the task will use Cholesky's method. Before assessment of data size, a Cuthill-McKee algorithm renumbers the internal nodes of a superelement both to decrease the profile width of the corresponding matrix of the system of balance equations and to reduce the number of nonzero elements. Test examples compare the two methods in terms of the inequality of the generated superelement separation, by number of nodes and by data size of the coefficient matrix of the system of balance equations for the internal nodes. It is shown that the offered approach provides smaller inequality of the data size of superelement matrices, while slightly increasing inequality by the number of nodes.
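
    A small sketch of the renumbering step mentioned above, assuming a random sparse symmetric matrix as a stand-in for an assembled stiffness matrix: reorder the nodes with (reverse) Cuthill-McKee via scipy to reduce the bandwidth before a Cholesky-type factorization. The graph-based superelement partitioning itself is not reproduced.

```python
# Reverse Cuthill-McKee renumbering to reduce the bandwidth/profile of a sparse
# symmetric matrix prior to Cholesky factorization.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(a):
    rows, cols = a.nonzero()
    return int(np.max(np.abs(rows - cols)))

n = 200
rng = np.random.default_rng(0)
# random sparse symmetric pattern standing in for an assembled FE matrix
a = csr_matrix((np.ones(800), (rng.integers(0, n, 800), rng.integers(0, n, 800))),
               shape=(n, n))
a = (a + a.T).tocsr()

perm = reverse_cuthill_mckee(a, symmetric_mode=True)
a_perm = a[perm][:, perm]
print("bandwidth before:", bandwidth(a), "after:", bandwidth(a_perm))
```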

  8. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 {mu}m aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  9. Sample size calculations for micro-randomized trials in mHealth.

    Science.gov (United States)

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A

    2016-05-30

    The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Power and sample size determination for measures of environmental impact in aquatic systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammann, L.P. [Univ. of Texas, Richardson, TX (United States); Dickson, K.L.; Waller, W.T.; Kennedy, J.H. [Univ. of North Texas, Denton, TX (United States); Mayer, F.L.; Lewis, M. [Environmental Protection Agency, Gulf Breeze, FL (United States)

    1994-12-31

    To effectively monitor the status of various freshwater and estuarine ecological systems, it is necessary to understand the statistical power associated with the measures of ecological health that are appropriate for each system. These power functions can then be used to determine sample sizes that are required to attain targeted change detection likelihoods. A number of different measures have been proposed and are used for such monitoring. These include diversity and evenness indices, richness, and organism counts. Power functions can be estimated when preliminary or historical data are available for the region and system of interest. Unfortunately, there are a number of problems associated with the computation of power functions and sample sizes for these measures. These problems include the presence of outliers, co-linearity among the variables, and non-normality of count data. The problems, and appropriate methods to compute the power functions, for each of the commonly employed measures of ecological health will be discussed. In addition, the relationship between power and the level of taxonomic classification used to compute the measures of diversity, evenness, richness, and organism counts will be discussed. Methods for computation of the power functions will be illustrated using data sets from previous EPA studies.

  11. On tests of treatment-covariate interactions: An illustration of appropriate power and sample size calculations.

    Science.gov (United States)

    Shieh, Gwowen

    2017-01-01

    The appraisals of treatment-covariate interaction have theoretical and substantial implications in all scientific fields. Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA. A fundamental assumption of ANCOVA is that the regression slopes associating the response variable with the covariate variable are presumed constant across treatment groups. The validity of homogeneous regression slopes accordingly is the most essential concern in traditional ANCOVA and inevitably determines the practical usefulness of research findings. In view of the limited results in current literature, this article aims to present power and sample size procedures for tests of heterogeneity between two regression slopes with particular emphasis on the stochastic feature of covariate variables. Theoretical implications and numerical investigations are presented to explicate the utility and advantage for accommodating covariate properties. The exact approach has the distinct feature of accommodating the full distributional properties of normal covariates whereas the simplified approximate methods only utilize the partial information of covariate variances. According to the overall accuracy and robustness, the exact approach is recommended over the approximate methods as a reliable tool in practical applications. The suggested power and sample size calculations can be implemented with the supplemental SAS and R programs.

  12. On tests of treatment-covariate interactions: An illustration of appropriate power and sample size calculations.

    Directory of Open Access Journals (Sweden)

    Gwowen Shieh

    Full Text Available The appraisals of treatment-covariate interaction have theoretical and substantial implications in all scientific fields. Methodologically, the detection of interaction between categorical treatment levels and continuous covariate variables is analogous to the homogeneity of regression slopes test in the context of ANCOVA. A fundamental assumption of ANCOVA is that the regression slopes associating the response variable with the covariate variable are presumed constant across treatment groups. The validity of homogeneous regression slopes accordingly is the most essential concern in traditional ANCOVA and inevitably determines the practical usefulness of research findings. In view of the limited results in current literature, this article aims to present power and sample size procedures for tests of heterogeneity between two regression slopes with particular emphasis on the stochastic feature of covariate variables. Theoretical implications and numerical investigations are presented to explicate the utility and advantage for accommodating covariate properties. The exact approach has the distinct feature of accommodating the full distributional properties of normal covariates whereas the simplified approximate methods only utilize the partial information of covariate variances. According to the overall accuracy and robustness, the exact approach is recommended over the approximate methods as a reliable tool in practical applications. The suggested power and sample size calculations can be implemented with the supplemental SAS and R programs.

  13. Sample Size Considerations of Prediction-Validation Methods in High-Dimensional Data for Survival Outcomes

    Science.gov (United States)

    Pang, Herbert; Jung, Sin-Ho

    2013-01-01

    A variety of prediction methods are used to relate high-dimensional genome data with a clinical outcome using a prediction model. Once a prediction model is developed from a data set, it should be validated using a resampling method or an independent data set. Although the existing prediction methods have been intensively evaluated by many investigators, there has not been a comprehensive study investigating the performance of the validation methods, especially with a survival clinical outcome. Understanding the properties of the various validation methods can allow researchers to perform more powerful validations while controlling for type I error. In addition, sample size calculation strategy based on these validation methods is lacking. We conduct extensive simulations to examine the statistical properties of these validation strategies. In both simulations and a real data example, we have found that 10-fold cross-validation with permutation gave the best power while controlling type I error close to the nominal level. Based on this, we have also developed a sample size calculation method that will be used to design a validation study with a user-chosen combination of prediction. Microarray and genome-wide association studies data are used as illustrations. The power calculation method in this presentation can be used for the design of any biomedical studies involving high-dimensional data and survival outcomes. PMID:23471879

  14. A comparison of different estimation methods for simulation-based sample size determination in longitudinal studies

    Science.gov (United States)

    Bahçecitapar, Melike Kaya

    2017-07-01

    Determining the sample size necessary for correct results is a crucial step in the design of longitudinal studies. Simulation-based statistical power calculation is a flexible approach to determining the number of subjects and repeated measures of longitudinal studies, especially in complex designs. Several papers have provided sample size/statistical power calculations for longitudinal studies incorporating data analysis by linear mixed effects models (LMMs). In this study, different estimation methods (methods based on maximum likelihood (ML) and restricted ML) with different iterative algorithms (quasi-Newton and ridge-stabilized Newton-Raphson) in fitting LMMs to generated longitudinal data for simulation-based power calculation are compared. This study examines the statistical power of F-test statistics for the parameter representing the difference in responses over time between two treatment groups in the LMM with a longitudinal covariate. The most common procedures in SAS, such as PROC GLIMMIX using the quasi-Newton algorithm and PROC MIXED using the ridge-stabilized algorithm, are used for analyzing the generated longitudinal data in the simulation. It is seen that both procedures present similar results. Moreover, it is found that the magnitude of the parameter of interest in the model used for the simulations substantially affects the statistical power calculations in both procedures.
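
    As a rough analogue of the SAS procedures compared in the record, the following Python sketch estimates power for the group-by-time effect in a random-intercept linear mixed model fitted with statsmodels. It is a simplified illustration, not the authors' setup; the effect size, variance components, and design values are hypothetical.

        # Simulation-based power sketch for a group-by-time effect in a linear mixed
        # model with a random intercept per subject. All parameter values are hypothetical.
        import warnings
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        warnings.filterwarnings("ignore")   # suppress occasional convergence warnings
        rng = np.random.default_rng(7)

        def lmm_power(n_subjects, n_times=4, effect=0.3, sd_subj=1.0, sd_err=1.0,
                      n_sim=200, alpha=0.05):
            hits = 0
            for _ in range(n_sim):
                subj = np.repeat(np.arange(n_subjects), n_times)
                time = np.tile(np.arange(n_times), n_subjects)
                group = np.repeat(rng.permutation(np.repeat([0, 1], n_subjects // 2)), n_times)
                b_subj = np.repeat(rng.normal(0, sd_subj, n_subjects), n_times)
                y = 1.0 + 0.2 * time + effect * group * time + b_subj + rng.normal(0, sd_err, subj.size)
                df = pd.DataFrame({"y": y, "time": time, "group": group, "subj": subj})
                fit = smf.mixedlm("y ~ time * group", df, groups=df["subj"]).fit(reml=True)
                if fit.pvalues["time:group"] < alpha:   # test of the group-by-time slope difference
                    hits += 1
            return hits / n_sim

        print(lmm_power(n_subjects=40))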

  15. Relationship between the size of the samples and the interpretation of the mercury intrusion results of an artificial sandstone

    NARCIS (Netherlands)

    Dong, H.; Zhang, H.; Zuo, Y.; Gao, P.; Ye, G.

    2018-01-01

    Mercury intrusion porosimetry (MIP) measurements are widely used to determine pore throat size distribution (PSD) curves of porous materials. The pore throat size of porous materials has been used to estimate their compressive strength and air permeability. However, the effect of sample size on

  16. Sampling surface particle size distributions and stability analysis of deep channel in the Pearl River Estuary

    Science.gov (United States)

    Feng, Hao-chuan; Zhang, Wei; Zhu, Yu-liang; Lei, Zhi-yi; Ji, Xiao-mei

    2017-06-01

    Particle size distributions (PSDs) of bottom sediments in a coastal zone are generally multimodal due to the complexity of the dynamic environment. In this paper, bottom sediments along the deep channel of the Pearl River Estuary (PRE) are used to understand the multimodal PSDs' characteristics and the corresponding depositional environment. The results of curve-fitting analysis indicate that the near-bottom sediments in the deep channel generally have a bimodal distribution with a fine component and a relatively coarse component. The particle size distribution of bimodal sediment samples can be expressed as the sum of two lognormal functions and the parameters for each component can be determined. At each station of the PRE, the fine component makes up less volume of the sediments and is relatively poorly sorted. The relatively coarse component, which is the major component of the sediments, is even more poorly sorted. The interrelations between the dynamics and particle size of the bottom sediment in the deep channel of the PRE have also been investigated by the field measurement and simulated data. The critical shear velocity and the shear velocity are calculated to study the stability of the deep channel. The results indicate that the critical shear velocity has a similar distribution over large part of the deep channel due to the similar particle size distribution of sediments. Based on a comparison between the critical shear velocities derived from sedimentary parameters and the shear velocities obtained by tidal currents, it is likely that the depositional area is mainly distributed in the northern part of the channel, while the southern part of the deep channel has to face higher erosion risk.
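
    The decomposition described above, expressing a bimodal particle size distribution as the sum of two lognormal functions, can be sketched as a nonlinear least-squares fit. The Python example below uses synthetic data and hypothetical starting values rather than the PRE measurements.

        # Fit a bimodal particle size distribution as the sum of two lognormal
        # components. The synthetic "measured" curve and starting values are hypothetical.
        import numpy as np
        from scipy.optimize import curve_fit

        def lognormal(d, a, mu, sigma):
            """One lognormal component evaluated at grain sizes d (micrometres)."""
            return a / (d * sigma * np.sqrt(2 * np.pi)) * np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2))

        def bimodal(d, a1, mu1, s1, a2, mu2, s2):
            return lognormal(d, a1, mu1, s1) + lognormal(d, a2, mu2, s2)

        # Synthetic volume-frequency curve: a fine and a coarser component plus noise.
        d = np.logspace(0, 3, 60)                       # 1 to 1000 micrometres
        true = bimodal(d, 0.3, np.log(8), 0.5, 0.7, np.log(120), 0.8)
        obs = true + np.random.default_rng(0).normal(0, 0.0005, d.size)

        p0 = [0.2, np.log(10), 0.4, 0.8, np.log(100), 0.7]   # initial guesses
        popt, _ = curve_fit(bimodal, d, obs, p0=p0, maxfev=10000)
        print("fitted (a, mu, sigma) per component:", np.round(popt, 3))

    The fitted amplitudes, log-means, and log-standard deviations then characterize the fine and coarse components separately, in the spirit of the curve-fitting analysis described in the record.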

  17. Platelet function investigation by flow cytometry: Sample volume, needle size, and reference intervals.

    Science.gov (United States)

    Pedersen, Oliver Heidmann; Nissen, Peter H; Hvas, Anne-Mette

    2017-09-29

    Flow cytometry is an increasingly used method for platelet function analysis because it has some important advantages compared with other platelet function tests. Flow cytometric platelet function analyses only require a small sample volume (3.5 mL); however, to expand the field of applications, e.g., for platelet function analysis in children, even smaller volumes are needed. Platelets are easily activated, and the size of the needle for blood sampling might be of importance for the pre-activation of the platelets. Moreover, to use flow cytometry for investigation of platelet function in clinical practice, a reference interval is warranted. The aims of this work were 1) to determine if small volumes of whole blood can be used without influencing the results, 2) to examine the pre-activation of platelets with respect to needle size, and 3) to establish reference intervals for flow cytometric platelet function assays. To examine the influence of sample volume, blood was collected from 20 healthy individuals in 1.0 mL, 1.8 mL, and 3.5 mL tubes. To examine the influence of the needle size on pre-activation, blood was drawn from another 13 healthy individuals with both a 19- and 21-gauge needle. For the reference interval study, 78 healthy adults were included. The flow cytometric analyses were performed on a NAVIOS flow cytometer (Beckman Coulter, Miami, Florida) investigating the following activation-dependent markers on the platelet surface: bound fibrinogen, CD63, and P-selectin (CD62p) after activation with arachidonic acid, ristocetin, adenosine diphosphate, thrombin-receptor-activating-peptide, and collagen. The study showed that a blood volume as low as 1.0 mL can be used for platelet function analysis by flow cytometry and that both a 19- and 21-gauge needle can be used for blood sampling. In addition, reference intervals for platelet function analyses by flow cytometry were established.

  18. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…
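
    For readers who want to explore this numerically, the sketch below estimates power by simulation for Welch's test under unequal variances. It uses the two-group special case (Welch's t-test in SciPy) rather than the k-group heteroscedastic Welch procedure treated in the article, and all means, standard deviations, and sample sizes are hypothetical.

        # Simulation sketch: power of Welch's test (two-group special case) under
        # unequal variances, and a search for the smallest n reaching 80% power.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def welch_power(n1, n2, mu1, mu2, sd1, sd2, n_sim=5000, alpha=0.05):
            reject = 0
            for _ in range(n_sim):
                a = rng.normal(mu1, sd1, n1)
                b = rng.normal(mu2, sd2, n2)
                _, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
                reject += p < alpha
            return reject / n_sim

        for n in range(10, 101, 10):
            power = welch_power(n, n, mu1=0.0, mu2=0.5, sd1=1.0, sd2=2.0)
            print(n, round(power, 3))
            if power >= 0.80:
                break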

  19. Exploring Alternative Test Form Linking Designs with Modified Equating Sample Size and Anchor Test Length. Research Report. ETS RR-13-02

    Science.gov (United States)

    Wang, Lin; Qian, Jiahe; Lee, Yi-Hsuan

    2013-01-01

    The purpose of this study was to evaluate the combined effects of reduced equating sample size and shortened anchor test length on item response theory (IRT)-based linking and equating results. Data from two independent operational forms of a large-scale testing program were used to establish the baseline results for evaluating the results from…

  20. Is there a critical endometrioma size associated with reduced ovarian responsiveness in assisted reproduction techniques?

    Science.gov (United States)

    Coccia, Maria Elisabetta; Rizzello, Francesca; Barone, Stefano; Pinelli, Sara; Rapalini, Erika; Parri, Cristiana; Caracciolo, Domenico; Papageorgiou, Savvas; Cima, Gianpaolo; Gandini, Loredana

    2014-08-01

    This study investigated the relationships between ovarian endometrioma size, ovarian responsiveness and the number of retrieved oocytes following ovarian stimulation. A prospective study was conducted in a public clinical assisted reproduction centre. A total of 64 infertile women with monolateral endometriomas undergoing IVF or intracytoplasmic sperm injection were included in the study. The total number of follicles, number of follicles ≥ 16 mm and number of oocytes retrieved of ovaries containing endometrioma and normal ovaries were compared. Multivariate linear regression was used to assess whether number of follicles and collected oocytes varied by endometrioma size, age, basal FSH concentration. Significantly lower numbers of follicles ≥ 16 mm (P = 0.024) and oocytes retrieved (P = 0.001) in the ovaries containing endometrioma were observed. In patients with endometriomas ≥ 30 mm, endometrioma size was the most influential contributor to the total number of follicles and oocytes retrieved. Ovarian endometriomas result in reduced response to ovarian stimulation, compared with the response of the contralateral normal ovary in the same individual. In case of endometriomas Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  1. Chronic administration of OB protein decreases food intake by selectively reducing meal size in male rats.

    Science.gov (United States)

    Kahler, A; Geary, N; Eckel, L A; Campfield, L A; Smith, F J; Langhans, W

    1998-07-01

    The potent hypophagic effect of OB protein (OB) is well established, but the mechanism of this effect is largely unknown. We investigated the effects of chronic administration of a novel modified recombinant human OB (Mod-OB) with a prolonged half-life (>48 h) on ad libitum food intake, spontaneous meal patterns, and body weight in 24 adult, male Sprague-Dawley rats (body weight at study onset: 292 g). Single daily subcutaneous injections of Mod-OB (4 mg/kg daily) for 8 consecutive days significantly reduced ad libitum food intake compared with vehicle injections from injection day 3 through postinjection day 3. Mod-OB-injected rats ate between 4.5 and 7.1 g (or 13-20%) per day less than controls, with the reduction primarily occurring during the dark period. Body weight gain was significantly decreased in response to Mod-OB from injection day 8 until postinjection day 4, with a maximum difference of 24 g on postinjection day 3. The reduction of food intake by Mod-OB was mainly due to a 21-34% decrease in nocturnal spontaneous meal size. There was no significant effect of Mod-OB on nocturnal meal frequency or duration. Mod-OB also did not reliably affect the size, duration, or frequency of diurnal meals. Mod-OB-injected rats displayed no compensatory hyperphagia after the injection period. These results indicate that chronically administered OB selectively affects the mechanisms controlling meal size in male rats.

  2. Performance of Disk Mill Type Mechanical Grinder for Size Reducing Process of Robusta Roasted Beans

    Directory of Open Access Journals (Sweden)

    Sri Mulato

    2006-12-01

    Full Text Available One of the important steps in secondary coffee processing that influences final product quality, such as consistency and uniformity, is the milling process. Indonesian smallholders have usually used a "lumpang" to mill roasted coffee beans into coffee powder, which results in a product that is neither uniform nor consistent, and in low productivity. Milling of roasted coffee beans can instead be done with a disk mill type mechanical grinder of the kind smallholders already use for milling several cereals. The Indonesian Coffee and Cocoa Research Institute has developed a disk mill type grinding machine for milling roasted coffee beans. The objective of this research was to determine the performance of the disk mill type grinding machine for size reduction of Robusta roasted beans across several dried-bean sizes and roasting levels. The Robusta dried beans, obtained by the dry processing method, had 13-14% moisture content (wet basis), 680-685 kg/m3 density, and were classified into 3 size levels. The results showed that the disk mill type grinding machine could be used for milling Robusta roasted beans. The machine had a capacity of 31-54 kg/h at 5,310-5,610 rpm axle rotation, depending on roasting level. Other technical parameters were 91-98% process efficiency, 19-31 ml/kg fuel consumption, 0.3-1% slip, 50-55% of particles with a diameter smaller than 230 mesh and 38-44% of particles with a diameter larger than 100 mesh, a 32-38% increase in lightness, a 0.6-12.6% decrease in density, and a coffee powder solubility of 28-30%. The milling cost per kilogram of light-roast Robusta roasted beans at a capacity of 30 kg/hour was Rp 362.9. Key words: Coffee roasted, Robusta, disk mill, mechanical grinder, size reduction.

  3. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  4. Impact of metric and sample size on determining malaria hotspot boundaries.

    Science.gov (United States)

    Stresman, Gillian H; Giorgi, Emanuele; Baidjoe, Amrish; Knight, Phil; Odongo, Wycliffe; Owaga, Chrispin; Shagari, Shehu; Makori, Euniah; Stevenson, Jennifer; Drakeley, Chris; Cox, Jonathan; Bousema, Teun; Diggle, Peter J

    2017-04-12

    The spatial heterogeneity of malaria suggests that interventions may be targeted for maximum impact. It is unclear to what extent different metrics lead to consistent delineation of hotspot boundaries. Using data from a large community-based malaria survey in the western Kenyan highlands, we assessed the agreement between a model-based geostatistical (MBG) approach to detect hotspots using Plasmodium falciparum parasite prevalence and serological evidence for exposure. Malaria transmission was widespread and highly heterogeneous with one third of the total population living in hotspots regardless of metric tested. Moderate agreement (Kappa = 0.424) was observed between hotspots defined based on parasite prevalence by polymerase chain reaction (PCR)- and the prevalence of antibodies to two P. falciparum antigens (MSP-1, AMA-1). While numerous biologically plausible hotspots were identified, their detection strongly relied on the proportion of the population sampled. When only 3% of the population was sampled, no PCR derived hotspots were reliably detected and at least 21% of the population was needed for reliable results. Similar results were observed for hotspots of seroprevalence. Hotspot boundaries are driven by the malaria diagnostic and sample size used to inform the model. These findings warn against the simplistic use of spatial analysis on available data to target malaria interventions in areas where hotspot boundaries are uncertain.

  5. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
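
    The undersampling argument can be reproduced in a few lines: simulate heavy-tailed rainfall intensities, then compare empirical (order-statistic) and parametric high quantiles across sample sizes. The sketch below uses SciPy's maximum-likelihood GPD fit rather than the L-moment fit used in the paper, and the GPD parameters are hypothetical.

        # Empirical vs. parametric (GPD) estimates of a high quantile as a function
        # of sample size. The empirical estimate is bounded by the sample, so it
        # systematically underestimates in small samples; the fitted GPD is not.
        import numpy as np
        from scipy.stats import genpareto

        np.random.seed(42)
        shape, scale = 0.1, 5.0                          # hypothetical intensity distribution
        true_q999 = genpareto.ppf(0.999, shape, scale=scale)

        for n in (50, 200, 1000):
            emp, par = [], []
            for _ in range(500):
                x = genpareto.rvs(shape, scale=scale, size=n)
                emp.append(np.quantile(x, 0.999))                 # order-statistic estimate
                c, loc, sc = genpareto.fit(x, floc=0)             # parametric (MLE) estimate
                par.append(genpareto.ppf(0.999, c, loc=loc, scale=sc))
            print(f"n={n:5d}  true={true_q999:6.1f}  "
                  f"empirical mean={np.mean(emp):6.1f}  GPD mean={np.mean(par):6.1f}")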

  6. Mixed modeling and sample size calculations for identifying housekeeping genes.

    Science.gov (United States)

    Dai, Hongying; Charnigo, Richard; Vyhlidal, Carrie A; Jones, Bridgette L; Bhandary, Madhusudan

    2013-08-15

    Normalization of gene expression data using internal control genes that have biologically stable expression levels is an important process for analyzing reverse transcription polymerase chain reaction data. We propose a three-way linear mixed-effects model to select optimal housekeeping genes. The mixed-effects model can accommodate multiple continuous and/or categorical variables with sample random effects, gene fixed effects, systematic effects, and gene by systematic effect interactions. We propose using the intraclass correlation coefficient among gene expression levels as the stability measure to select housekeeping genes that have low within-sample variation. Global hypothesis testing is proposed to ensure that selected housekeeping genes are free of systematic effects or gene by systematic effect interactions. A gene combination with the highest lower bound of 95% confidence interval for intraclass correlation coefficient and no significant systematic effects is selected for normalization. Sample size calculation based on the estimation accuracy of the stability measure is offered to help practitioners design experiments to identify housekeeping genes. We compare our methods with geNorm and NormFinder by using three case studies. A free software package written in SAS (Cary, NC, U.S.A.) is available at http://d.web.umkc.edu/daih under software tab. Copyright © 2013 John Wiley & Sons, Ltd.
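
    As a simplified illustration of the stability measure, the sketch below estimates the intraclass correlation coefficient of one candidate gene from a random-intercept mixed model with sample as the random effect. It is not the authors' three-way model or SAS package, and the data layout and values are hypothetical.

        # ICC of a candidate housekeeping gene from a random-intercept mixed model:
        # between-sample variance relative to total variance. Hypothetical data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)

        # Hypothetical expression data: 20 samples, 3 technical replicates each.
        samples = np.repeat(np.arange(20), 3)
        expr = np.repeat(rng.normal(25, 2.0, 20), 3) + rng.normal(0, 0.3, samples.size)
        df = pd.DataFrame({"expr": expr, "sample": samples})

        fit = smf.mixedlm("expr ~ 1", df, groups=df["sample"]).fit()
        var_between = float(fit.cov_re.iloc[0, 0])   # between-sample variance
        var_within = fit.scale                       # residual (within-sample) variance
        icc = var_between / (var_between + var_within)
        print(f"ICC = {icc:.3f}")   # high ICC, i.e. low within-sample variation

    Repeating the calculation per candidate gene and ranking by ICC (or by the lower confidence bound of the ICC, as in the record) mimics the selection step described above.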

  7. What about N? A methodological study of sample-size reporting in focus group studies

    Science.gov (United States)

    2011-01-01

    Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and

  8. What about N? A methodological study of sample-size reporting in focus group studies.

    Science.gov (United States)

    Carlsen, Benedicte; Glenton, Claire

    2011-03-11

    Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these

  9. What about N? A methodological study of sample-size reporting in focus group studies

    Directory of Open Access Journals (Sweden)

    Glenton Claire

    2011-03-01

    Full Text Available Abstract Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method

  10. Estimates of genetic differentiation measured by F(ST) do not necessarily require large sample sizes when using many SNP markers.

    Directory of Open Access Journals (Sweden)

    Eva-Maria Willing

    Full Text Available Population genetic studies provide insights into the evolutionary processes that influence the distribution of sequence variants within and among wild populations. F(ST) is among the most widely used measures for genetic differentiation and plays a central role in ecological and evolutionary genetic studies. It is commonly thought that large sample sizes are required in order to precisely infer F(ST) and that small sample sizes lead to overestimation of genetic differentiation. Until recently, studies in ecological model organisms incorporated a limited number of genetic markers, but since the emergence of next generation sequencing, the panel size of genetic markers available even in non-reference organisms has rapidly increased. In this study we examine whether a large number of genetic markers can substitute for small sample sizes when estimating F(ST). We tested the behavior of three different estimators that infer F(ST) and that are commonly used in population genetic studies. By simulating populations, we assessed the effects of sample size and the number of markers on the various estimates of genetic differentiation. Furthermore, we tested the effect of ascertainment bias on these estimates. We show that the population sample size can be significantly reduced (as small as n = 4-6) when using an appropriate estimator and a large number of bi-allelic genetic markers (k > 1,000). Therefore, conservation genetic studies can now obtain almost the same statistical power as studies performed on model organisms using markers developed with next-generation sequencing.
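
    The effect described here is easy to reproduce with simulated allele frequencies. The sketch below uses Hudson's F(ST) estimator computed as a ratio of averages across loci (chosen for simplicity; not necessarily one of the three estimators compared in the paper) together with a Balding-Nichols model for the population allele frequencies. The target F(ST), number of SNPs, and sample sizes are hypothetical.

        # Stability of Hudson's FST estimate with many SNPs but few sampled individuals.
        import numpy as np

        rng = np.random.default_rng(11)

        def hudson_fst(p1, p2, n1, n2):
            """Hudson FST from sample allele frequencies and sample sizes (chromosomes)."""
            num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
            den = p1 * (1 - p2) + p2 * (1 - p1)
            return num.sum() / den.sum()          # ratio of averages across loci

        def simulate(n_ind, n_snp, fst_true=0.05):
            # Balding-Nichols model: population frequencies drawn around an ancestral one.
            anc = rng.uniform(0.1, 0.9, n_snp)
            a = anc * (1 - fst_true) / fst_true
            b = (1 - anc) * (1 - fst_true) / fst_true
            f1, f2 = rng.beta(a, b), rng.beta(a, b)
            # Sample allele frequencies from 2*n_ind chromosomes per population.
            p1 = rng.binomial(2 * n_ind, f1) / (2 * n_ind)
            p2 = rng.binomial(2 * n_ind, f2) / (2 * n_ind)
            return hudson_fst(p1, p2, 2 * n_ind, 2 * n_ind)

        for n_ind in (4, 6, 20):
            est = [simulate(n_ind, n_snp=5000) for _ in range(50)]
            print(f"n = {n_ind:2d} individuals: mean FST = {np.mean(est):.3f}, SD = {np.std(est):.3f}")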

  11. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examined the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and calculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  12. Determination of reference limits: statistical concepts and tools for sample size calculation.

    Science.gov (United States)

    Wellek, Stefan; Lackner, Karl J; Jennen-Steinmetz, Christine; Reinhard, Iris; Hoffmann, Isabell; Blettner, Maria

    2014-12-01

    Reference limits are estimators for 'extreme' percentiles of the distribution of a quantitative diagnostic marker in the healthy population. In most cases, interest will be in the 90% or 95% reference intervals. The standard parametric method of determining reference limits consists of computing quantities of the form X̅±c·S. The proportion of covered values in the underlying population coincides with the specificity obtained when a measurement value falling outside the corresponding reference region is classified as diagnostically suspect. Nonparametrically, reference limits are estimated by means of so-called order statistics. In both approaches, the precision of the estimate depends on the sample size. We present computational procedures for calculating minimally required numbers of subjects to be enrolled in a reference study. The much more sophisticated concept of reference bands replacing statistical reference intervals in case of age-dependent diagnostic markers is also discussed.
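
    The two estimation approaches can be contrasted in a short sketch: parametric limits of the form x-bar ± c·S versus nonparametric limits from order statistics, here for a 95% reference interval on hypothetical, normally distributed marker values.

        # Parametric vs. nonparametric 95% reference limits on a simulated
        # healthy-population sample. Marker distribution and n are hypothetical.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        x = rng.normal(100.0, 10.0, 240)          # hypothetical reference sample

        # Parametric limits: x_bar +/- c*S with c = z_0.975 under a normal assumption.
        c = stats.norm.ppf(0.975)
        param_low, param_high = x.mean() - c * x.std(ddof=1), x.mean() + c * x.std(ddof=1)

        # Nonparametric limits: 2.5th and 97.5th percentiles via order statistics.
        nonpar_low, nonpar_high = np.percentile(x, [2.5, 97.5])

        print(f"parametric   : {param_low:6.1f} - {param_high:6.1f}")
        print(f"nonparametric: {nonpar_low:6.1f} - {nonpar_high:6.1f}")

    The precision of both estimates improves with the number of enrolled reference subjects, which is exactly what the sample size procedures in the record quantify; for the nonparametric approach a minimum of roughly 120 reference subjects is commonly recommended.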

  13. Realistic weight perception and body size assessment in a racially diverse community sample of dieters.

    Science.gov (United States)

    Cachelin, F M; Striegel-Moore, R H; Elder, K A

    1998-01-01

    Recently, a shift in obesity treatment away from emphasizing ideal weight loss goals to establishing realistic weight loss goals has been proposed; yet, what constitutes "realistic" weight loss for different populations is not clear. This study examined notions of realistic shape and weight as well as body size assessment in a large community-based sample of African-American, Asian, Hispanic, and white men and women. Participants were 1893 survey respondents who were all dieters and primarily overweight. Groups were compared on various variables of body image assessment using silhouette ratings. No significant race differences were found in silhouette ratings, nor in perceptions of realistic shape or reasonable weight loss. Realistic shape and weight ratings by both women and men were smaller than current shape and weight but larger than ideal shape and weight ratings. Compared with male dieters, female dieters considered greater weight loss to be realistic. Implications of the findings for the treatment of obesity are discussed.

  14. Some basic aspects of statistical methods and sample size determination in health science research.

    Science.gov (United States)

    Binu, V S; Mayya, Shreemathi S; Dhar, Murali

    2014-04-01

    A health science researcher may sometimes wonder why statistical methods are so important in research. The simple answer is that statistical methods are used throughout a study, including planning, designing, collecting data, analyzing, drawing meaningful interpretations, and reporting the findings. Hence, it is important that a researcher knows the concepts of at least the basic statistical methods used at the various stages of a research study. This helps the researcher conduct an appropriately well-designed study leading to valid and reliable results that can be generalized to the population. A well-designed study possesses fewer biases, which in turn gives precise, valid and reliable results. There are many statistical methods and tests that are used at the various stages of research. In this communication, we discuss the overall importance of statistical considerations in medical research, with the main emphasis on estimating the minimum sample size for different study objectives.

  15. Dietary red palm oil supplementation reduces myocardial infarct size in an isolated perfused rat heart model

    Directory of Open Access Journals (Sweden)

    Esterhuyse Adriaan J

    2010-06-01

    Full Text Available Abstract Background and Aims Recent studies have shown that dietary red palm oil (RPO supplementation improves functional recovery following ischaemia/reperfusion in isolated hearts. The main aim of this study was to investigate the effects of dietary RPO supplementation on myocardial infarct size after ischaemia/reperfusion injury. The effects of dietary RPO supplementation on matrix metalloproteinase-2 (MMP2 activation and PKB/Akt phosphorylation were also investigated. Materials and methods Male Wistar rats were divided into three groups and fed a standard rat chow diet (SRC, a SRC supplemented with RPO, or a SRC supplemented with sunflower oil (SFO, for a five week period, respectively. After the feeding period, hearts were excised and perfused on a Langendorff perfusion apparatus. Hearts were subjected to thirty minutes of normothermic global ischaemia and two hours of reperfusion. Infarct size was determined by triphenyltetrazolium chloride staining. Coronary effluent was collected for the first ten minutes of reperfusion in order to measure MMP2 activity by gelatin zymography. Results Dietary RPO-supplementation decreased myocardial infarct size significantly when compared to the SRC-group and the SFO-supplemented group (9.1 ± 1.0% versus 30.2 ± 3.9% and 27.1 ± 2.4% respectively. Both dietary RPO- and SFO-supplementation were able to decrease MMP2 activity when compared to the SRC fed group. PKB/Akt phosphorylation (Thr 308 was found to be significantly higher in the dietary RPO supplemented group when compared to the SFO supplemented group at 10 minutes into reperfusion. There was, however, no significant changes observed in ERK phosphorylation. Conclusions Dietary RPO-supplementation was found to be more effective than SFO-supplementation in reducing myocardial infarct size after ischaemia/reperfusion injury. Both dietary RPO and SFO were able to reduce MMP2 activity, which suggests that MMP2 activity does not play a major role in

  16. Sample size and power for a stratified doubly randomized preference design.

    Science.gov (United States)

    Cameron, Briana; Esserman, Denise A

    2016-11-21

    The two-stage (or doubly) randomized preference trial design is an important tool for researchers seeking to disentangle the role of patient treatment preference on treatment response through estimation of selection and preference effects. Up until now, these designs have been limited by their assumption of equal preference rates and effect sizes across the entire study population. We propose a stratified two-stage randomized trial design that addresses this limitation. We begin by deriving stratified test statistics for the treatment, preference, and selection effects. Next, we develop a sample size formula for the number of patients required to detect each effect. The properties of the model and the efficiency of the design are established using a series of simulation studies. We demonstrate the applicability of the design using a study of Hepatitis C treatment modality, specialty clinic versus mobile medical clinic. In this example, a stratified preference design (stratified by alcohol/drug use) may more closely capture the true distribution of patient preferences and allow for a more efficient design than a design which ignores these differences (unstratified version). © The Author(s) 2016.

  17. Sediment grain size estimation using airborne remote sensing, field sampling, and robust statistic.

    Science.gov (United States)

    Castillo, Elena; Pereda, Raúl; Luis, Julio Manuel de; Medina, Raúl; Viguri, Javier

    2011-10-01

    Remote sensing has been used since the 1980s to study parameters related to coastal zones. It was not until the beginning of the twenty-first century that it started to acquire imagery with good temporal and spectral resolution. This has encouraged the development of reliable imagery acquisition systems that consider remote sensing as a water management tool. Nevertheless, the spatial resolution that it provides is not adapted to carry out coastal studies. This article introduces a new methodology for estimating the most fundamental physical property of intertidal sediment, the grain size, in coastal zones. The study combines hyperspectral information (CASI-2 flight), robust statistics, and simultaneous field work (chemical and radiometric sampling), performed over Santander Bay, Spain. Field data acquisition was used to build a spectral library in order to study different atmospheric correction algorithms for CASI-2 data and to develop algorithms to estimate grain size in an estuary. Two robust estimation techniques (MVE and MCD multivariate M-estimators of location and scale) were applied to CASI-2 imagery, and the results showed that robust adjustments give acceptable and meaningful algorithms. These adjustments have given the following R(2) estimated results: 0.93 in the case of sandy loam contribution, 0.94 for the silty loam, and 0.67 for clay loam. Robust statistics are a powerful tool for large datasets.
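
    As an illustration of the robust estimation step, the sketch below applies the minimum covariance determinant (MCD) estimator from scikit-learn to synthetic two-dimensional data with injected outliers and compares it with the classical mean; it is a stand-in for, not a reproduction of, the CASI-2 processing.

        # MCD robust location/scatter vs. classical estimates on data with outliers.
        # Synthetic data; not the hyperspectral imagery used in the study.
        import numpy as np
        from sklearn.covariance import MinCovDet, EmpiricalCovariance

        rng = np.random.default_rng(4)
        X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=200)
        X[:10] += 8.0                              # inject 5% gross outliers

        mcd = MinCovDet(random_state=0).fit(X)
        emp = EmpiricalCovariance().fit(X)
        print("classical mean  :", np.round(emp.location_, 2))
        print("MCD mean        :", np.round(mcd.location_, 2))   # close to the true (0, 0)
        print("MCD support size:", int(mcd.support_.sum()))      # observations kept as inliers

    Down-weighting outlying spectra in this way is what allows the subsequent regression-type adjustments to remain "acceptable and meaningful" in the presence of contaminated pixels.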

  18. Methotrexate carried in lipid core nanoparticles reduces myocardial infarction size and improves cardiac function in rats

    Science.gov (United States)

    Maranhão, Raul C; Guido, Maria C; de Lima, Aline D; Tavares, Elaine R; Marques, Alyne F; Tavares de Melo, Marcelo D; Nicolau, Jose C; Salemi, Vera MC; Kalil-Filho, Roberto

    2017-01-01

    Purpose Acute myocardial infarction (MI) is accompanied by myocardial inflammation, fibrosis, and ventricular remodeling that, when excessive or not properly regulated, may lead to heart failure. Previously, lipid core nanoparticles (LDE) used as carriers of the anti-inflammatory drug methotrexate (MTX) produced an 80-fold increase in the cell uptake of MTX. LDE-MTX treatment reduced vessel inflammation and atheromatous lesions induced in rabbits by cholesterol feeding. The aim of the study was to investigate the effects of LDE-MTX on rats with MI, compared with commercial MTX treatment. Materials and methods Thirty-eight Wistar rats underwent left coronary artery ligation and were treated with LDE-MTX, or with MTX (1 mg/kg intraperitoneally, once/week, starting 24 hours after surgery) or with LDE without drug (MI-controls). A sham-surgery group (n=12) was also included. Echocardiography was performed 24 hours and 6 weeks after surgery. The animals were euthanized and their hearts were analyzed for morphometry, protein expression, and confocal microscopy. Results LDE-MTX treatment achieved a 40% improvement in left ventricular (LV) systolic function and reduced cardiac dilation and LV mass, as shown by echocardiography. LDE-MTX reduced the infarction size, myocyte hypertrophy and necrosis, number of inflammatory cells, and myocardial fibrosis, as shown by morphometric analysis. LDE-MTX increased antioxidant enzymes; decreased apoptosis, macrophages, reactive oxygen species production; and tissue hypoxia in non-infarcted myocardium. LDE-MTX increased adenosine bioavailability in the LV by increasing adenosine receptors and modulating adenosine catabolic enzymes. LDE-MTX increased the expression of myocardial vascular endothelium growth factor (VEGF) associated with adenosine release; this correlated not only with an increase in angiogenesis, but also with other parameters improved by LDE-MTX, suggesting that VEGF increase played an important role in the beneficial

  19. Performances of Different Fragment Sizes for Reduced Representation Bisulfite Sequencing in Pigs

    DEFF Research Database (Denmark)

    Yuan, Xiao Long; Zhang, Zhe; Pan, Rong Yang

    2017-01-01

    sizes might decrease when the dataset size was more than 70, 50 and 110 million reads for these three fragment sizes, respectively. Given a 50-million dataset size, the average sequencing depth of the detected CpG sites in the 110-220 bp fragment size appeared to be deeper than in the 40-110 bp and 40...

  20. Reduction of sample size requirements by bilateral versus unilateral research designs in animal models for cartilage tissue engineering.

    Science.gov (United States)

    Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali; Madry, Henning

    2013-11-01

    Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering.

  1. Sample size calculation based on exact test for assessing differential expression analysis in RNA-seq data.

    Science.gov (United States)

    Li, Chung-I; Su, Pei-Fang; Shyr, Yu

    2013-12-06

    Sample size calculation is an important issue in the experimental design of biomedical research. For RNA-seq experiments, the sample size calculation method based on the Poisson model has been proposed; however, when there are biological replicates, RNA-seq data could exhibit variation significantly greater than the mean (i.e. over-dispersion). The Poisson model cannot appropriately model the over-dispersion, and in such cases, the negative binomial model has been used as a natural extension of the Poisson model. Because the field currently lacks a sample size calculation method based on the negative binomial model for assessing differential expression analysis of RNA-seq data, we propose a method to calculate the sample size. We propose a sample size calculation method based on the exact test for assessing differential expression analysis of RNA-seq data. The proposed sample size calculation method is straightforward and not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size method are presented; the results indicate our method works well, with achievement of desired power.

  2. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on the comparison of the first 6 eigenvectors and first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
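
    The bootstrap procedure itself is compact. The sketch below resamples a synthetic 55 x 22 data matrix (a stand-in for the water quality data) at several sample sizes, standardizes each bootstrap sample, and tracks how the leading eigenvalue of the resulting correlation matrix varies; all values are hypothetical.

        # Bootstrap variability of PCA eigenvalues as a function of sample size.
        import numpy as np

        rng = np.random.default_rng(2)
        full = rng.normal(size=(55, 22)) @ rng.normal(size=(22, 22))   # 55 stations, 22 variables

        def bootstrap_eigvals(data, n, n_boot=100, k=10):
            out = []
            for _ in range(n_boot):
                idx = rng.integers(0, data.shape[0], size=n)           # resample n stations
                sample = data[idx]
                sample = (sample - sample.mean(0)) / sample.std(0, ddof=1)
                eig = np.linalg.eigvalsh(np.cov(sample, rowvar=False))[::-1]
                out.append(eig[:k])                                    # largest k eigenvalues
            return np.array(out)

        for n in (20, 30, 40, 50):
            ev = bootstrap_eigvals(full, n)
            print(f"n={n}: first eigenvalue mean={ev[:, 0].mean():.2f}, SD={ev[:, 0].std():.2f}")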

  3. Simplified microenvironments and reduced cell culture size influence the cell differentiation outcome in cellular microarrays.

    Science.gov (United States)

    Rodríguez-Seguí, Santiago A; Ortuño, María José; Ventura, Francesc; Martínez, Elena; Samitier, Josep

    2013-01-01

    Cellular microarrays present a promising tool for multiplex evaluation of the signalling effect of substrate-immobilized factors on cellular differentiation. In this paper, we compare the early myoblast-to-osteoblast cell commitment steps in response to a growth factor stimulus using standard well plate differentiation assays or cellular microarrays. Our results show that restraints on the cell culture size, inherent to cellular microarrays, impair the differentiation outcome. Also, while cells growing on spots with immobilised BMP-2 are early biased towards the osteoblast fate, longer periods of cell culturing in the microarrays result in cell proliferation and blockage of osteoblast differentiation. The results presented here raise concerns about the efficiency of cell differentiation when the cell culture dimensions are reduced to a simplified microspot environment. Also, these results suggest that further efforts should be devoted to increasing the complexity of the microspots composition, aiming to replace signalling cues missing in this system.

  4. SAMPLE SIZE DETERMINATION IN CLINICAL TRIALS BASED ON APPROXIMATION OF VARIANCE ESTIMATED FROM LIMITED PRIMARY OR PILOT STUDIES

    Directory of Open Access Journals (Sweden)

    B SOLEYMANI

    2001-06-01

    Full Text Available In many cases the estimate of variance used to determine sample size in clinical trials derives from limited primary or pilot studies in which the number of samples is small. Since in such cases the estimate of variance may be far from the true variance, the resulting sample size may be smaller or larger than what is really needed. In this article an attempt has been made to give a solution to this problem in the case of the normal distribution. Based on the distribution of (n-1)S^2/sigma^2, which is chi-square for normal variables, an appropriate estimate of the variance is determined and used to calculate the sample size. The total probability of ensuring the specified precision and power is also derived. In the method presented here, the probability of attaining the desired precision and power is greater than that of the usual method, but the results of the two methods converge as the sample size of the primary study increases.
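
    The idea can be illustrated with a short calculation: instead of plugging the pilot variance estimate directly into the usual two-sample formula, use an upper confidence bound for the variance derived from the chi-square distribution of (n0-1)S^2/sigma^2. This is a simplified sketch of the general approach, not the article's exact total-probability method, and the pilot values, effect size, and confidence level are hypothetical.

        # Conservative sample size based on an upper confidence bound for the variance
        # estimated from a small pilot study. Values are hypothetical.
        from scipy import stats

        def sample_size_two_means(sigma2, delta, alpha=0.05, power=0.80):
            """Per-group n for a two-sample comparison of means (normal approximation)."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return 2 * sigma2 * z ** 2 / delta ** 2

        n0, s2 = 15, 4.0            # pilot sample size and pilot variance estimate
        delta = 1.5                 # clinically important difference

        # Naive calculation treats S^2 as if it were the true variance.
        n_naive = sample_size_two_means(s2, delta)

        # Conservative calculation uses a 90% upper confidence bound for sigma^2,
        # obtained from the chi-square distribution of (n0-1)S^2/sigma^2.
        sigma2_upper = (n0 - 1) * s2 / stats.chi2.ppf(0.10, df=n0 - 1)
        n_adj = sample_size_two_means(sigma2_upper, delta)

        print(f"naive n per group    = {n_naive:5.1f}")
        print(f"adjusted n per group = {n_adj:5.1f}")

    As the pilot study grows, the upper bound shrinks toward S^2 and the two sample sizes converge, mirroring the behaviour described in the abstract.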

  5. Cleavage by RNase P of gene N mRNA reduces bacteriophage lambda burst size.

    Science.gov (United States)

    Li, Y; Altman, S

    1996-03-01

    RNase P, an enzyme essential for tRNA biosynthesis, can be directed to cleave any RNA when the target RNA is in a complex with a short, complementary oligonucleotide called an external guide sequence (EGS). RNase P from Escherichia coli can cleave phage lambda N mRNA in vitro or in vivo when the mRNA is in a complex with an EGS. The EGS can either be separate from or covalently linked to M1 RNA, the catalytic RNA subunit of RNase P. The requirement for Mg2+ in the reaction in vitro is lower when the EGS is covalently linked to M1 RNA. Substrates made of DNA can also be cleaved by RNase P in vitro in complexes with RNA EGSs. When either kind of EGS construct is used in vivo, the burst size of phage lambda is reduced by ≥ 40%. Reduction in burst size depends on efficient expression of the EGS constructs. The product of phage lambda gene N appears to function in a stoichiometric fashion.

  6. Effect of the distribution of analyte concentration in lot, sample size, and number of analytical runs on food-testing results.

    Science.gov (United States)

    Watanabe, Takahiro; Matsuda, Rieko

    2012-10-24

    In testing, it is necessary to obtain the correct measured values that reflect analyte concentrations in the lot. Control of the analytical performance and appropriate sampling are essential to obtain the correct values. In the present study, we estimated the distribution of the analyte concentrations in specific food product lots and examined the influence of the sample size and the number of analytical runs on the variability of the testing results. The combinations of analyte and food studied were pesticide residues in fresh vegetables, nitrate in fresh vegetables, and food additives in processed meat products. The results of our study suggested the following: an increase in the sample size beyond a certain number does not efficiently reduce the variability of the test results; the specific sample size required to maintain the variability of the testing results at an appropriate level depends on the breadth of distribution of concentrations in the lot and the precision of the analysis; and increasing the number of analytical runs was more efficient in reducing the variability of the testing results than increasing the sample size, when the breadth of distribution of concentrations in the lot was narrow enough to be comparable with the analytical precision.

  7. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

    Directory of Open Access Journals (Sweden)

    You-xin Shen

    Full Text Available A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: (1) Does this conventional sampling strategy limit the detection of seeds of woody species? (2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a larger number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land and grass land. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species being detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in the number of species with sampled area confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, but the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. Increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.

  8. Restaurant owners' perspectives on a voluntary program to recognize restaurants for offering reduced-size portions, Los Angeles County, 2012.

    Science.gov (United States)

    Gase, Lauren; Dunning, Lauren; Kuo, Tony; Simon, Paul; Fielding, Jonathan E

    2014-03-20

    Reducing the portion size of food and beverages served at restaurants has emerged as a strategy for addressing the obesity epidemic; however, barriers and facilitators to achieving this goal are not well characterized. In fall 2012, the Los Angeles County Department of Public Health conducted semistructured interviews with restaurant owners to better understand contextual factors that may impede or facilitate participation in a voluntary program to recognize restaurants for offering reduced-size portions. Interviews were completed with 18 restaurant owners (representing nearly 350 restaurants). Analyses of qualitative data revealed 6 themes related to portion size: 1) perceived customer demand is central to menu planning; 2) multiple portion sizes are already being offered for at least some food items; 3) numerous logistical barriers exist for offering reduced-size portions; 4) restaurant owners have concerns about potential revenue losses from offering reduced-size portions; 5) healthful eating is the responsibility of the customer; and 6) a few owners want to be socially responsible industry leaders. A program to recognize restaurants for offering reduced-size portions may be a feasible approach in Los Angeles County. These findings may have applications for jurisdictions interested in engaging restaurants as partners in reducing the obesity epidemic.

  9. Restaurant Owners’ Perspectives on a Voluntary Program to Recognize Restaurants for Offering Reduced-Size Portions, Los Angeles County, 2012

    Science.gov (United States)

    Dunning, Lauren; Kuo, Tony; Simon, Paul; Fielding, Jonathan E.

    2014-01-01

    Introduction Reducing the portion size of food and beverages served at restaurants has emerged as a strategy for addressing the obesity epidemic; however, barriers and facilitators to achieving this goal are not well characterized. Methods In fall 2012, the Los Angeles County Department of Public Health conducted semistructured interviews with restaurant owners to better understand contextual factors that may impede or facilitate participation in a voluntary program to recognize restaurants for offering reduced-size portions. Results Interviews were completed with 18 restaurant owners (representing nearly 350 restaurants). Analyses of qualitative data revealed 6 themes related to portion size: 1) perceived customer demand is central to menu planning; 2) multiple portion sizes are already being offered for at least some food items; 3) numerous logistical barriers exist for offering reduced-size portions; 4) restaurant owners have concerns about potential revenue losses from offering reduced-size portions; 5) healthful eating is the responsibility of the customer; and 6) a few owners want to be socially responsible industry leaders. Conclusion A program to recognize restaurants for offering reduced-size portions may be a feasible approach in Los Angeles County. These findings may have applications for jurisdictions interested in engaging restaurants as partners in reducing the obesity epidemic. PMID:24650622

  10. Method And Apparatus For Reducing Sample Dispersion In Turns And Junctions Of Micro-Channel Systems

    Science.gov (United States)

    Griffiths, Stewart K. , Nilson, Robert H.

    2004-05-11

    What is disclosed pertains to improvement in the performance of microchannel devices by providing turns, wyes, tees, and other junctions that produce little dispersion of a sample as it traverses the turn or junction. The reduced dispersion results from contraction and expansion regions that reduce the cross-sectional area over some portion of the turn or junction. By carefully designing the geometries of these regions, sample dispersion in turns and junctions is reduced to levels comparable to the effects of ordinary diffusion. The low dispersion features are particularly suited for microfluidic devices and systems using either electromotive force, pressure, or combinations thereof as the principle of fluid transport. Such microfluidic devices and systems are useful for separation of components, sample transport, reaction, mixing, dilution or synthesis, or combinations thereof.

  11. Bayesian adaptive determination of the sample size required to assure acceptably low adverse event risk.

    Science.gov (United States)

    Lawrence Gould, A; Zhang, Xiaohua Douglas

    2014-03-15

    An emerging concern with new therapeutic agents, especially treatments for type 2 diabetes, a prevalent condition that increases an individual's risk of heart attack or stroke, is the likelihood of adverse events, especially cardiovascular events, that the new agents may cause. These concerns have led to regulatory requirements for demonstrating that a new agent increases the risk of an adverse event relative to a control by no more than, say, 30% or 80% with high (e.g., 97.5%) confidence. We describe a Bayesian adaptive procedure for determining if the sample size for a development program needs to be increased and, if necessary, by how much, to provide the required assurance of limited risk. The decision is based on the predictive likelihood of a sufficiently high posterior probability that the relative risk is no more than a specified bound. Allowance can be made for between-center as well as within-center variability to accommodate large-scale developmental programs, and design alternatives (e.g., many small centers, few large centers) for obtaining additional data if needed can be explored. Binomial or Poisson likelihoods can be used, and center-level covariates can be accommodated. The predictive likelihoods are explored under various conditions to assess the statistical properties of the method. Copyright © 2013 John Wiley & Sons, Ltd.
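
    The posterior quantity at the heart of this procedure can be illustrated with a much simpler calculation than the adaptive design described above: the sketch below computes the posterior probability that the relative risk of an adverse event stays below a prespecified bound, using independent beta posteriors for two binomial event rates. The counts, bound and required confidence are hypothetical, and center-level variability and predictive likelihoods are not modeled.

        # Minimal sketch: posterior probability that the relative risk (treatment vs
        # control) is no more than a prespecified bound, using independent Beta
        # posteriors for the two event probabilities. All numbers are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)

        events_trt, n_trt = 42, 3000          # hypothetical adverse-event counts
        events_ctl, n_ctl = 35, 3000
        rr_bound = 1.3                        # "no more than a 30% increase in risk"
        required_prob = 0.975                 # required posterior probability

        # Beta(1, 1) priors updated with the observed counts
        p_trt = rng.beta(1 + events_trt, 1 + n_trt - events_trt, size=200_000)
        p_ctl = rng.beta(1 + events_ctl, 1 + n_ctl - events_ctl, size=200_000)

        post_prob = np.mean(p_trt / p_ctl <= rr_bound)
        print(f"P(RR <= {rr_bound} | data) = {post_prob:.3f}")
        print("assurance met" if post_prob >= required_prob else "more data needed")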

  12. Sampling design and required sample size for evaluating contamination levels of 137Cs in Japanese fir needles in a mixed deciduous forest stand in Fukushima, Japan.

    Science.gov (United States)

    Oba, Yurika; Yamada, Toshihiro

    2017-05-01

    We estimated the sample size (the number of samples) required to evaluate the concentration of radiocesium (137Cs) in Japanese fir (Abies firma Sieb. & Zucc.), 5 years after the outbreak of the Fukushima Daiichi Nuclear Power Plant accident. We investigated the spatial structure of the contamination levels in this species growing in a mixed deciduous broadleaf and evergreen coniferous forest stand. We sampled 40 saplings with a tree height of 150 cm-250 cm in a Fukushima forest community. The results showed that: (1) there was no correlation between the 137Cs concentration in needles and soil, and (2) the difference in the spatial distribution pattern of 137Cs concentration between needles and soil suggests that the contribution of root uptake to 137Cs in new needles of this species may be minor in the 5 years after the radionuclides were released into the atmosphere. The concentration of 137Cs in needles showed a strong positive spatial autocorrelation in the distance class from 0 to 2.5 m, suggesting that the statistical analysis of data should consider spatial autocorrelation in the case of an assessment of the radioactive contamination of forest trees. According to our sample size analysis, a sample size of seven trees was required to determine the mean contamination level within an error in the mean of no more than 10%. This required sample size may be feasible for most sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
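
    A precision-based sample size of the general kind quoted above (mean within 10%) can be sketched as follows; the calculation assumes independent observations and a hypothetical coefficient of variation, so it illustrates the form of the computation rather than reproducing the study's spatially aware analysis.

        # Minimal sketch: number of samples needed to estimate a mean within a given
        # relative error, assuming independent observations (the study itself accounts
        # for spatial autocorrelation). The coefficient of variation is hypothetical.
        from scipy import stats

        def required_n(cv, rel_error, conf=0.95, n_max=1000):
            # iterate because the t quantile depends on n
            for n in range(2, n_max):
                t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
                if t * cv / n ** 0.5 <= rel_error:
                    return n
            return n_max

        print(required_n(cv=0.12, rel_error=0.10))   # hypothetical CV of 12%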

  13. Chronic Metformin Treatment is Associated with Reduced Myocardial Infarct Size in Diabetic Patients with ST-segment Elevation Myocardial Infarction

    NARCIS (Netherlands)

    Lexis, Chris P. H.; Wieringa, Wouter G.; Hiemstra, Bart; van Deursen, Vincent M.; Lipsic, Erik; van der Harst, Pim; van Veldhuisen, Dirk J.; van der Horst, Iwan C. C.

    Increased myocardial infarct (MI) size is associated with higher risk of developing left ventricular dysfunction, heart failure and mortality. Experimental studies have suggested that metformin treatment reduces MI size after induced ischaemia but human data is lacking. We aimed to investigate the

  14. Estimating everyday portion size using a 'method of constant stimuli': in a student sample, portion size is predicted by gender, dietary behaviour, and hunger, but not BMI.

    Science.gov (United States)

    Brunstrom, Jeffrey M; Rogers, Peter J; Pothos, Emmanuel M; Calitri, Raff; Tapper, Katy

    2008-09-01

    This paper (i) explores the proposition that body weight is associated with large portion sizes and (ii) introduces a new technique for measuring everyday portion size. In our paradigm, the participant is shown a picture of a food portion and is asked to indicate whether it is larger or smaller than their usual portion. After responding to a range of different portions an estimate of everyday portion size is calculated using probit analysis. Importantly, this estimate is likely to be robust because it is based on many responses. First-year undergraduate students (N=151) completed our procedure for 12 commonly consumed foods. As expected, portion sizes were predicted by gender and by a measure of dieting and dietary restraint. Furthermore, consistent with reports of hungry supermarket shoppers, portion-size estimates tended to be higher in hungry individuals. However, we found no evidence for a relationship between BMI and portion size in any of the test foods. We consider reasons why this finding should be anticipated. In particular, we suggest that the difference in total energy expenditure of individuals with a higher and lower BMI is too small to be detected as a concomitant difference in portion size (at least in our sample).
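
    The 'method of constant stimuli' estimate can be illustrated with a small maximum-likelihood probit fit: larger/smaller judgements are modeled as a function of the shown portion, and the everyday portion size is read off as the 50% point. The responses below are invented and the fitting routine is a generic sketch, not the authors' analysis.

        # Minimal sketch: estimate "everyday portion size" as the 50% point of a probit
        # function fitted to larger/smaller judgements. Responses are invented.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        shown_g = np.array([100, 150, 200, 250, 300, 350, 400], dtype=float)
        # 1 = "shown portion is larger than my usual portion"
        said_larger = np.array([0, 0, 0, 1, 0, 1, 1], dtype=float)

        def neg_log_lik(params):
            mu, log_sigma = params
            p = norm.cdf((shown_g - mu) / np.exp(log_sigma))
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return -np.sum(said_larger * np.log(p) + (1 - said_larger) * np.log(1 - p))

        fit = minimize(neg_log_lik, x0=[250.0, np.log(50.0)], method="Nelder-Mead")
        print(f"estimated everyday portion: {fit.x[0]:.0f} g")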

  15. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages.

    Science.gov (United States)

    Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S

    2016-01-01

    In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.

  16. Losartan activates sirtuin 1 in rat reduced-size orthotopic liver transplantation.

    Science.gov (United States)

    Pantazi, Eirini; Bejaoui, Mohamed; Zaouali, Mohamed Amine; Folch-Puy, Emma; Pinto Rolo, Anabela; Panisello, Arnau; Palmeira, Carlos Marques; Roselló-Catafau, Joan

    2015-07-14

    To investigate a possible association between losartan and sirtuin 1 (SIRT1) in reduced-size orthotopic liver transplantation (ROLT) in rats. Livers of male Sprague-Dawley rats (200-250 g) were preserved in University of Wisconsin preservation solution for 1 h at 4 °C prior to ROLT. In an additional group, an antagonist of angiotensin II type 1 receptor (AT1R), losartan, was orally administered (5 mg/kg) 24 h and 1 h before the surgical procedure to both the donors and the recipients. Transaminase (as an indicator of liver injury), SIRT1 activity, and nicotinamide adenine dinucleotide (NAD(+), a co-factor necessary for SIRT1 activity) levels were determined by biochemical methods. Protein expression of SIRT1, acetylated FoxO1 (ac-FoxO1) and NAMPT (the precursor of NAD+), heat shock protein (HSP70, HO-1) expression, and endoplasmic reticulum stress (GRP78, IRE1α, p-eIF2) and apoptosis (caspase 12 and caspase 3) parameters were determined by Western blot. Possible alterations in protein expression of mitogen activated protein kinases (MAPK), such as p-p38 and p-ERK, were also evaluated. Furthermore, the SIRT3 protein expression and mRNA levels were examined. The present study demonstrated that losartan administration led to diminished liver injury compared to the ROLT group, as evidenced by the significant decrease in alanine aminotransferase (358.3 ± 133.44 vs 206 ± 33.61). Losartan was also associated with enhanced SIRT1 protein expression and activity (5.27 ± 0.32 vs 6.08 ± 0.30). Losartan treatment also provoked significant attenuation of endoplasmic reticulum stress parameters (GRP78, IRE1α, p-eIF2), which was consistent with reduced levels of both caspase 12 and caspase 3. Furthermore, losartan administration stimulated HSP70 protein expression and attenuated HO-1 expression. However, no changes were observed in protein or mRNA expression of SIRT3. Finally, the protein expression patterns of p-ERK and p-p38 were not altered upon losartan administration.

  17. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    The proportionator is a novel and radically different approach to sampling with microscopes based on well-known statistical theory (probability proportional to size - PPS sampling). It uses automatic image analysis, with a large range of options, to assign to every field of view in the section a ...
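
    Probability proportional to size sampling, on which the proportionator rests, can be sketched in a few lines: each field of view receives a weight from automatic image analysis, fields are drawn with probability proportional to that weight, and a Hansen-Hurwitz-style estimator recovers the total. The weights and counts below are simulated stand-ins, not microscope data.

        # Minimal sketch of PPS (probability proportional to size) sampling: fields of
        # view are drawn with probability proportional to an image-analysis weight, and
        # a Hansen-Hurwitz-style estimator recovers the total particle count.
        # Weights and counts are simulated, not microscope data.
        import numpy as np

        rng = np.random.default_rng(1)

        weights = rng.gamma(2.0, 1.0, size=500)       # per-field image-analysis weight
        true_counts = rng.poisson(weights * 3.0)      # particles actually present per field

        p = weights / weights.sum()                   # selection probabilities
        n_fields = 20
        chosen = rng.choice(len(weights), size=n_fields, replace=True, p=p)

        # each sampled count is divided by its selection probability and averaged
        hh_estimate = np.mean(true_counts[chosen] / p[chosen])
        print(f"true total: {true_counts.sum()}, PPS estimate: {hh_estimate:.0f}")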

  18. New approach to purging monitoring wells: Lower flow rates reduce required purging volumes and sample turbidity

    Energy Technology Data Exchange (ETDEWEB)

    Puls, R.W.

    1994-01-01

    It is generally accepted that monitoring wells must be purged to access formation water to obtain representative ground water quality samples. Historically, anywhere from 3 to 5 well casing volumes have been removed prior to sample collection to evacuate the standing well water and access the adjacent formation water. However, a common result of such purging practice is highly turbid samples from excessive downhole disturbance to the sampling zone. An alternative purging strategy has been proposed using pumps which permit much lower flow rates (<1 liter/min) and placement within the screened interval of the monitoring well. The advantages of this approach include increased spatial resolution of sampling points, less variability, less purge time (and volume), and low-turbidity samples. The overall objective is a more passive approach to sample extraction, with the ideal approach being to match the intake velocity with the natural ground water flow velocity. The volume of water extracted to access formation water is generally independent of well size and capacity and dependent upon well construction, development, hydrogeologic variability and pump flow rate.

  19. Field sampling of loose erodible material: A new system to consider the full particle-size spectrum

    Science.gov (United States)

    Klose, Martina; Gill, Thomas E.; Webb, Nicholas P.; Van Zee, Justin W.

    2017-10-01

    A new system is presented to sample and enable the characterization of loose erodible material (LEM) present on a soil surface, which may be susceptible to entrainment by wind. The system uses a modified MWAC (Modified Wilson and Cooke) sediment sampler connected to a corded hand-held vacuum cleaner. Performance and accuracy of the system were tested in the laboratory using five reference soil samples with different textures. Sampling was most effective for sandy soils, while effectiveness decreases were found for soils with high silt and clay contents in dry dispersion. This effectiveness decrease can be attributed to loose silt and clay-sized particles and particle aggregates adhering to and clogging a filter attached to the MWAC outlet. Overall, the system was found to be effective in collecting sediment for most soil textures, and theoretical interpretation of the measured flow speeds suggests that LEM can be sampled for a wide range of particle sizes, including dust particles. Particle-size analysis revealed that the new system is able to accurately capture the particle-size distribution (PSD) of a given sample. Only small discrepancies in the cumulative particle-size distribution were found after vacuuming for all test soils. Despite limitations of the system, it is an advance toward sampling the full particle-size spectrum of loose sediment available for entrainment with the overall goal to better understand the mechanisms of dust emission and their variability.

  20. Rates of brain atrophy and clinical decline over 6 and 12-month intervals in PSP: determining sample size for treatment trials.

    Science.gov (United States)

    Whitwell, Jennifer L; Xu, Jia; Mandrekar, Jay N; Gunter, Jeffrey L; Jack, Clifford R; Josephs, Keith A

    2012-03-01

    Imaging biomarkers are useful outcome measures in treatment trials. We compared sample size estimates for future treatment trials performed over 6 or 12-months in progressive supranuclear palsy using both imaging and clinical measures. We recruited 16 probable progressive supranuclear palsy patients that underwent baseline, 6 and 12-month brain scans, and 16 age-matched controls with serial scans. Disease severity was measured at each time-point using the progressive supranuclear palsy rating scale. Rates of ventricular expansion and rates of atrophy of the whole brain, superior frontal lobe, thalamus, caudate and midbrain were calculated. Rates of atrophy and clinical decline were used to calculate sample sizes required to power placebo-controlled treatment trials over 6 and 12-months. Rates of whole brain, thalamus and midbrain atrophy, and ventricular expansion, were increased over 6 and 12-months in progressive supranuclear palsy compared to controls. The progressive supranuclear palsy rating scale increased by 9 points over 6-months, and 18 points over 12-months. The smallest sample size estimates for treatment trials over 6-months were achieved using rate of midbrain atrophy, followed by rate of whole brain atrophy and ventricular expansion. Sample size estimates were further reduced over 12-month intervals. Sample size estimates for the progressive supranuclear palsy rating scale were worse than imaging measures over 6-months, but comparable over 12-months. Atrophy and clinical decline can be detected over 6-months in progressive supranuclear palsy. Sample size estimates suggest that treatment trials could be performed over this interval, with rate of midbrain atrophy providing the best outcome measure. Copyright © 2011 Elsevier Ltd. All rights reserved.
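
    Sample size estimates of the kind reported above are usually derived from the mean and standard deviation of the annualized change in an outcome measure; a generic two-arm version (with illustrative numbers, not the study's values) is sketched below.

        # Minimal sketch: per-arm sample size for a placebo-controlled trial powered to
        # detect a given percentage slowing of an annualized rate of change. The rate
        # and standard deviation below are illustrative, not the study's values.
        from scipy.stats import norm

        def n_per_arm(mean_change, sd_change, slowing=0.25, alpha=0.05, power=0.80):
            # treatment assumed to reduce the mean annual change by `slowing` (e.g. 25%)
            delta = slowing * mean_change
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return int(round(2 * (z * sd_change / delta) ** 2))

        # hypothetical 12-month midbrain atrophy: mean 2.0%/yr, SD 1.0%/yr
        print(n_per_arm(mean_change=2.0, sd_change=1.0))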

  1. Method and apparatus for reducing sample dispersion in turns and junctions of microchannel systems

    Science.gov (United States)

    Griffiths, Stewart K.; Nilson, Robert H.

    2001-01-01

    The performance of microchannel devices is improved by providing turns, wyes, tees, and other junctions that produce little dispersion of a sample as it traverses the turn or junction. The reduced dispersion results from contraction and expansion regions that reduce the cross-sectional area over some portion of the turn or junction. By carefully designing the geometries of these regions, sample dispersion in turns and junctions is reduced to levels comparable to the effects of ordinary diffusion. A numerical algorithm was employed to evolve low-dispersion geometries by computing the electric or pressure field within candidate configurations, sample transport through the turn or junction, and the overall effective dispersion. These devices should greatly increase flexibility in the design of microchannel devices by permitting the use of turns and junctions that do not induce large sample dispersion. In particular, the ability to fold electrophoretic and electrochromatographic separation columns will allow dramatic improvements in the miniaturization of these devices. The low-dispersion devices are particularly suited to electrochromatographic and electrophoretic separations, as well as pressure-driven chromatographic separation. They are further applicable to microfluidic systems employing either electroosmotic or pressure-driven flows for sample transport, reaction, mixing, dilution or synthesis.

  2. Intra- and trans-generational costs of reduced female body size caused by food limitation early in life in mites.

    Directory of Open Access Journals (Sweden)

    Andreas Walzer

    Full Text Available Food limitation early in life may be compensated for by developmental plasticity resulting in accelerated development enhancing survival at the expense of small adult body size. However, and especially for females in non-matching maternal and offspring environments, being smaller than the standard may incur considerable intra- and trans-generational costs. Here, we evaluated the costs of small female body size induced by food limitation early in life in the sexually size-dimorphic predatory mite Phytoseiulus persimilis. Females are larger than males. These predators are adapted to exploit ephemeral spider mite prey patches. The intra- and trans-generational effects of small maternal body size manifested in lower maternal survival probabilities, decreased attractiveness for males, and a reduced number and size of eggs compared to standard-sized females. The trans-generational effects of small maternal body size were sex-specific, with small mothers producing small daughters but standard-sized sons. Small female body size apparently intensified the well-known costs of sexual activity because mortality of small but not standard-sized females mainly occurred shortly after mating. The disadvantages of small females in mating and egg production may be generally explained by size-associated morphological and physiological constraints. Additionally, size-assortative mate preferences of standard-sized mates may have rendered small females disproportionally unattractive mating partners. We argue that the sex-specific trans-generational effects were due to sexual size dimorphism - females are the larger sex and thus more strongly affected by maternal stress than the smaller males - and to sexually selected lower plasticity of male body size.

  3. Grain size of loess and paleosol samples: what are we measuring?

    Science.gov (United States)

    Varga, György; Kovács, János; Szalai, Zoltán; Újvári, Gábor

    2017-04-01

    Particle size falling into a particularly narrow range is among the most important properties of windblown mineral dust deposits. Therefore, various aspects of aeolian sedimentation and post-depositional alterations can be reconstructed only from precise grain size data. The present study is aimed at (1) reviewing grain size data obtained from different measurements, (2) discussing the major reasons for disagreements between data obtained by frequently applied particle sizing techniques, and (3) assessing the importance of particle shape in particle sizing. Grain size data of terrestrial aeolian dust deposits (loess and paleosol) were determined by laser scattering instruments (Fritsch Analysette 22 Microtec Plus, Horiba Partica La-950 v2 and Malvern Mastersizer 3000 with a Hydro Lv unit), while particle size and shape distributions were acquired by a Malvern Morphologi G3-ID. Laser scattering results reveal that the optical parameter settings of the measurements have significant effects on the grain size distributions, especially for the fine-grained fractions. The image analysis instrument records a two-dimensional projection of each particle with a camera. However, this is only one outcome of infinitely many possible projections of a three-dimensional object and it cannot be regarded as a representative one. The third (height) dimension of the particles remains unknown, so the volume-based weightings are fairly dubious in the case of platy particles. Support of the National Research, Development and Innovation Office (Hungary) under contract NKFI 120620 is gratefully acknowledged. It was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.

  4. Applying Individual Tree Structure From Lidar to Address the Sensitivity of Allometric Equations to Small Sample Sizes.

    Science.gov (United States)

    Duncanson, L.; Dubayah, R.

    2015-12-01

    Lidar remote sensing is widely applied for mapping forest carbon stocks, and technological advances have improved our ability to capture structural details from forests, even resolving individual trees. Despite these advancements, the accuracy of forest aboveground biomass models remains limited by the quality of field estimates of biomass. The accuracies of field estimates are inherently dependent on the accuracy of the allometric equations used to relate measurable attributes to biomass. These equations are calibrated with relatively small samples of often spatially clustered trees. This research focuses on one of many issues involving allometric equations - understanding how sensitive allometric parameters are to the sample sizes used to fit them. We capitalize on recent advances in lidar remote sensing to extract individual tree structural information from six high-resolution airborne lidar datasets in the United States. We remotely measure millions of tree heights and crown radii, and fit allometric equations to the relationship between tree height and radius at a 'population' level, in each site. We then extract samples from our tree database, and build allometries on these smaller samples of trees, with varying sample sizes. We show that for the allometric relationship between tree height and crown radius, small sample sizes produce biased allometric equations that overestimate height for a given crown radius. We extend this analysis using translations from the literature to address potential implications for biomass, showing that site-level biomass may be greatly overestimated when applying allometric equations developed with the typically small sample sizes used in popular allometric equations for biomass.
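
    The sensitivity described above can be reproduced in miniature by fitting a height-crown-radius power law to small random subsets of a large synthetic tree population and comparing the resulting predictions with the population-level fit. The simulated population below is purely illustrative and is not derived from the lidar datasets.

        # Minimal sketch: sensitivity of an allometric fit to calibration sample size.
        # A synthetic "population" of trees is generated, a log-log (power-law)
        # height ~ crown-radius model is fitted to the full population and to many
        # small subsamples, and the subsample predictions are compared.
        import numpy as np

        rng = np.random.default_rng(2)

        n_pop = 100_000
        crown_radius = rng.lognormal(mean=1.0, sigma=0.4, size=n_pop)
        height = 4.0 * crown_radius ** 0.8 * rng.lognormal(0.0, 0.25, size=n_pop)

        def fit_power_law(r, h):
            b, log_a = np.polyfit(np.log(r), np.log(h), 1)
            return np.exp(log_a), b

        a_pop, b_pop = fit_power_law(crown_radius, height)
        ref = a_pop * 3.0 ** b_pop                    # population-level height at r = 3 m

        for n in (15, 50, 500):
            preds = []
            for _ in range(200):
                idx = rng.choice(n_pop, size=n, replace=False)
                a, b = fit_power_law(crown_radius[idx], height[idx])
                preds.append(a * 3.0 ** b)
            print(f"n = {n:4d}: mean bias = {np.mean(preds) - ref:+.2f} m, "
                  f"spread (SD) = {np.std(preds):.2f} m")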

  5. Is more research always needed? Estimating optimal sample sizes for trials of retention in care interventions for HIV-positive East Africans.

    Science.gov (United States)

    Uyei, Jennifer; Li, Lingfeng; Braithwaite, R Scott

    2017-01-01

    Given the serious health consequences of discontinuing antiretroviral therapy, randomised control trials of interventions to improve retention in care may be warranted. As funding for global HIV research is finite, it may be argued that choices about sample size should be tied to maximising health. For an East African setting, we calculated expected value of sample information and expected net benefit of sampling to identify the optimal sample size (greatest return on investment) and to quantify net health gains associated with research. Two hypothetical interventions were analysed: (1) one aimed at reducing disengagement from HIV care and (2) another aimed at finding/relinking disengaged patients. When the willingness to pay (WTP) threshold was within a plausible range (1-3 × GDP; US$1377-4130/QALY), the optimal sample size was zero for both interventions, meaning that no further research was recommended because the pre-research probability of an intervention's effectiveness and value was sufficient to support a decision on whether to adopt the intervention and any new information gained from additional research would likely not change that decision. In threshold analyses, at a higher WTP of $5200 the optimal sample size for testing a risk reduction intervention was 2750 per arm. For the outreach intervention, the optimal sample size remained zero across a wide range of WTP thresholds and was insensitive to variation. Limitations, including not varying all inputs in the model, may have led to an underestimation of the value of investing in new research. In summary, more research is not always needed, particularly when there is moderately robust prestudy belief about intervention effectiveness and little uncertainty about the value (cost-effectiveness) of the intervention. Users can test their own assumptions at http://torchresearch.org.

  6. Invasive surgery reduces infarct size and preserves cardiac function in a porcine model of myocardial infarction.

    Science.gov (United States)

    van Hout, Gerardus P J; Teuben, Michel P J; Heeres, Marjolein; de Maat, Steven; de Jong, Renate; Maas, Coen; Kouwenberg, Lisanne H J A; Koenderman, Leo; van Solinge, Wouter W; de Jager, Saskia C A; Pasterkamp, Gerard; Hoefer, Imo E

    2015-11-01

    Reperfusion injury following myocardial infarction (MI) increases infarct size (IS) and deteriorates cardiac function. Cardioprotective strategies in large animal MI models often failed in clinical trials, suggesting translational failure. Experimentally, MI is induced artificially and the effect of the experimental procedures may influence outcome and thus clinical applicability. The aim of this study was to investigate if invasive surgery, as in the common open-chest MI model, affects IS and cardiac function. Twenty female landrace pigs were subjected to MI by transluminal balloon occlusion. In 10 of 20 pigs, balloon occlusion was preceded by invasive surgery (medial sternotomy). After 72 hrs, pigs were subjected to echocardiography and Evans blue/triphenyl tetrazolium chloride double staining to determine IS and area at risk. Quantification of IS showed a significant IS reduction in the open chest group compared to the closed chest group (IS versus area at risk: 50.9 ± 5.4% versus 69.9 ± 3.4%, P = 0.007). End-systolic LV volume and LV ejection fraction measured by echocardiography at follow-up differed significantly between both groups (51 ± 5 ml versus 65 ± 3 ml, P = 0.033; 47.5 ± 2.6% versus 38.8 ± 1.2%, P = 0.005). The inflammatory response in the damaged myocardium did not differ between groups. This study indicates that invasive surgery reduces IS and preserves cardiac function in a porcine MI model. Future studies need to elucidate the effect of infarct induction technique on the efficacy of pharmacological therapies in large animal cardioprotection studies. © 2015 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.

  7. Effects of the sample size of reference population on determining BMD reference curve and peak BMD and diagnosing osteoporosis.

    Science.gov (United States)

    Hou, Y-L; Liao, E-Y; Wu, X-P; Peng, Y-Q; Zhang, H; Dai, R-C; Luo, X-H; Cao, X-Z

    2008-01-01

    Establishing reference databases generally requires a large sample size to achieve reliable results. Our study revealed that the varying sample size from hundreds to thousands of individuals has no decisive effect on the bone mineral density (BMD) reference curve, peak BMD, and diagnosing osteoporosis. It provides a reference point for determining the sample size while establishing local BMD reference databases. This study attempts to determine a suitable sample size for establishing bone mineral density (BMD) reference databases in a local laboratory. The total reference population consisted of 3,662 Chinese females aged 6-85 years. BMDs were measured with a dual-energy X-ray absorptiometry densitometer. The subjects were randomly divided into four different sample groups, that is, total number (Tn) = 3,662, 1/2n = 1,831, 1/4n = 916, and 1/8n = 458. We used the best regression model to determine BMD reference curve and peak BMD. There was no significant difference in the full curves between the four sample groups at each skeletal site, although some discrepancy at the end of the curves was observed at the spine. Peak BMDs were very similar in the four sample groups. According to the Chinese diagnostic criteria (BMD >25% below the peak BMD as osteoporosis), no difference was observed in the osteoporosis detection rate using the reference values determined by the four different sample groups. Varying the sample size from hundreds to thousands has no decisive effect on establishing BMD reference curve and determining peak BMD. It should be practical for determining the reference population while establishing local BMD databases.

  8. Reduced electron exposure for energy-dispersive spectroscopy using dynamic sampling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yan; Godaliyadda, G. M. Dilshan; Ferrier, Nicola; Gulsoy, Emine B.; Bouman, Charles A.; Phatak, Charudatta

    2018-01-01

    Analytical electron microscopy and spectroscopy of biological specimens, polymers, and other beam-sensitive materials has long been challenging due to irradiation damage. There is a pressing need to develop novel imaging and spectroscopic imaging methods that will minimize such sample damage as well as reduce the data acquisition time. The latter is useful for high-throughput analysis of materials structure and chemistry. In this work, we present a novel machine-learning-based method for dynamic sparse sampling of EDS data using a scanning electron microscope. Our method, based on a supervised-learning dynamic sampling algorithm and neural-network-based classification of EDS data, allows a dramatic reduction of up to 90% in the total sampling, while maintaining the fidelity of the reconstructed elemental maps and spectroscopic data. We believe this approach will enable imaging and elemental mapping of materials that would otherwise be inaccessible to these analysis techniques.

  9. Automated Gel Size Selection to Improve the Quality of Next-generation Sequencing Libraries Prepared from Environmental Water Samples.

    Science.gov (United States)

    Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick

    2015-04-17

    Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.

  10. Influence of reducing agents and surfactants on size and shape of silver fine powder particles

    Directory of Open Access Journals (Sweden)

    Stevan P. Dimitrijević

    2014-07-01

    Full Text Available Fine silver powders with different shapes and sizes were prepared by chemical reduction and characterized by scanning electron microscopy. This paper presents a method for preparing fine Ag powder with particle sizes smaller than 2.5 µm that is suitable for mass production. Reduction was performed directly from nitrate solution under vigorous stirring at room temperature using three different reducing agents, with and without the presence of two dispersants. Scanning electron microscopy revealed the preferred particle size obtained in all experiments with the aid of the protecting agent. Larger particles and a wider size distribution were obtained without surfactants, although with an average size of about 1 µm and a small quantity of larger clusters of primary particles that fall outside the fine-powder classification. Silver of high purity (99.999%) was obtained in every experiment.

  11. Successful Transplantation of Reduced Sized Rat Alcoholic Fatty Livers Made Possible by Mobilization of Host Stem Cells

    Science.gov (United States)

    Hisada, Masayuki; Ota, Yoshihiro; Zhang, Xiuying; Cameron, Andrew M; Gao, Bin; Montgomery, Robert A; Williams, George Melville; Sun, Zhaoli

    2015-01-01

    Livers from Lewis rats fed with 7% alcohol for 5 weeks were used for transplantation. Reduced sized (50%) livers or whole livers were transplanted into normal DA recipients, which, in this strain combination, survive indefinitely when the donor has not been fed alcohol. However, none of the rats survived a whole fatty liver transplant, while six of seven recipients of reduced sized alcoholic liver grafts survived long term. SDF-1 and HGF were significantly increased in reduced size liver grafts compared to whole liver grafts. Lineage-negative Thy-1+CXCR4+CD133+ stem cells were significantly increased in the peripheral blood and in allografts after reduced size fatty liver transplantation. In contrast, there were meager increases in cells reactive with anti-Thy-1, CXCR4 and CD133 in peripheral blood and allografts in whole alcoholic liver recipients. The provision of plerixafor, a stem cell mobilizer, salvaged 5 of 10 whole fatty liver grafts. Conversely, blocking SDF-1 activity with neutralizing antibodies diminished stem cell recruitment and four of five reduced sized fatty liver recipients died. Thus chemokine insufficiency was associated with transplant failure of whole grafts, which was overcome by the increased regenerative requirements promoted by the small grafts and mediated by SDF-1, resulting in stem cell influx. PMID:22994609

  12. The Quantitative LOD Score: Test Statistic and Sample Size for Exclusion and Linkage of Quantitative Traits in Human Sibships

    OpenAIRE

    Page, Grier P.; Amos, Christopher I.; Boerwinkle, Eric

    1998-01-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, ...

  13. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved... The method of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials)... In publications, sample size calculations and statistical methods were often explicitly discrepant with the protocol or not pre-specified. Such amendments were rarely acknowledged in the trial publication. The reliability of trial reports cannot be assessed without having access to the full protocols.

  14. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    Science.gov (United States)

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (Δ), that is, the hypothesized difference in means (δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of δ and σ as the averaging weight, is used, and the value of Δ is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of Δ found using this method may be expressed as a function of the prior means of δ and σ, μδ and μσ, and their prior standard deviations, σδ and σσ. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for
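
    A minimal numerical sketch of the conditional-expected-power idea (not the authors' closed-form result) is given below: classical power for a two-sample comparison is averaged over prior draws of δ and σ, the effect size whose classical power matches that average is solved for, and that value is fed into a standard sample size formula. The prior parameters, planned group size and power target are hypothetical.

        # Minimal sketch: hybrid classical/Bayesian choice of the hypothesized effect
        # size. Classical power is averaged over prior draws of the mean difference
        # (delta) and standard deviation (sigma); the effect size whose classical power
        # equals this conditional expected power is then used in a standard sample size
        # formula. All prior and design values are hypothetical.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        rng = np.random.default_rng(3)
        alpha, n_planned = 0.05, 50
        z_a = norm.ppf(1 - alpha / 2)

        def power(effect_size, n):
            # normal-approximation power of a two-sample test with n per arm
            return norm.cdf(effect_size * np.sqrt(n / 2) - z_a)

        delta_draws = rng.normal(0.5, 0.15, size=100_000)         # prior on delta
        sigma_draws = np.abs(rng.normal(1.0, 0.2, size=100_000))  # prior on sigma

        cep = np.mean(power(delta_draws / sigma_draws, n_planned))  # conditional expected power

        # effect size whose classical power equals the conditional expected power
        es_star = brentq(lambda es: power(es, n_planned) - cep, 1e-6, 5.0)

        # use es_star in a traditional sample size formula targeting 80% power
        z_b = norm.ppf(0.80)
        n_needed = int(np.ceil(2 * ((z_a + z_b) / es_star) ** 2))
        print(f"conditional expected power: {cep:.3f}")
        print(f"hypothesized effect size: {es_star:.3f}, n per arm for 80% power: {n_needed}")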

  15. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

    Science.gov (United States)

    Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

    2017-10-01

    Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first one is based on powder metallurgy using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists of inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable for generating a significant grain size contrast and for controlling this contrast through the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

  16. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
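
    The method itself is available in the ssizeRNA R package cited above; the Python sketch below only illustrates the underlying idea of choosing the number of replicates so that average power across differentially expressed genes reaches a target at a per-test level consistent with FDR control. It uses a normal approximation on log-scale expression rather than the voom-based weighted linear model, and every parameter value is hypothetical.

        # Minimal sketch: replicates per group needed so that average power across
        # differentially expressed (DE) genes reaches a target while the FDR is
        # controlled. Normal approximation on log-expression; values hypothetical.
        import numpy as np
        from scipy.stats import norm

        def average_power(n, effects, sd, alpha_star):
            # power of a two-group comparison with n replicates per group
            z = norm.ppf(1 - alpha_star / 2)
            return norm.cdf(np.abs(effects) / (sd * np.sqrt(2.0 / n)) - z).mean()

        def sample_size(n_genes=20_000, prop_de=0.05, fdr=0.05, target_power=0.80,
                        sd=0.7, n_max=50, seed=4):
            rng = np.random.default_rng(seed)
            m1 = int(n_genes * prop_de)                  # DE genes
            m0 = n_genes - m1                            # non-DE genes
            effects = rng.normal(1.0, 0.5, size=m1)      # hypothetical log2 fold changes
            for n in range(2, n_max + 1):
                # fixed-point iteration for the per-test level implied by FDR control:
                # alpha* = fdr * m1 * power / ((1 - fdr) * m0)
                alpha_star = fdr / 10.0
                for _ in range(50):
                    pw = average_power(n, effects, sd, alpha_star)
                    alpha_star = fdr * m1 * pw / ((1.0 - fdr) * m0)
                if pw >= target_power:
                    return n
            return n_max

        print("replicates per group:", sample_size())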

  17. Determining Cutoff Point of Ensemble Trees Based on Sample Size in Predicting Clinical Dose with DNA Microarray Data.

    Science.gov (United States)

    Yılmaz Isıkhan, Selen; Karabulut, Erdem; Alpar, Celal Reha

    2016-01-01

    Background/Aim. Evaluating the success of dose prediction based on genetic or clinical data has substantially advanced recently. The aim of this study is to predict various clinical dose values from DNA gene expression datasets using data mining techniques. Materials and Methods. Eleven real gene expression datasets containing dose values were included. First, important genes for dose prediction were selected using iterative sure independence screening. Then, the performances of regression trees (RTs), support vector regression (SVR), RT bagging, SVR bagging, and RT boosting were examined. Results. The results demonstrated that a regression-based feature selection method substantially reduced the number of irrelevant genes from raw datasets. Overall, the best prediction performance in nine of 11 datasets was achieved using SVR; the second most accurate performance was provided using a gradient-boosting machine (GBM). Conclusion. Analysis of various dose values based on microarray gene expression data identified common genes found in our study and the referenced studies. According to our findings, SVR and GBM can be good predictors of dose-gene datasets. Another result of the study was to identify the sample size of n = 25 as a cutoff point for RT bagging to outperform a single RT.

  18. Treatment with the gap junction modifier rotigaptide (ZP123) reduces infarct size in rats with chronic myocardial infarction

    DEFF Research Database (Denmark)

    Haugan, Ketil; Marcussen, Niels; Kjølbye, Anne Louise

    2006-01-01

    Treatment with non-selective drugs (eg, long-chain alcohols, halothane) that reduce gap junction intercellular communication (GJIC) is associated with reduced infarct size after myocardial infarction (MI). Therefore, it has been suggested that gap junction intercellular communication-stimulating compounds may increase infarct size. The antiarrhythmic peptide analogue rotigaptide (ZP123) increases cardiac gap junction intercellular communication, and the purpose of the present study was to examine the effects of rotigaptide treatment on infarct size. Myocardial infarction was induced in male rats by ligation of the left anterior descending artery (LAD). Rats (n = 156) were treated with rotigaptide at three dose levels or vehicle from the onset of ischemia and for 3 weeks following LAD occlusion. Infarct size was determined using histomorphometry after 3 weeks treatment. Rotigaptide treatment producing...

  19. Sulphate reducing activity detected in soil samples from Antarctica, Ecology Glacier Forefield, King George Island.

    Science.gov (United States)

    Wolicka, Dorota; Zdanowski, Marek K; Żmuda-Baranowska, Magdalena J; Poszytek, Anna; Grzesiak, Jakub

    2014-01-01

    We determined sulphate-reducing activities in media inoculated with soils and with kettle lake sediments in order to investigate their potential in geomicrobiological processes in low-temperature, terrestrial maritime Antarctic habitats. Soil and sediment samples were collected in a glacier valley abandoned by Ecology Glacier during the last 30 years: from a newly formed kettle lake sediment and from forefield soil derived from ground moraine. Inoculated with these samples, liquid Postgate C and minimal media supplemented with various carbon sources as electron donors were incubated for 8 weeks at 4°C. High rates of sulphate reduction were observed only in media inoculated with soil. No sulphate reduction was detected in media inoculated with kettle lake sediments. In the media inoculated with soil, calcite and elemental sulphur deposits were observed, demonstrating that sulphate-reducing activity is associated with a potential for mineral formation in cold environments. Cells observed on scanning electron microscopy (SEM) micrographs of post-culture soil deposits could be responsible for the sulphate-reducing activity.

  20. Generalized SAMPLE SIZE Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    Science.gov (United States)

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interests in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed in investigating contextual effects according to the desired level of statistical power as well as width of confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of indices included in the formulas on the standard errors of contextual effects estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate finite sample behavior of calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of indices and to evaluate their potential bias, as illustrated in the example.

  1. Reduced glomerular size- and charge-selectivity in clinically healthy individuals with microalbuminuria

    DEFF Research Database (Denmark)

    Jensen, J S; Borch-Johnsen, K; Deckert, T

    1995-01-01

    The pathophysiologic mechanism behind microalbuminuria, a potential atherosclerotic risk factor, was explored by measuring fractional clearances of four endogenous plasma proteins of different size and electric charge (albumin, beta 2-microglobulin, immunoglobulin G, and immunoglobulin G4). Twenty...

  2. Invasive surgery reduces infarct size and preserves cardiac function in a porcine model of myocardial infarction

    NARCIS (Netherlands)

    G.P.J. van Hout (G. P J); M.P.J. Teuben (Michel P.J.); M. Heeres (Marjolein); S. de Maat (Steven); R. Jong (Rosa); C. Maas (Coen); L.H.J.A. Kouwenberg (Lisanne H.J.A.); L. Koenderman (Leo); W.W. van Solinge (Wouter W.); S.C.A. de Jager (Saskia); G. Pasterkamp (Gerard); I.E. Hoefer (Imo)

    2015-01-01

    Reperfusion injury following myocardial infarction (MI) increases infarct size (IS) and deteriorates cardiac function. Cardioprotective strategies in large animal MI models often failed in clinical trials, suggesting

  3. Invasive surgery reduces infarct size and preserves cardiac function in a porcine model of myocardial infarction

    NARCIS (Netherlands)

    van Hout, Gerardus P J; Teuben, Michel P J; Heeres, Marjolein; de Maat, Steven; de Jong, Renate; Maas, Coen; Kouwenberg, Lisanne H J A; Koenderman, Leo; van Solinge, Wouter W; de Jager, Saskia C A; Pasterkamp, Gerard; Höfer, IE

    2015-01-01

    Reperfusion injury following myocardial infarction (MI) increases infarct size (IS) and deteriorates cardiac function. Cardioprotective strategies in large animal MI models often failed in clinical trials, suggesting translational failure. Experimentally, MI is induced artificially and the effect of

  4. Implicit sampling combined with reduced order modeling for the inversion of vadose zone hydrological data

    Science.gov (United States)

    Liu, Yaning; Pau, George Shu Heng; Finsterle, Stefan

    2017-11-01

    Bayesian inverse modeling techniques are computationally expensive because many forward simulations are needed when sampling the posterior distribution of the parameters. In this paper, we combine the implicit sampling method and generalized polynomial chaos expansion (gPCE) to significantly reduce the computational cost of performing Bayesian inverse modeling. There are three steps in this approach: (1) find the maximizer of the likelihood function using deterministic approaches; (2) construct a gPCE-based surrogate model using the results from a limited number of forward simulations; and (3) efficiently sample the posterior distribution of the parameters using the implicit sampling method. The cost of constructing the gPCE-based surrogate model is further decreased by using sparse Bayesian learning to reduce the number of gPCE coefficients that have to be determined. We demonstrate the approach for a synthetic ponded infiltration experiment simulated with TOUGH2. The surrogate model is highly accurate, with a small mean relative error, and the resulting posterior estimates are comparable to those obtained with the implicit sampling method or a Markov chain Monte Carlo method utilizing the full model.

  5. Reduced size-independent mechanical properties of cortical bone in high-fat diet-induced obesity.

    Science.gov (United States)

    Ionova-Martin, S S; Do, S H; Barth, H D; Szadkowska, M; Porter, A E; Ager, J W; Alliston, T; Vaisse, C; Ritchie, R O

    2010-01-01

    Overweight and obesity are rapidly expanding health problems in children and adolescents. Obesity is associated with greater bone mineral content that might be expected to protect against fracture, which has been observed in adults. Paradoxically, however, the incidence of bone fractures has been found to increase in overweight and obese children and adolescents. Prior studies have shown some reduced mechanical properties as a result of high-fat diet (HFD) but do not fully address size-independent measures of mechanical properties, which are important to understand material behavior. To clarify the effects of HFD on the mechanical properties and microstructure of bone, femora from C57BL/6 mice fed either a HFD or standard laboratory chow (Chow) were evaluated for structural changes and tested for bending strength, bending stiffness and fracture toughness. Here, we find that in young, obese, high-fat fed mice, all geometric parameters of the femoral bone, except length, are increased, but strength, bending stiffness, and fracture toughness are all reduced. This increased bone size and reduced size-independent mechanical properties suggests that obesity leads to a general reduction in bone quality despite an increase in bone quantity; yield and maximum loads, however, remained unchanged, suggesting compensatory mechanisms. We conclude that diet-induced obesity increases bone size and reduces size-independent mechanical properties of cortical bone in mice. This study indicates that bone quantity and bone quality play important compensatory roles in determining fracture risk. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  6. A Self-Organizing Map-Based Approach to Generating Reduced-Size, Statistically Similar Climate Datasets

    Science.gov (United States)

    Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.

    2015-12-01

    Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data lead to high-fidelity results, its processing can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when rapid response is needed in crisis management and disaster planning scenarios resulting from release of toxic material in the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest to produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from them by percentile. Using a 30-node SOM, with sampling every 20th percentile, we have been able to reduce 30 years of the Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap
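
    The 'typical days' reduction can be mimicked as follows: a small self-organizing map is trained on day-by-day predictor vectors, each day is assigned to its best-matching node, and days are drawn from each node's distance distribution at fixed percentiles. The SOM below is a bare-bones NumPy implementation and random data stand in for the reanalysis fields, so only the shape of the workflow is reproduced.

        # Minimal sketch: reduce a multi-year daily dataset to a set of "typical days"
        # with a small self-organizing map (SOM). Random data stand in for the
        # meteorological predictor vectors; the SOM is a bare-bones implementation.
        import numpy as np

        rng = np.random.default_rng(5)
        days = rng.normal(size=(900, 40))        # 900 days x 40 predictor features

        # train a simple 5 x 6 SOM
        rows, cols, n_iter = 5, 6, 3000
        nodes = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        weights = rng.normal(size=(rows * cols, days.shape[1]))

        for it in range(n_iter):
            x = days[rng.integers(len(days))]
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))     # best-matching unit
            lr = 0.5 * (1 - it / n_iter)                          # decaying learning rate
            radius = 2.0 * (1 - it / n_iter) + 0.5
            dist2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * radius ** 2))                # neighborhood function
            weights += lr * h[:, None] * (x - weights)

        # assign each day to its closest node and sample by percentile of distance
        assignment = np.argmin(((days[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
        typical_days = []
        for node in range(rows * cols):
            members = np.where(assignment == node)[0]
            if members.size == 0:
                continue
            d = ((days[members] - weights[node]) ** 2).sum(axis=1)
            order = members[np.argsort(d)]
            # take days at every 20th percentile of the node's distance distribution
            picks = order[(np.array([0, 20, 40, 60, 80]) / 100 * (len(order) - 1)).astype(int)]
            typical_days.extend(np.unique(picks))

        print(f"{len(days)} days reduced to {len(set(typical_days))} typical days")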

  7. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
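
    Of the two summary-statistic classes mentioned above, the folded allele frequency spectrum is simple to compute from unphased, unpolarized genotypes. The sketch below is a generic numpy illustration, not code from PopSizeABC, and the array names are ours.

```python
import numpy as np

def folded_afs(genotypes):
    """Folded allele frequency spectrum from unpolarized, unphased diploid genotypes.

    genotypes : (n_individuals, n_snps) array of alternate-allele counts (0, 1 or 2)
    Returns counts of SNPs whose minor-allele count is 1, 2, ..., n_individuals.
    """
    n_chrom = 2 * genotypes.shape[0]                 # number of sampled chromosomes
    alt_counts = genotypes.sum(axis=0)               # alternate-allele count per SNP
    minor_counts = np.minimum(alt_counts, n_chrom - alt_counts)
    # Folding ignores which allele is ancestral, so bins run from 1 to n_chrom // 2.
    return np.bincount(minor_counts, minlength=n_chrom // 2 + 1)[1:]

# Example: 25 diploid genomes and 10,000 SNPs with random genotypes.
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(25, 10_000))
print(folded_afs(genotypes))
```

    The linkage-disequilibrium statistics used alongside the spectrum would additionally require pairing SNPs by physical distance, which is omitted here.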

  8. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    SUMMARY Disease cases are often clustered within herds or generally groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average meas...... subsp. paratuberculosis infection, in Danish dairy cattle and a study on critical control points for Salmonella cross-contamination of pork, in Greek slaughterhouses....
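
    The abstract is truncated, but the traditional adjustment it refers to is the design effect based on an average intra-cluster correlation. The sketch below shows that conventional calculation only, not the risk-profile-specific refinement the paper proposes.

```python
def clustered_sample_size(n_srs, cluster_size, icc):
    """Inflate a simple-random-sampling sample size to account for clustering.

    n_srs        : sample size required under simple random sampling
    cluster_size : average number of units sampled per herd or group
    icc          : intra-cluster correlation coefficient
    """
    design_effect = 1 + (cluster_size - 1) * icc
    return int(round(n_srs * design_effect))

# e.g. 300 animals needed under SRS, 20 animals tested per herd, ICC = 0.1
print(clustered_sample_size(300, 20, 0.1))  # -> 870
```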

  9. Acarbose reduces myocardial infarct size by preventing postprandial hyperglycemia and hydroxyl radical production and opening mitochondrial KATP channels in rabbits.

    Science.gov (United States)

    Minatoguchi, Shinya; Zhang, Zengi; Bao, Narentuoya; Kobayashi, Hiroyuki; Yasuda, Shinji; Iwasa, Masamitsu; Sumi, Syouhei; Kawamura, Itta; Yamada, Yoshihisa; Nishigaki, Kazuhiko; Takemura, Genzou; Fujiwara, Takako; Fujiwara, Hisayoshi

    2009-07-01

    Acarbose, an antidiabetic drug, is an alpha-glucosidase inhibitor that can inhibit glucose absorption in the intestine. A recent large-scale clinical trial, STOP-NIDDM, showed that acarbose reduces the risk of myocardial infarction. We examined whether acarbose reduces myocardial infarct size and investigated its mechanisms. Rabbits were fed 1 of 2 diets for 7 days: normal chow or chow containing 30 mg acarbose per 100 g. Rabbits were assigned randomly to 1 of 4 groups: control (n = 10), acarbose (n = 10), acarbose + 5HD (n = 10, intravenous 5 mg/kg of 5-hydroxydecanoate), and 5HD (n = 10, intravenous 5 mg/kg of 5HD). Rabbits then underwent 30 minutes of coronary occlusion followed by 48-hour reperfusion. Postprandial blood glucose levels were higher in the control group than in the acarbose group. The infarct size as a percentage of the left ventricular area at risk was reduced significantly in the acarbose group (19.4% +/- 2.3%) compared with the control group (42.8% +/- 5.4%). The infarct size-reducing effect of acarbose was abolished by 5HD (43.4% +/- 4.7%). Myocardial interstitial 2,5-dihydroxybenzoic acid levels, an indicator of hydroxyl radicals, increased during reperfusion after 30 minutes of ischemia, but this increase was inhibited in the acarbose group. This was reversed by 5HD. Acarbose reduces myocardial infarct size by opening mitochondrial KATP channels, which may be related to the prevention of postprandial hyperglycemia and hydroxyl radical production.

  10. Validation of fixed sample size plans for monitoring lepidopteran pests of Brassica oleracea crops in North Korea.

    Science.gov (United States)

    Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J

    2009-06-01

    The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different ETs for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in the classification certainty with increasing sample sizes could be seen through the increasing steepness of operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
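
    The resampling assessment of precision described above can be sketched as follows; the field data here are simulated from an aggregated (negative binomial) distribution purely for illustration, and the parameter values are not from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def precision_success_rate(counts, sample_size, n_boot=2000, target_d=0.3):
    """Proportion of bootstrap samples in which a fixed plan achieves D = SE/mean <= target_d.

    counts      : per-plant standard-insect counts from a scouted field
    sample_size : number of plants inspected under the fixed plan
    """
    counts = np.asarray(counts, dtype=float)
    d_values = []
    for _ in range(n_boot):
        sample = rng.choice(counts, size=sample_size, replace=True)
        mean = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(sample_size)
        d_values.append(se / mean if mean > 0 else np.inf)
    return np.mean(np.array(d_values) <= target_d)

# Hypothetical aggregated pest counts on 500 plants
field = rng.negative_binomial(1, 0.4, size=500)
for n in (10, 20, 30, 45):
    print(n, precision_success_rate(field, n))
```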

  11. A simple method to generate equal-sized homogenous strata or clusters for population-based sampling.

    Science.gov (United States)

    Elliott, Michael R

    2011-04-01

    Statistical efficiency and cost efficiency can be achieved in population-based samples through stratification and/or clustering. Strata typically combine subgroups of the population that are similar with respect to an outcome. Clusters are often taken from preexisting units, but may be formed to minimize between-cluster variance, or to equalize exposure to a treatment or risk factor. Area probability sample design procedures for the National Children's Study required contiguous strata and clusters that maximized within-stratum and within-cluster homogeneity while maintaining approximately equal size of the strata or clusters. However, there were few methods that allowed such strata or clusters to be constructed under these contiguity and equal size constraints. A search algorithm generates equal-size cluster sets that approximately span the space of all possible clusters of equal size. An optimal cluster set is chosen based on analysis of variance and convexity criteria. The proposed algorithm is used to construct 10 strata based on demographics and air pollution measures in Kent County, MI, following census tract boundaries. A brief simulation study is also conducted. The proposed algorithm is effective at uncovering underlying clusters from noisy data. It can be used in multi-stage sampling where equal-size strata or clusters are desired. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Effects of sample size on differential gene expression, rank order and prediction accuracy of a gene signature.

    Directory of Open Access Journals (Sweden)

    Cynthia Stretch

    Full Text Available Top differentially expressed gene lists are often inconsistent between studies and it has been suggested that small sample sizes contribute to lack of reproducibility and poor prediction accuracy in discriminative models. We considered sex differences (69♂, 65♀) in 134 human skeletal muscle biopsies using DNA microarray. The full dataset and subsamples thereof (from n = 10 (5♂, 5♀) to n = 120 (60♂, 60♀)) were used to assess the effect of sample size on the differential expression of single genes, gene rank order and prediction accuracy. Using our full dataset (n = 134), we identified 717 differentially expressed transcripts (p<0.0001) and we were able to predict sex with ~90% accuracy, both within our dataset and on external datasets. Both p-values and rank order of top differentially expressed genes became more variable using smaller subsamples. For example, at n = 10 (5♂, 5♀), no gene was considered differentially expressed at p<0.0001 and prediction accuracy was ~50% (no better than chance). We found that sample size clearly affects microarray analysis results; small sample sizes result in unstable gene lists and poor prediction accuracy. We anticipate this will apply to other phenotypes, in addition to sex.
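
    The subsampling idea is easy to reproduce on simulated data. The sketch below uses Welch t-tests on synthetic log-expression values rather than the authors' microarray pipeline, and all parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated log-expression: 1000 genes x (65 female + 69 male) samples,
# with a true sex effect in the first 50 genes.
n_genes, n_f, n_m = 1000, 65, 69
data = rng.normal(0.0, 1.0, size=(n_genes, n_f + n_m))
data[:50, n_f:] += 1.0
sex = np.array([0] * n_f + [1] * n_m)

def top_genes(per_group, k=50):
    """Indices of the k smallest-p genes from a random subsample of each group."""
    f_idx = rng.choice(np.where(sex == 0)[0], per_group, replace=False)
    m_idx = rng.choice(np.where(sex == 1)[0], per_group, replace=False)
    p = stats.ttest_ind(data[:, f_idx], data[:, m_idx], axis=1, equal_var=False).pvalue
    return set(np.argsort(p)[:k])

# Overlap of top-gene lists between two independent subsamples shrinks with n.
for n in (5, 20, 60):
    overlap = len(top_genes(n) & top_genes(n))
    print(f"n = {n} per group: {overlap}/50 top genes shared between two subsamples")
```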

  13. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Science.gov (United States)

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  14. Analytical solutions to sampling effects in drop size distribution measurements during stationary rainfall: Estimation of bulk rainfall variables

    NARCIS (Netherlands)

    Uijlenhoet, R.; Porrà, J.M.; Sempere Torres, D.; Creutin, J.D.

    2006-01-01

    A stochastic model of the microstructure of rainfall is used to derive explicit expressions for the magnitude of the sampling fluctuations in rainfall properties estimated from raindrop size measurements in stationary rainfall. The model is a marked point process, in which the points represent the

  15. Survey Research: Determining Sample Size and Representative Response. and The Effects of Computer Use on Keyboarding Technique and Skill.

    Science.gov (United States)

    Wunsch, Daniel R.; Gades, Robert E.

    1986-01-01

    Two articles are presented. The first reviews and suggests procedures for determining appropriate sample sizes and for determining the response representativeness in survey research. The second presents a study designed to determine the effects of computer use on keyboarding technique and skill. (CT)

  16. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    Science.gov (United States)

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  17. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    Science.gov (United States)

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
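
    For orientation, the weighted and dual tests can be written down in a few lines for the simple case of two stages with normally distributed test statistics. This is the generic construction with pre-specified weights, not code from the paper, and the variable names are ours.

```python
from math import sqrt
from scipy.stats import norm

def weighted_test(z1, z2, w1, alpha=0.025):
    """Two-stage weighted Z-combination test with pre-specified weights.

    z1, z2 : independent stage-wise Z statistics (stage 2 uses only post-interim data)
    w1     : pre-specified stage-1 weight, 0 < w1 < 1 (stage-2 weight is sqrt(1 - w1**2))
    The type-I error rate is preserved even if the stage-2 sample size is
    re-estimated at the interim, because the weights do not change.
    """
    z_comb = w1 * z1 + sqrt(1.0 - w1 ** 2) * z2
    return z_comb > norm.ppf(1.0 - alpha)

def dual_test(z1, z2, n1, n2, w1, alpha=0.025):
    """Reject only if both the naive pooled Z and the weighted Z exceed z_{1-alpha}."""
    z_naive = (sqrt(n1) * z1 + sqrt(n2) * z2) / sqrt(n1 + n2)
    return weighted_test(z1, z2, w1, alpha) and z_naive > norm.ppf(1.0 - alpha)
```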

  18. An Informatics Based Approach to Reduce the Grain Size of Cast Hadfield Steel

    Science.gov (United States)

    Dey, Swati; Pathak, Shankha; Sheoran, Sumit; Kela, Damodar H.; Datta, Shubhabrata

    2016-04-01

    Materials informatics concepts based on computational intelligence are employed to identify the significant alloying additions for achieving grain refinement in cast Hadfield steel. Castings of Hadfield steel used for railway crossings require a fine-grained austenitic structure. Maintaining the proper grain size of this component is crucial for achieving the desired properties and service life. This work studies the important variables affecting the grain size of such steels, including compositional and processing variables. The computational findings and prior knowledge are used to design the alloy, which is subjected to a few trials to validate the findings.

  19. Low vacuum and discard tubes reduce hemolysis in samples drawn from intravenous catheters.

    Science.gov (United States)

    Heiligers-Duckers, Connie; Peters, Nathalie A L R; van Dijck, Jose J P; Hoeijmakers, Jan M J; Janssen, Marcel J W

    2013-08-01

    In-vitro hemolysis is a great challenge to emergency departments where blood is drawn from intravenous catheters (IVCs). Although high quality samples can be obtained by straight needle venipuncture, IVCs are preferred for various reasons. The aim of this study was to identify blood collection practices that reduce hemolysis while using IVCs. The study was conducted at an emergency department where blood is drawn in ≥ 90% of patients from IVCs. Hemolysis, measured spectrophotometrically, was compared between syringe and vacuum tubes. The following practices were tested in combination with vacuum collection: a Luer-slip adapter, a Luer-lock adapter, discard tubes and low vacuum tubes. Each intervention lasted 1 week and retrieved 154 to 297 samples. As reference, hemolysis was also measured in vacuum tubes retrieved from departments where only straight needle venipuncture is performed. Vacuum collection led to more hemolytic samples compared with syringe tubes (24% versus 16%, respectively, p=0.008). No difference in hemolysis was observed between the Luer-slip and the Luer-lock adapter. The use of discard tubes (17% hemolytic, p=0.045) and low vacuum tubes (12% hemolytic) reduced hemolysis while drawing blood from IVCs. Of these practices, the use of a low vacuum tube is preferred, considering the smaller volume of blood and the number of tubes drawn. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  20. Improving Precision and Reducing Runtime of Microscopic Traffic Simulators through Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Khewal Bhupendra Kesur

    2013-01-01

    Full Text Available This paper examines the application of Latin Hypercube Sampling (LHS and Antithetic Variables (AVs to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AV allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined, one where stratification is applied to headways and routing decisions of individual vehicles and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AV, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators as fewer simulations are required to achieve a fixed level of precision with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
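
    A small experiment conveys the variance-reduction mechanism. The sketch below stratifies the uniform inputs that drive exponentially distributed vehicle headways with scipy's Latin Hypercube sampler and compares the spread of the resulting mean-headway estimates with independent sampling; the traffic model is deliberately trivial and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import qmc, expon

rng = np.random.default_rng(3)
n_vehicles, n_runs, mean_headway = 200, 500, 2.5   # headway in seconds

def mean_headway_estimates(use_lhs):
    estimates = []
    for run in range(n_runs):
        if use_lhs:
            # Stratified uniforms: one draw in each of n_vehicles equal-probability bins.
            u = qmc.LatinHypercube(d=1, seed=run).random(n_vehicles).ravel()
        else:
            u = rng.random(n_vehicles)
        headways = expon.ppf(u, scale=mean_headway)  # inverse-CDF sampling
        estimates.append(headways.mean())
    return np.array(estimates)

print("std of mean-headway estimate, independent:", mean_headway_estimates(False).std())
print("std of mean-headway estimate, LHS        :", mean_headway_estimates(True).std())
```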

  1. Bayesian adaptive approach to estimating sample sizes for seizures of illicit drugs.

    Science.gov (United States)

    Moroni, Rossana; Aalberg, Laura; Reinikainen, Tapani; Corander, Jukka

    2012-01-01

    A considerable amount of discussion can be found in the forensics literature about the issue of using statistical sampling to obtain, for chemical analyses, an appropriate subset of units from a police seizure suspected to contain illicit material. Use of the Bayesian paradigm has been suggested as the most suitable statistical approach to the question of how large a sample needs to be to serve legally and practically acceptable purposes. Here, we introduce a hypergeometric sampling model combined with a specific prior distribution for the homogeneity of the seizure, where a parameter for the analyst's expectation of homogeneity (α) is included. Our results show how an adaptive approach to sampling can minimize the practical efforts needed in the laboratory analyses, as the model allows the scientist to decide sequentially how to proceed, while maintaining a sufficiently high confidence in the conclusions. © 2011 American Academy of Forensic Sciences.
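
    The sequential logic can be illustrated with a deliberately simplified version of the model: a Beta prior on the proportion of illicit units and a binomial likelihood, rather than the hypergeometric model with a homogeneity parameter used in the paper. The prior, threshold and confidence values below are arbitrary.

```python
from scipy.stats import beta

def units_needed(prior_a=1.0, prior_b=1.0, threshold=0.5, confidence=0.95, max_n=50):
    """Smallest number of analysed units, all found positive, for which the posterior
    probability that the seizure proportion exceeds `threshold` reaches `confidence`."""
    for n in range(1, max_n + 1):
        posterior = beta(prior_a + n, prior_b)   # n positives, 0 negatives
        if posterior.sf(threshold) >= confidence:
            return n
    return None

# With a uniform prior: how many consecutive positives are needed to be 95% sure
# that more than half of the seizure contains illicit material?
print(units_needed())  # -> 4 under these assumptions
```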

  2. Community size and metabolic rates of psychrophilic sulfate-reducing bacteria in Arctic marine sediments

    DEFF Research Database (Denmark)

    Knoblauch, C.; Jørgensen, BB; Harder, J.

    1999-01-01

    of 19 isolated psychrophiles were compared to corresponding rates of 9 marine, mesophilic sulfate-reducing bacteria. The results indicate that, as a physiological adaptation to the permanently cold Arctic environment, psychrophilic sulfate reducers have considerably higher specific metabolic rates than...

  3. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required and the difficulty in attaining reliable estimates, we advise caution before such a mark-recapture effort is initiated. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  4. Sample Size Effect of Magnetomechanical Response for Magnetic Elastomers by Using Permanent Magnets

    Directory of Open Access Journals (Sweden)

    Tsubasa Oguro

    2017-01-01

    Full Text Available The size effect on the magnetomechanical response of chemically cross-linked, disk-shaped magnetic elastomers placed on a permanent magnet has been investigated by unidirectional compression tests. A cylindrical permanent magnet with a size of 35 mm in diameter and 15 mm in height was used to create the magnetic field. The magnetic field strength was approximately 420 mT at the center of the upper surface of the magnet. The diameter of the magnetoelastic polymer disks was varied from 14 mm to 35 mm, whereas the height was kept constant (5 mm) in the undeformed state. We have studied the influence of the disk diameter on the stress-strain behavior of the magnetic elastomers in the presence and absence of a magnetic field. It was found that the smallest magnetic elastomer, with 14 mm diameter, did not exhibit a measurable magnetomechanical response to the magnetic field. By contrast, the magnetic elastomers with diameters larger than 30 mm contracted in the direction parallel to the mechanical stress and largely elongated in the perpendicular direction. An explanation is put forward to interpret this size-dependent behavior by taking into account the nonuniform distribution of the magnetic field produced by the permanent magnet.

  5. Optimal foraging on perilous prey: risk of bill damage reduces optimal prey size in oystercatchers

    NARCIS (Netherlands)

    Rutten, AL; Oosterbeek, K; Ens, Bruno J.; Verhulst, S

    2006-01-01

    Intake rate maximization alone is not always sufficient in explaining prey size selection in predators. For example, bivalve-feeding oystercatchers regularly select smaller prey than expected if they aimed to maximize their intake rate. It has been proposed that to these birds large prey are

  6. Optimal foraging on perilous prey: risk of bill damage reduces optimal prey size in oystercatchers

    NARCIS (Netherlands)

    Rutten, A.L.; Oosterbeek, K.H.; Ens, B.J.; Verhulst, S.

    2006-01-01

    Intake rate maximization alone is not always sufficient in explaining prey size selection in predators. For example, bivalve-feeding oystercatchers regularly select smaller prey than expected if they aimed to maximize their intake rate. It has been proposed that to these birds large prey are

  7. Buffer sizing to reduce interference and increase throughput of real-time stream processing applications

    NARCIS (Netherlands)

    Kurtin, Philip Sebastian; Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit

    2015-01-01

    Existing temporal analysis and buffer sizing techniques for real-time stream processing applications ignore that FIFO buffers bound interference between tasks on the same processor. By considering this effect it can be shown that a reduction of buffer capacities can result in a higher throughput.

  8. Rapid short-duration hypothermia with cold saline and endovascular cooling before reperfusion reduces microvascular obstruction and myocardial infarct size

    Directory of Open Access Journals (Sweden)

    Heiberg Einar

    2008-04-01

    Full Text Available Abstract Background: The aim of this study was to evaluate the combination of a rapid intravenous infusion of cold saline and endovascular hypothermia in a closed chest pig infarct model. Methods: Pigs were randomized to pre-reperfusion hypothermia (n = 7), post-reperfusion hypothermia (n = 7) or normothermia (n = 5). A percutaneous coronary intervention balloon was inflated in the left anterior descending artery for 40 min. Hypothermia was started after 25 min of ischemia or immediately after reperfusion by infusion of 1000 ml of 4°C saline and endovascular hypothermia. Area at risk was evaluated by in vivo SPECT. Infarct size was evaluated by ex vivo MRI. Results: Pre-reperfusion hypothermia reduced infarct size/area at risk by 43% (46 ± 8%) compared to post-reperfusion hypothermia (80 ± 6%). Conclusion: Rapid hypothermia with cold saline and endovascular cooling before reperfusion reduces myocardial infarct size and microvascular obstruction. A novel finding is that hypothermia at the onset of reperfusion reduces microvascular obstruction without reducing myocardial infarct size. Intravenous administration of cold saline combined with endovascular hypothermia provides a method for rapid induction of hypothermia, suggesting a potential clinical application.

  9. Comparing two psychological interventions in reducing impulsive processes of eating behaviour: Effects on self-selected portion size

    NARCIS (Netherlands)

    Koningsbruggen, G.M. van; Veling, H.P.; Stroebe, W.; Aarts, H.A.G.

    2014-01-01

    Objective Palatable food, such as sweets, contains properties that automatically trigger the impulse to consume it even when people have goals or intentions to refrain from consuming such food. We compared the effectiveness of two interventions in reducing the portion size of palatable food that

  10. Comparing two psychological interventions in reducing impulsive processes of eating behaviour : Effects on self-selected portion size

    NARCIS (Netherlands)

    van Koningsbruggen, G.M.; Veling, H.P.; Stroebe, Wolfgang; Aarts, Henk

    2014-01-01

    Objective Palatable food, such as sweets, contains properties that automatically trigger the impulse to consume it even when people have goals or intentions to refrain from consuming such food. We compared the effectiveness of two interventions in reducing the portion size of palatable food that

  11. BIOREACTOR ECONOMICS, SIZE AND TIME OF OPERATION (BEST) COMPUTER SIMULATOR FOR DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS

    Science.gov (United States)

    BEST (bioreactor economics, size and time of operation) is an Excel™ spreadsheet-based model that is used in conjunction with the public domain geochemical modeling software, PHREEQCI. The BEST model is used in the design process of sulfate-reducing bacteria (SRB) field bioreacto...

  12. Derivatives of a statically reduced stiffness matrix with respect to sizing variables. [for aircraft weight minimization with flutter constraints

    Science.gov (United States)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1976-01-01

    An expression is obtained for the first derivatives with respect to the sizing variables of a statically reduced stiffness matrix that is a nonlinear function of the sizing variables, where the unreduced stiffness matrix is a linear function of the sizing variables. An accepted procedure to reduce the number of degrees of freedom is to eliminate a number of nodal displacements from the degrees of freedom such that the accuracy of the flutter analysis is not significantly affected. In a typical optimization procedure with flutter constraints, the derivative of the stiffness matrix may be used in a form that contains the characteristic vector of the flutter matrix equation and the transpose of the characteristic vector of the adjoint flutter matrix equation corresponding to a particular solution of the flutter equation.
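
    In standard static (Guyan) condensation notation, with retained degrees of freedom r and eliminated degrees of freedom e, the reduced stiffness matrix and its sizing derivative take the textbook form below; this is sketched for orientation and is not quoted from the paper.

```latex
\[
K_{\mathrm{red}} = K_{rr} - K_{re}K_{ee}^{-1}K_{er},
\qquad
\frac{\partial K_{\mathrm{red}}}{\partial x_i}
  = \frac{\partial K_{rr}}{\partial x_i}
  - \frac{\partial K_{re}}{\partial x_i}K_{ee}^{-1}K_{er}
  + K_{re}K_{ee}^{-1}\frac{\partial K_{ee}}{\partial x_i}K_{ee}^{-1}K_{er}
  - K_{re}K_{ee}^{-1}\frac{\partial K_{er}}{\partial x_i}.
\]
```

    Because the unreduced stiffness matrix is linear in the sizing variables, the partitioned derivatives on the right are constant matrices, while the inverse factors make the reduced matrix, and hence its derivative, nonlinear in the sizing variables, as noted in the abstract.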

  13. Measurements of Plutonium and Americium in Soil Samples from Project 57 using the Suspended Soil Particle Sizing System (SSPSS)

    Energy Technology Data Exchange (ETDEWEB)

    John L. Bowen; Rowena Gonzalez; David S. Shafer

    2001-05-01

    As part of the preliminary site characterization conducted for Project 57, soils samples were collected for separation into several size-fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it would represent anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculation and corrective action-level determinations for future land-use scenarios at the site.

  14. Novel, selective EPO receptor ligands lacking erythropoietic activity reduce infarct size in acute myocardial infarction in rats.

    Science.gov (United States)

    Kiss, Krisztina; Csonka, Csaba; Pálóczi, János; Pipis, Judit; Görbe, Anikó; Kocsis, Gabriella F; Murlasits, Zsolt; Sárközy, Márta; Szűcs, Gergő; Holmes, Christopher P; Pan, Yijun; Bhandari, Ashok; Csont, Tamás; Shamloo, Mehrdad; Woodburn, Kathryn W; Ferdinandy, Péter; Bencsik, Péter

    2016-11-01

    Erythropoietin (EPO) has been shown to protect the heart against acute myocardial infarction in pre-clinical studies; however, EPO failed to reduce infarct size in clinical trials and showed significant safety problems. Here, we investigated the cardioprotective effects of two selective non-erythropoietic EPO receptor ligand dimeric peptides (AF41676 and AF43136) lacking erythropoietic activity, EPO, and the prolonged half-life EPO analogue darbepoetin in acute myocardial infarction (AMI) in rats. In a pilot study, EPO at 100 U/mL significantly decreased cell death compared to vehicle (33.8±2.3% vs. 40.3±1.5%). Infarct size (IS) was measured by standard TTC staining. In study 1, 5000 U/kg EPO reduced infarct size significantly compared to vehicle (45.3±4.8% vs. 59.8±4.5%). A significant infarct size-reducing effect was also observed at 5 μg/kg compared to the vehicle (44.4±5.7% vs. 65.9±2.7%), and infarct size was reduced in studies 1-3 by approximately 35%. In study 4, AF43136 at 10 mg/kg decreased infarct size similarly to the positive control CsA, compared to the appropriate vehicle (39.4±5.9% vs. 58.1±5.4% and 45.9±2.4% vs. 63.8±4.1%, respectively), reducing infarct size in a rat model of AMI. Therefore, non-erythropoietic EPO receptor peptide ligands may be promising cardioprotective agents. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study.

    Science.gov (United States)

    Teare, M Dawn; Dimairo, Munyaradzi; Shephard, Neil; Hayman, Alex; Whitehead, Amy; Walters, Stephen J

    2014-07-03

    External pilot or feasibility studies can be used to estimate key unknown parameters to inform the design of the definitive randomised controlled trial (RCT). However, there is little consensus on how large pilot studies need to be, and some suggest inflating estimates to adjust for the lack of precision when planning the definitive RCT. We use a simulation approach to illustrate the sampling distribution of the standard deviation for continuous outcomes and the event rate for binary outcomes. We present the impact of increasing the pilot sample size on the precision and bias of these estimates, and predicted power under three realistic scenarios. We also illustrate the consequences of using a confidence interval argument to inflate estimates so the required power is achieved with a pre-specified level of confidence. We limit our attention to external pilot and feasibility studies prior to a two-parallel-balanced-group superiority RCT. For normally distributed outcomes, the relative gain in precision of the pooled standard deviation (SDp) is less than 10% (for each five subjects added per group) once the total sample size is 70. For true proportions between 0.1 and 0.5, we find the gain in precision for each five subjects added to the pilot sample is less than 5% once the sample size is 60. Adjusting the required sample sizes for the imprecision in the pilot study estimates can result in excessively large definitive RCTs and also requires a pilot sample size of 60 to 90 for the true effect sizes considered here. We recommend that an external pilot study has at least 70 measured subjects (35 per group) when estimating the SDp for a continuous outcome. If the event rate in an intervention group needs to be estimated by the pilot then a total of 60 to 100 subjects is required. Hence if the primary outcome is binary a total of at least 120 subjects (60 in each group) may be required in the pilot trial. It is very much more efficient to use a larger pilot study, than to
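
    The core simulation is straightforward to reproduce. The numpy sketch below draws repeated two-group pilot studies from a normal outcome and tracks the relative spread of the pooled standard deviation; it mirrors the logic described above but is not the authors' code, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2024)

def pooled_sd_spread(n_per_group, n_sim=10_000, true_sd=1.0):
    """Relative spread of the pooled SD estimate from a two-group pilot study."""
    sds = []
    for _ in range(n_sim):
        a = rng.normal(0.0, true_sd, n_per_group)
        b = rng.normal(0.0, true_sd, n_per_group)
        pooled_var = (a.var(ddof=1) + b.var(ddof=1)) / 2.0   # equal group sizes
        sds.append(np.sqrt(pooled_var))
    return np.std(sds) / true_sd   # relative imprecision of SDp

for n in (10, 20, 35, 60):
    print(f"{n} per group: relative spread of SDp = {pooled_sd_spread(n):.3f}")
```

    The gain in precision flattens beyond roughly 35 subjects per group, which is consistent with the 70-subject recommendation above.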

  16. A convenient method and numerical tables for sample size determination in longitudinal-experimental research using multilevel models.

    Science.gov (United States)

    Usami, Satoshi

    2014-12-01

    Recent years have shown increased awareness of the importance of sample size determination in experimental research. Yet effective and convenient methods for sample size determination, especially in longitudinal experimental design, are still under development, and application of power analysis in applied research remains limited. This article presents a convenient method for sample size determination in longitudinal experimental research using a multilevel model. A fundamental idea of this method is the transformation of model parameters (level 1 error variance [σ²], level 2 error variances [τ00, τ11] and their covariance [τ01, τ10], and a parameter representing the experimental effect [δ]) into indices (reliability of measurement at the first time point [ρ1], effect size at the last time point [ΔT], proportion of variance of outcomes between the first and the last time points [k], and level 2 error correlation [r]) that are intuitively understandable and easily specified. To foster more convenient use of power analysis, numerical tables are constructed, with reference to ANOVA results, to investigate the influence of the respective indices on statistical power.

  17. [Comparison of characteristics of heavy metals in different grain sizes of intertidalite sediment by using grid sampling method].

    Science.gov (United States)

    Liang, Tao; Chen, Yan; Zhang, Chao-sheng; Li, Hai-tao; Chong, Zhong-yi; Song, Wen-chong

    2008-02-01

    A total of 384 surface sediment samples were collected from the mud flat, silt flat and mud-silt flat of Bohai Bay at 1 m and 10 m intervals using a grid sampling method. Concentrations of Al, Fe, Ti, Mn, Ba, Sr, Zn, Cr, Ni and Cu in each sample were measured by ICP-AES. To examine the distribution and concentration characteristics of these heavy metals, their concentrations were compared between districts with different grain sizes. The results show that differences in grain size cause remarkable differences in heavy metal concentrations. Total concentrations of heavy metals are 147.37 g·kg⁻¹, 98.68 g·kg⁻¹ and 94.27 g·kg⁻¹ in the mud flat, mud-silt flat and silt flat, respectively. The majority of heavy metals tend to concentrate in fine-grained mud, while Ba and Sr tend to concentrate in coarse-grained silt, which contains more K2O·Al2O3·6SiO2. The concentration of Sr is affected significantly by grain size, while the concentrations of Cr and Ti are only slightly affected.

  18. Reduced Particle size of plant material does not stimulate decomposition but affects the microbivorous microfauna

    DEFF Research Database (Denmark)

    Vestergaard, Peter; Rønn, Regin; Christensen, Søren

    2001-01-01

    in soils amended with the large pieces on nine out of 10 occasions. Microbial biomass measured as SIR was significantly higher in soils with maize than in those amended with barley, but no effect of particle size was observed (three-way ANOVA, P... material, but significantly higher numbers were found in soil with finely-ground maize than in soil with large pieces (two-way ANOVA, P... barley (three-way ANOVA, P

  19. A multi-scale study of Orthoptera species richness and human population size controlling for sampling effort.

    Science.gov (United States)

    Cantarello, Elena; Steck, Claude E; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco

    2010-03-01

    Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy's regions (average area 15,000 km²) and provinces (2,900 km²). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.

  20. A multi-scale study of Orthoptera species richness and human population size controlling for sampling effort

    Science.gov (United States)

    Cantarello, Elena; Steck, Claude E.; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco

    2010-03-01

    Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy’s regions (average area 15,000 km2) and provinces (2,900 km2). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.

  1. Early detection of nonnative alleles in fish populations: When sample size actually matters

    Science.gov (United States)

    Croce, Patrick Della; Poole, Geoffrey C.; Payne, Robert A.; Gresswell, Bob

    2017-01-01

    Reliable detection of nonnative alleles is crucial for the conservation of sensitive native fish populations at risk of introgression. Typically, nonnative alleles in a population are detected through the analysis of genetic markers in a sample of individuals. Here we show that common assumptions associated with such analyses yield substantial overestimates of the likelihood of detecting nonnative alleles. We present a revised equation to estimate the likelihood of detecting nonnative alleles in a population with a given level of admixture. The new equation incorporates the effects of the genotypic structure of the sampled population and shows that conventional methods overestimate the likelihood of detection, especially when nonnative or F-1 hybrid individuals are present. Under such circumstances—which are typical of early stages of introgression and therefore most important for conservation efforts—our results show that improved detection of nonnative alleles arises primarily from increasing the number of individuals sampled rather than increasing the number of genetic markers analyzed. Using the revised equation, we describe a new approach to determining the number of individuals to sample and the number of diagnostic markers to analyze when attempting to monitor the arrival of nonnative alleles in native populations.
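
    The conventional calculation that the study critiques treats every allele copy as an independent draw with nonnative frequency equal to the admixture proportion. The sketch below implements only that naive formula; the paper's revised equation, which conditions on the genotypic structure of the sampled population, is not reproduced here.

```python
def naive_detection_probability(admixture, n_individuals, n_markers):
    """Conventional probability of observing at least one nonnative allele.

    Assumes each of the 2 * n * m allele copies is an independent draw with
    nonnative frequency equal to the admixture proportion -- the assumption
    shown above to overestimate detection when nonnative or F1 fish are present.
    """
    copies = 2 * n_individuals * n_markers
    return 1.0 - (1.0 - admixture) ** copies

# e.g. 2% admixture, 30 sampled fish, 10 diagnostic markers
print(naive_detection_probability(0.02, 30, 10))
```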

  2. Children's Use of Sample Size and Diversity Information within Basic-Level Categories.

    Science.gov (United States)

    Gutheil, Grant; Gelman, Susan A.

    1997-01-01

    Three studies examined the ability of 8- and 9-year-olds and young adults to use sample monotonicity and diversity information according to the similarity-coverage model of category-based induction. Found that children's difficulty with this information was independent of category level, and may be based on preferences for other strategies…

  3. Joint risk of interbasin water transfer and impact of the window size of sampling low flows under environmental change

    Science.gov (United States)

    Tu, Xinjun; Du, Xiaoxia; Singh, Vijay P.; Chen, Xiaohong; Du, Yiliang; Li, Kun

    2017-11-01

    Constructing a joint distribution of low flows between the donor and recipient basins and analyzing their joint risk are commonly required for implementing interbasin water transfer. In this study, daily streamflow data of bi-basin low flows were sampled at window sizes from 3 to 183 days using the annual minimum method. The stationarity of low flows was tested by a change point analysis, and non-stationary low flows were reconstructed using the moving mean method. Three bivariate Archimedean copulas and five common univariate distributions were applied to fit the joint and marginal distributions of bi-basin low flows. Then, by considering the window size of sampling low flows under environmental change, the change in the joint risk of interbasin water transfer was investigated. Results showed that the non-stationarity of low flows in the recipient basin at all window sizes was significant due to the regulation of water reservoirs. The general extreme value (GEV) distribution was found to fit the marginal distributions of bi-basin low flows. All three Archimedean copulas satisfactorily fitted the joint distribution of bi-basin low flows, and the Frank copula was found to fit comparatively best. The moving mean method differentiated the location parameter of the GEV distribution, but not the scale and shape parameters or the copula parameters. Due to environmental change, in particular the regulation of water reservoirs in the recipient basin, the decrease in the joint synchronous risk of bi-basin water shortage was slight, but the decrease in the synchronous assurance of water transfer from the donor was remarkable. With the enlargement of the window size of sampling low flows, both the joint synchronous risk of bi-basin water shortage and the joint synchronous assurance of water transfer from the donor basin when there was a water shortage in the recipient basin exhibited a decreasing trend, but with slight fluctuation, in
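
    The joint-risk calculation can be sketched generically: fit GEV marginals to the annual-minimum low flows of the two basins and evaluate a Frank copula at the relevant thresholds. The copula parameter is taken as given here (estimating it from Kendall's tau requires a Debye-function inversion that is omitted), and none of the numbers or names come from the study.

```python
import numpy as np
from scipy.stats import genextreme

def frank_cdf(u, v, theta):
    """Bivariate Frank copula C(u, v) for theta != 0."""
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

def joint_shortage_risk(donor_flows, recipient_flows,
                        donor_thresh, recipient_thresh, theta):
    """P(donor low flow <= donor_thresh and recipient low flow <= recipient_thresh)."""
    donor_params = genextreme.fit(donor_flows)
    recipient_params = genextreme.fit(recipient_flows)
    u = genextreme.cdf(donor_thresh, *donor_params)
    v = genextreme.cdf(recipient_thresh, *recipient_params)
    return frank_cdf(u, v, theta)
```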

  4. Size-dependent ultrafast ionization dynamics of nanoscale samples in intense femtosecond x-ray free-electron-laser pulses.

    Science.gov (United States)

    Schorb, Sebastian; Rupp, Daniela; Swiggers, Michelle L; Coffee, Ryan N; Messerschmidt, Marc; Williams, Garth; Bozek, John D; Wada, Shin-Ichi; Kornilov, Oleg; Möller, Thomas; Bostedt, Christoph

    2012-06-08

    All matter exposed to intense femtosecond x-ray pulses from the Linac Coherent Light Source free-electron laser is strongly ionized on time scales competing with the inner-shell vacancy lifetimes. We show that for nanoscale objects the environment, i.e., nanoparticle size, is an important parameter for the time-dependent ionization dynamics. The Auger lifetimes of large Ar clusters are found to be increased compared to small clusters and isolated atoms, due to delocalization of the valence electrons in the x-ray-induced nanoplasma. As a consequence, large nanometer-sized samples absorb intense femtosecond x-ray pulses less efficiently than small ones.

  5. Quantification and size characterisation of silver nanoparticles in environmental aqueous samples and consumer products by single particle-ICPMS.

    Science.gov (United States)

    Aznar, Ramón; Barahona, Francisco; Geiss, Otmar; Ponti, Jessica; José Luis, Tadeo; Barrero-Moreno, Josefa

    2017-12-01

    Single particle-inductively coupled plasma mass spectrometry (SP-ICPMS) is a promising technique able to generate the number-based particle size distribution (PSD) of nanoparticles (NPs) in aqueous suspensions. However, SP-ICPMS analysis is not yet consolidated as a routine technique and is not typically applied to real test samples with unknown composition. This work presents a methodology to detect, quantify and characterise the number-based PSD of Ag-NPs in different environmental aqueous samples (drinking and lake waters), aqueous samples derived from migration tests and consumer products using SP-ICPMS. The procedure is built from a pragmatic view and involves the analysis of serial dilutions of the original sample until no variation in the measured size values is observed while keeping particle counts proportional to the dilution applied. After evaluation of the analytical figures of merit, the SP-ICPMS method exhibited excellent linearity (r² > 0.999) in the range (1-25) × 10⁴ particles mL⁻¹ for 30, 50 and 80 nm nominal size Ag-NPs standards. The precision in terms of repeatability was studied according to the RSDs of the measured size and particle number concentration values, and a t-test (p = 95%) at the two intermediate concentration levels was applied to determine the bias of SP-ICPMS size values compared to reference values. The method showed good repeatability and an overall acceptable bias in the studied concentration range. The experimental minimum detectable size for Ag-NPs ranged between 12 and 15 nm. Additionally, results derived from direct SP-ICPMS analysis were compared to the results conducted for fractions collected by asymmetric flow field-flow fractionation and supernatant fractions after centrifugal filtration. The method has been successfully applied to determine the presence of Ag-NPs in: lake water; tap water; tap water filtered by a filter jar; seven different liquid silver-based consumer products; and migration solutions (pure water and

  6. Estimating Population Size for Capercaillie (Tetrao urogallus L.) with Spatial Capture-Recapture Models Based on Genotypes from One Field Sample.

    Directory of Open Access Journals (Sweden)

    Pierre Mollet

    Full Text Available We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130-147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54-0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.

  7. Sample size calculation based on generalized linear models for differential expression analysis in RNA-seq data.

    Science.gov (United States)

    Li, Chung-I; Shyr, Yu

    2016-12-01

    As RNA-seq rapidly develops and costs continually decrease, the quantity and frequency of samples being sequenced will grow exponentially. With proteomic investigations becoming more multivariate and quantitative, determining a study's optimal sample size is now a vital step in experimental design. Current methods for calculating a study's required sample size are mostly based on the hypothesis testing framework, which assumes each gene count can be modeled through Poisson or negative binomial distributions; however, these methods are limited when it comes to accommodating covariates. To address this limitation, we propose an estimating procedure based on the generalized linear model. This easy-to-use method constructs a representative exemplary dataset and estimates the conditional power, all without requiring complicated mathematical approximations or formulas. Even more attractive, the downstream analysis can be performed with current R/Bioconductor packages. To demonstrate the practicability and efficiency of this method, we apply it to three real-world studies, and introduce our on-line calculator developed to determine the optimal sample size for a RNA-seq study.
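
    A generic simulation-based version of the idea, rather than the authors' procedure, can be put together with statsmodels: simulate negative binomial counts for a two-group comparison, fit the GLM, and count rejections to estimate power for a candidate sample size. The fold change, dispersion and other parameters below are placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def nb_counts(mean, dispersion, size):
    # numpy parameterisation: n = 1/dispersion, p = n / (n + mean)
    n = 1.0 / dispersion
    return rng.negative_binomial(n, n / (n + mean), size)

def glm_power(n_per_group, fold_change=2.0, base_mean=100.0, dispersion=0.2,
              alpha=0.05, n_sim=500):
    """Estimated power of a Wald test on the group coefficient of an NB GLM."""
    hits = 0
    for _ in range(n_sim):
        y = np.concatenate([nb_counts(base_mean, dispersion, n_per_group),
                            nb_counts(base_mean * fold_change, dispersion, n_per_group)])
        X = sm.add_constant(np.repeat([0, 1], n_per_group))
        fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=dispersion)).fit()
        hits += fit.pvalues[1] < alpha
    return hits / n_sim

for n in (3, 5, 10):
    print(n, glm_power(n))
```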

  8. A spectroscopic sample of massive, quiescent z ∼ 2 galaxies: implications for the evolution of the mass-size relation

    Energy Technology Data Exchange (ETDEWEB)

    Krogager, J.-K.; Zirm, A. W.; Toft, S.; Man, A. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen O (Denmark); Brammer, G. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21210 (United States)

    2014-12-10

    We present deep, near-infrared Hubble Space Telescope/Wide Field Camera 3 grism spectroscopy and imaging for a sample of 14 galaxies at z ≈ 2 selected from a mass-complete photometric catalog in the COSMOS field. By combining the grism observations with photometry in 30 bands, we derive accurate constraints on their redshifts, stellar masses, ages, dust extinction, and formation redshifts. We show that the slope and scatter of the z ∼ 2 mass-size relation of quiescent galaxies are consistent with the local relation, and confirm previous findings that the sizes for a given mass are smaller by a factor of two to three. Finally, we show that the observed evolution of the mass-size relation of quiescent galaxies between z = 2 and 0 can be explained by the quenching of increasingly larger star-forming galaxies at a rate dictated by the increase in the number density of quiescent galaxies with decreasing redshift. However, we find that the scatter in the mass-size relation should increase in the quenching-driven scenario, in contrast to what is seen in the data. This suggests that merging is not needed to explain the evolution of the median mass-size relation of massive galaxies, but may still be required to tighten its scatter, and to explain the size growth of individual z = 2 quiescent galaxies.

  9. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1)- and √x-transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to

  10. Deletion of Irs2 causes reduced kidney size in mice: role for inhibition of GSK3beta?

    LENUS (Irish Health Repository)

    Carew, Rosemarie M.

    2010-07-06

    Abstract Background Male Irs2-/- mice develop fatal type 2 diabetes at 13-14 weeks. Defects in neuronal proliferation, pituitary development and photoreceptor cell survival manifest in Irs2-/- mice. We identify retarded renal growth in male and female Irs2-/- mice, independent of diabetes. Results Kidney size and kidney:body weight ratio were reduced by approximately 20% in Irs2-/- mice at postnatal day 5 and this was maintained in maturity. Reduced glomerular number but similar glomerular density was detected in Irs2-/- kidney compared to wild-type, suggesting intact global kidney structure. Analysis of insulin signalling revealed renal-specific upregulation of PKBβ/Akt2, hyperphosphorylation of GSK3β and concomitant accumulation of β-catenin in Irs2-/- kidney. Despite this, no significant upregulation of β-catenin targets was detected. Kidney-specific increases in Yes-associated protein (YAP), a key driver of organ size, were also detected in the absence of Irs2. YAP phosphorylation on its inhibitory site Ser127 was also increased, with no change in the levels of YAP-regulated genes, suggesting that overall YAP activity was not increased in Irs2-/- kidney. Conclusions In summary, deletion of Irs2 causes reduced kidney size early in mouse development. Compensatory mechanisms such as increased β-catenin and YAP levels failed to overcome this developmental defect. These data point to Irs2 as an important novel mediator of kidney size.

  11. High-intensity training reduces intermittent hypoxia-induced ER stress and myocardial infarct size.

    Science.gov (United States)

    Bourdier, Guillaume; Flore, Patrice; Sanchez, Hervé; Pepin, Jean-Louis; Belaidi, Elise; Arnaud, Claire

    2016-01-15

    Chronic intermittent hypoxia (IH) is described as the major detrimental factor leading to cardiovascular morbimortality in obstructive sleep apnea (OSA) patients. OSA patients exhibit increased infarct size after a myocardial event, and previous animal studies have shown that chronic IH could be the main mechanism. Endoplasmic reticulum (ER) stress plays a major role in the pathophysiology of cardiovascular disease. High-intensity training (HIT) exerts beneficial effects on the cardiovascular system. Thus, we hypothesized that HIT could prevent IH-induced ER stress and the increase in infarct size. Male Wistar rats were exposed to 21 days of IH (21-5% fraction of inspired O2, 60-s cycle, 8 h/day) or normoxia. After 1 wk of IH alone, rats were submitted daily to both IH and HIT (2 × 24 min, 15-30 m/min). Rat hearts were either rapidly frozen to evaluate ER stress by Western blot analysis or submitted to an ischemia-reperfusion protocol ex vivo (30 min of global ischemia/120 min of reperfusion). IH induced cardiac proapoptotic ER stress, characterized by increased expression of glucose-regulated protein kinase 78, phosphorylated protein kinase-like ER kinase, activating transcription factor 4, and C/EBP homologous protein. IH-induced myocardial apoptosis was confirmed by increased expression of cleaved caspase-3. These IH-associated proapoptotic alterations were associated with a significant increase in infarct size (35.4 ± 3.2% vs. 22.7 ± 1.7% of ventricles in IH + sedentary and normoxia + sedentary groups, respectively, P < 0.05). HIT prevented both the IH-induced proapoptotic ER stress and the increase in myocardial infarct size (28.8 ± 3.9% and 21.0 ± 5.1% in IH + HIT and normoxia + HIT groups, respectively, P = 0.28). In conclusion, these findings suggest that HIT could represent a preventive strategy to limit IH-induced myocardial ischemia-reperfusion damage in OSA patients. Copyright © 2016 the American Physiological Society.

  12. Selection for number of live piglets at five-days of age increased litter size and reduced mortality

    DEFF Research Database (Denmark)

    Nielsen, Bjarne; Madsen, Per; Henryon, Mark

    2012-01-01

    The heritabilities of maternal effects on litter size were 0.079 and 0.095 in Landrace and Yorkshire, respectively. The heritabilities of maternal effects on piglet-mortality rates were 0.069 and 0.082 in Landrace and Yorkshire. The genetic correlations between litter size and mortality rate were unfavourable, and the estimates ... Genetic gain has reduced the piglet mortality rate by 4 %-points in Landrace and Yorkshire from 2004 to 2010. The genetic gain was confirmed by decreased phenotypic annual mortality rates in the breeding and multiplier herds.

  13. Influence of reducing conditions on metallic elements released from various contaminated soil samples.

    Science.gov (United States)

    Pareuil, Priscilla; Pénilla, Sonia; Ozkan, Nursen; Bordas, François; Bollinger, Jean-Claude

    2008-10-15

    The redox conditions of soil may have significant consequences for the mobility of metallic elements (ME), but unlike pH, very few studies have investigated this parameter. A procedure was established to study the solubilization of ME from soil samples under various reducing conditions using a batch method and sodium ascorbate solutions. The change in redox potential from +410 to +10 mV was studied in four contaminated soil samples (designated A-D) of different origins and compositions. The results showed that ME mobilization greatly increased with decreasing redox potential within a limited and very sensitive range. Depending on the soil sample studied, different sensitive ranges of potentials were obtained (A, 220-345 mV; B, 280-365 mV; C, 260-360 mV; and D, 240-380 mV), and the induced percentages of ME mobilization varied (i.e., maximal values for Zn: A, 45%; B, 59%; C, 53%; and D, 58%). The results could be explained by the combined effect of potential and pH decrease on ME-carrying phases, in particular Fe and Mn (oxy)hydroxides.

  14. Herniated disks unchanged over time: Size reduced after oxygen-ozone therapy.

    Science.gov (United States)

    Bonetti, Matteo; Zambello, Alessio; Leonardi, Marco; Princiotta, Ciro

    2016-08-01

    The spontaneous regression of disk herniation secondary to dehydration is a much-debated topic in medicine. Some physicians wonder whether surgical removal of the extruded nucleus pulposus is really necessary when the spontaneous disappearance of a herniated lumbar disk is a well-known phenomenon. Unfortunately, without spontaneous regression, chronic pain leads to progressive disability for which surgery seems to be the only solution. In recent years, several studies have demonstrated the utility of oxygen-ozone therapy in the treatment of disk herniation, resulting in disk shrinkage. This retrospective study evaluates the outcomes of a series of patients with a history of herniated disks neuroradiologically unchanged in size for over two years, treated with oxygen-ozone therapy at our center over the last 15 years. We treated 96 patients, 84 (87.5%) presenting low back pain complicated or not by chronic sciatica. No drug therapy had yielded significant benefits. A number of specialists had been consulted in two or more years resulting in several neuroradiological scans prior to the decision to undertake oxygen-ozone therapy. Our study documents how ozone therapy for slipped disks "unchanged over time" solved the problem, with disk disruption or a significant reduction in the size of the prolapsed disk material extruded into the spinal canal. © The Author(s) 2016.

  15. Herniated disks unchanged over time: Size reduced after oxygen–ozone therapy

    Science.gov (United States)

    Bonetti, Matteo; Zambello, Alessio; Princiotta, Ciro

    2016-01-01

    The spontaneous regression of disk herniation secondary to dehydration is a much-debated topic in medicine. Some physicians wonder whether surgical removal of the extruded nucleus pulposus is really necessary when the spontaneous disappearance of a herniated lumbar disk is a well-known phenomenon. Unfortunately, without spontaneous regression, chronic pain leads to progressive disability for which surgery seems to be the only solution. In recent years, several studies have demonstrated the utility of oxygen–ozone therapy in the treatment of disk herniation, resulting in disk shrinkage. This retrospective study evaluates the outcomes of a series of patients with a history of herniated disks neuroradiologically unchanged in size for over two years, treated with oxygen–ozone therapy at our center over the last 15 years. We treated 96 patients, 84 (87.5%) presenting low back pain complicated or not by chronic sciatica. No drug therapy had yielded significant benefits. A number of specialists had been consulted in two or more years resulting in several neuroradiological scans prior to the decision to undertake oxygen–ozone therapy. Our study documents how ozone therapy for slipped disks “unchanged over time” solved the problem, with disk disruption or a significant reduction in the size of the prolapsed disk material extruded into the spinal canal. PMID:27066816

  16. Analysis of reduced monoclonal antibodies using size exclusion chromatography coupled with mass spectrometry

    Science.gov (United States)

    Liu, Hongcheng; Gaza-Bulseco, Georgeen; Chumsae, Chris

    2009-12-01

    Size-exclusion chromatography (SEC) has been widely used to detect antibody aggregates, monomer, and fragments. SEC coupled to mass spectrometry has been reported to measure the molecular weights of antibodies, antibody conjugates, and antibody light chain and heavy chain. In this study, separation of antibody light chain and heavy chain by SEC and direct coupling to a mass spectrometer was further studied. It was determined that employing mobile phases containing acetonitrile, trifluoroacetic acid, and formic acid allowed the separation of antibody light chain and heavy chain after reduction by SEC. In addition, this mobile phase allowed the coupling of SEC to a mass spectrometer to obtain a direct molecular weight measurement. The application of the SEC-MS method was demonstrated by the separation of the light chain and the heavy chain of multiple recombinant monoclonal antibodies. In addition, separation of a thioether-linked light chain and heavy chain from the free light chain and the free heavy chain of a recombinant monoclonal antibody after reduction was also achieved. This optimized method provided a separation of antibody light chain and heavy chain based on size and allowed a direct measurement of molecular weights by mass spectrometry. In addition, this method may help to identify peaks eluting from the SEC column directly.

  17. [Sample size for the estimation of F-wave parameters in healthy volunteers and amyotrophic lateral sclerosis patients].

    Science.gov (United States)

    Fang, J; Cui, L Y; Liu, M S; Guan, Y Z; Ding, Q Y; Du, H; Li, B H; Wu, S

    2017-03-07

    Objective: The study aimed to investigate whether the sample sizes required for F-wave studies differ according to the nerve examined, the F-wave parameter measured, and whether the subjects are amyotrophic lateral sclerosis (ALS) patients or healthy controls. Methods: The F-waves in the median, ulnar, tibial, and deep peroneal nerves of 55 ALS patients and 52 healthy subjects were studied to assess the effect of sample size on the accuracy of measurements of the following F-wave parameters: minimum latency, maximum latency, mean latency, F-wave persistence, F-wave chronodispersion, and mean and maximum F-wave amplitude. One hundred stimuli were used in the F-wave study. The values obtained from 100 stimuli were considered "true" values and were compared with the corresponding values from smaller samples of 20, 40, 60 and 80 stimuli. F-wave parameters obtained from different sample sizes were compared between the ALS patients and the normal controls. Results: Significant differences were not detected with samples above 60 stimuli for chronodispersion in all four nerves in normal participants. Significant differences were not detected with samples above 40 stimuli for maximum F-wave amplitude in the median, ulnar and tibial nerves in normal participants. When comparing ALS patients and normal controls, significant differences were detected in maximum F-wave latency (median nerve, Z=-3.560), F-wave latency (median nerve, Z=-3.243), F-wave chronodispersion (Z=-3.152), F-wave persistence in the median nerve (Z=6.139), F-wave amplitude in the tibial nerve (t=2.981), F-wave amplitude in the ulnar nerve (Z=-2.134), F-wave persistence in the tibial nerve (Z=2.119), F-wave amplitude in the ulnar nerve (Z=-2.552), and F-wave amplitude in the peroneal nerve (t=2.693). Conclusions: The sample size required for an F-wave study differed according to the nerve examined, the F-wave parameter measured, and whether the subjects were ALS patients or healthy.

  18. Sex determination by tooth size in a sample of Greek population.

    Science.gov (United States)

    Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C

    2014-08-01

    Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had a clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations and this might explain the lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that a considerably higher percentage of females than males was correctly classified. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as reliable a method as one would have expected for determining the sex of human remains in a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.

  19. Applicability of submerged jet model to describe the liquid sample load into measuring chamber of micron and submillimeter sizes

    Science.gov (United States)

    Bulyanitsa, A. L.; Belousov, K. I.; Evstrapov, A. A.

    2017-11-01

    The load of a liquid sample into a measuring chamber is one of the stages of substance analysis in modern devices. Fluid flow is effectively calculated by numerical simulation using application packages, for example, COMSOL MULTIPHYSICS. At the same time, it is often desirable to have an approximate analytical solution. The applicability of a submerged jet model for simulating the liquid sample load is considered for chambers with sizes from hundreds of micrometers to several millimeters. The paper examines the extent to which corrections for jet cutting, and the replacement of the jet with an energy-equivalent one, provide acceptable accuracy for evaluating the dynamics of the loading process.

  20. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

    Science.gov (United States)

    Fraley, R. Chris; Vazire, Simine

    2014-01-01

    The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false-positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
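To make the headline numbers concrete, here is a hedged back-of-the-envelope check (the "typical" effect size r = 0.20 is an assumed illustrative value, not a figure quoted from the paper): with N = 104, a simple Fisher-z power approximation lands near the reported ~50% power.

```python
# Rough illustration of the abstract's point: with the reported average sample
# size of N = 104, power to detect a "typical" effect is only about 50%.
# The typical effect size r = 0.20 is an assumption for illustration; the
# Fisher-z approximation is used for the power of a two-sided correlation test.
from math import atanh, sqrt
from scipy.stats import norm

def power_correlation(r, n, alpha=0.05):
    """Approximate power of a two-sided test that a correlation r is nonzero."""
    z_effect = atanh(r) * sqrt(n - 3)          # mean of the Fisher-z statistic
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - z_effect) + norm.cdf(-z_crit - z_effect)

print(f"power at N=104, r=0.20: {power_correlation(0.20, 104):.2f}")  # ~0.5
```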

  1. The influence of sampling unit size and spatial arrangement patterns on neighborhood-based spatial structure analyses of forest stands

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.; Zhang, G.; Hui, G.; Li, Y.; Hu, Y.; Zhao, Z.

    2016-07-01

    Aim of study: Neighborhood-based stand spatial structure parameters can quantify and characterize forest spatial structure effectively. How these neighborhood-based structure parameters are influenced by the selection of different numbers of nearest-neighbor trees is unclear, and there is some disagreement in the literature regarding the appropriate number of nearest-neighbor trees to sample around reference trees. Understanding how to efficiently characterize forest structure is critical for forest management. Area of study: Multi-species uneven-aged forests of Northern China. Material and methods: We simulated stands with different spatial structural characteristics and systematically compared their structure parameters when two to eight neighboring trees were selected. Main results: Results showed that values of uniform angle index calculated in the same stand were different with different sizes of structure unit. When tree species and sizes were completely randomly interspersed, different numbers of neighbors had little influence on mingling and dominance indices. Changes of mingling or dominance indices caused by different numbers of neighbors occurred when the tree species or size classes were not randomly interspersed and their changing characteristics can be detected according to the spatial arrangement patterns of tree species and sizes. Research highlights: The number of neighboring trees selected for analyzing stand spatial structure parameters should be fixed. We proposed that the four-tree structure unit is the best compromise between sampling accuracy and costs for practical forest management. (Author)

  2. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
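A minimal sketch of the asymptotic power machinery the abstract relies on, assuming the standard non-central chi-square framework; the non-centrality value and degrees of freedom below are placeholders, and the paper's derivation of the non-centrality parameter from genotype frequencies, prevalence, and misclassification rates is not reproduced here.

```python
# Hedged sketch: asymptotic power of a Pearson chi-square association test from
# a non-centrality parameter lambda_.  How lambda_ is built from genotype
# frequencies, prevalence, and misclassification probabilities follows the
# paper's derivation and is not reproduced; the numbers here are placeholders.
from scipy.stats import chi2, ncx2

def chi2_power(lambda_, df=1, alpha=0.05):
    crit = chi2.ppf(1 - alpha, df)       # rejection threshold under H0
    return ncx2.sf(crit, df, lambda_)    # P(reject) under the alternative

print(f"power at lambda=7.85, df=1: {chi2_power(7.85):.2f}")  # ~0.80
```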

  3. Determining optimal sample sizes for multistage adaptive randomized clinical trials from an industry perspective using value of information methods.

    Science.gov (United States)

    Chen, Maggie H; Willan, Andrew R

    2013-02-01

    Most often, sample size determinations for randomized clinical trials are based on frequentist approaches that depend on somewhat arbitrarily chosen factors, such as type I and II error probabilities and the smallest clinically important difference. As an alternative, many authors have proposed decision-theoretic (full Bayesian) approaches, often referred to as value of information methods that attempt to determine the sample size that maximizes the difference between the trial's expected utility and its expected cost, referred to as the expected net gain. Taking an industry perspective, Willan proposes a solution in which the trial's utility is the increase in expected profit. Furthermore, Willan and Kowgier, taking a societal perspective, show that multistage designs can increase expected net gain. The purpose of this article is to determine the optimal sample size using value of information methods for industry-based, multistage adaptive randomized clinical trials, and to demonstrate the increase in expected net gain realized. At the end of each stage, the trial's sponsor must decide between three actions: continue to the next stage, stop the trial and seek regulatory approval, or stop the trial and abandon the drug. A model for expected total profit is proposed that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, and the relationship between trial results and probability of regulatory approval. The proposed method is extended to include multistage designs with a solution provided for a two-stage design. An example is given. Significant increases in the expected net gain are realized by using multistage designs. The complexity of the solutions increases with the number of stages, although far simpler near-optimal solutions exist. The method relies on the central limit theorem, assuming that the sample size is sufficiently large so that the relevant statistics are normally distributed. From a value of
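A toy sketch of the expected-net-gain idea described above (all profit, cost, and effect-size numbers are hypothetical placeholders, and the single-stage model below is far simpler than the authors' multistage formulation): choose the sample size that maximizes expected utility minus expected cost.

```python
# Minimal value-of-information sketch: pick the per-arm sample size n that
# maximizes expected net gain = expected utility of the trial - expected cost.
# The utility and cost functions below are hypothetical stand-ins for the
# paper's profit model (per-patient profit, incidence, approval probability...).
import numpy as np
from scipy.stats import norm

def expected_net_gain(n, effect=0.3, sd=1.0, alpha=0.05,
                      market_value=5e7, fixed_cost=1e6, cost_per_patient=1e4):
    # probability the two-arm trial is "successful" (significant at level alpha)
    z = effect / (sd * np.sqrt(2.0 / n))
    p_success = norm.sf(norm.ppf(1 - alpha / 2) - z)
    utility = p_success * market_value            # expected commercial value
    cost = fixed_cost + 2 * n * cost_per_patient  # two arms of n patients each
    return utility - cost

grid = np.arange(10, 2000, 10)
best = grid[np.argmax([expected_net_gain(n) for n in grid])]
print("optimal n per arm (toy model):", best)
```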

  4. RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes

    Directory of Open Access Journals (Sweden)

    Danny J. Kelly

    2005-01-01

    Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms or low (100 nanograms amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real time, polymerase chain reaction assays. The correlation of low RNA derived gene signals to gene signals obtained from standard RNA was poor for less to moderately abundant genes. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference sample based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.

  5. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
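The following numpy sketch illustrates the general shape of a diagonal Hotelling-type statistic with shrunken variances; the shrink-toward-the-mean rule and the plain sum-of-squares statistic are illustrative simplifications, not the specific shrinkage estimator or null approximations proposed in the paper.

```python
# Illustrative one-sample diagonal Hotelling-type statistic with shrunken
# variance estimates (placeholder shrinkage toward the pooled mean variance;
# the paper's actual shrinkage estimator and null distribution differ).
import numpy as np

def diagonal_hotelling_one_sample(X, mu0, shrink=0.2):
    n, p = X.shape
    xbar = X.mean(axis=0)
    s2 = X.var(axis=0, ddof=1)                          # per-gene variances
    s2_shrunk = (1 - shrink) * s2 + shrink * s2.mean()  # simple shrinkage
    # sum of squared standardized mean differences (diagonal covariance)
    return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 200))        # "large p, small n" toy data
print(diagonal_hotelling_one_sample(X, mu0=np.zeros(200)))
```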

  6. Distribution of human waste samples in relation to sizing waste processing in space

    Science.gov (United States)

    Parker, Dick; Gallagher, S. K.

    1992-01-01

    Human waste processing for closed ecological life support systems (CELSS) in space requires that there be an accurate knowledge of the quantity of wastes produced. Because initial CELSS will be handling relatively few individuals, it is important to know the variation that exists in the production of wastes rather than relying upon mean values that could result in undersizing equipment for a specific crew. On the other hand, because of the costs of orbiting equipment, it is important to design the equipment with a minimum of excess capacity because of the weight that extra capacity represents. A considerable quantity of information that had been independently gathered on waste production was examined in order to obtain estimates of equipment sizing requirements for handling waste loads from crews of 2 to 20 individuals. The recommended design for a crew of 8 should hold 34.5 liters per day (4315 ml/person/day) for urine and stool water and a little more than 1.25 kg per day (154 g/person/day) of human waste solids and sanitary supplies.

  7. One size does not fit all: evaluating an intervention to reduce antibiotic prescribing for acute bronchitis.

    Science.gov (United States)

    Ackerman, Sara L; Gonzales, Ralph; Stahl, Melissa S; Metlay, Joshua P

    2013-11-04

    Overuse of antibiotics for upper respiratory tract infections (URIs) and acute bronchitis is a persistent and vexing problem. In the U.S., more than half of all patients with upper respiratory tract infections and acute bronchitis are treated with antibiotics annually, despite the fact that most cases are viral in etiology and are not responsive to antibiotics. Interventions aiming to reduce unnecessary antibiotic prescribing have had mixed results, and successes have been modest. The objective of this evaluation is to use mixed methods to understand why a multi-level intervention to reduce antibiotic prescribing for acute bronchitis among primary care providers resulted in measurable improvement in only one third of participating clinicians. Clinician perspectives on print-based and electronic intervention strategies, and antibiotic prescribing more generally, were elicited through structured telephone surveys at high and low performing sites after the first year of intervention at the Geisinger Health System in Pennsylvania (n = 29). Compared with a survey on antibiotic use conducted 10 years earlier, clinicians demonstrated greater awareness of antibiotic resistance and how it is impacted by individual prescribing decisions-including their own. However, persistent perceived barriers to reducing prescribing included patient expectations, time pressure, and diagnostic uncertainty, and these factors were reported as differentially undermining specific intervention components' effectiveness. An exam room poster depicting a diagnostic algorithm was the most popular strategy. Future efforts to reduce antibiotic prescribing should address multi-level barriers identified by clinicians and tailor strategies to differences at individual clinician and group practice levels, focusing in particular on changing how patients and providers make decisions together about antibiotic use.

  8. Face inversion and acquired prosopagnosia reduce the size of the perceptual field of view.

    Science.gov (United States)

    Van Belle, Goedele; Lefèvre, Philippe; Rossion, Bruno

    2015-03-01

    Using a gaze-contingent morphing approach, we asked human observers to choose one of two faces that best matched the identity of a target face: one face corresponded to the reference face's fixated part only (e.g., one eye), the other corresponded to the unfixated area of the reference face. The face corresponding to the fixated part was selected significantly more frequently in the inverted than in the upright orientation. This observation provides evidence that face inversion reduces an observer's perceptual field of view, even when both upright and inverted faces are displayed at full view and there is no performance difference between these conditions. It rules out an account of the drop of performance for inverted faces--one of the most robust effects in experimental psychology--in terms of a mere difference in local processing efficiency. A brain-damaged patient with pure prosopagnosia, viewing only upright faces, systematically selected the face corresponding to the fixated part, as if her perceptual field was reduced relative to normal observers. Altogether, these observations indicate that the absence of visual knowledge reduces the perceptual field of view, supporting an indirect view of visual perception. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Full Text Available Evaluating measurement equivalence (also known as differential item functioning, DIF) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  10. Dealing with large sample sizes: comparison of a new one spot dot blot method to western blot.

    Science.gov (United States)

    Putra, Sulistyo Emantoko Dwi; Tsuprykov, Oleg; Von Websky, Karoline; Ritter, Teresa; Reichetzeder, Christoph; Hocher, Berthold

    2014-01-01

    Western blot is the gold standard method to determine individual protein expression levels. However, western blot is technically difficult to perform in large sample sizes because it is a time consuming and labor intensive process. Dot blot is often used instead when dealing with large sample sizes, but the main disadvantage of the existing dot blot techniques, is the absence of signal normalization to a housekeeping protein. In this study we established a one dot two development signals (ODTDS) dot blot method employing two different signal development systems. The first signal from the protein of interest was detected by horseradish peroxidase (HRP). The second signal, detecting the housekeeping protein, was obtained by using alkaline phosphatase (AP). Inter-assay results variations within ODTDS dot blot and western blot and intra-assay variations between both methods were low (1.04-5.71%) as assessed by coefficient of variation. ODTDS dot blot technique can be used instead of western blot when dealing with large sample sizes without a reduction in results accuracy.

  11. A reduced estimate of the number of kilometre-sized near-Earth asteroids.

    Science.gov (United States)

    Rabinowitz, D; Helin, E; Lawrence, K; Pravdo, S

    2000-01-13

    Near-Earth asteroids are small bodies whose orbits approach that of the Earth. The number of such asteroids with diameters > 1 km has been estimated to be in the range 1,000-2,000, which translates to an approximately 1% chance of a catastrophic collision with the Earth in the next millennium. These numbers are, however, poorly constrained because of the limitations of previous searches using photographic plates. (One kilometre is below the size of a body whose impact on the Earth would produce global effects.) Here we report an analysis of our survey for near-Earth asteroids that uses improved detection technologies. We find that the total number of asteroids with diameters > 1 km is about half the earlier estimates. At the current rate of discovery of near-Earth asteroids, 90% will probably have been detected within the next 20 years.

  12. Reducing Size, Weight, and Power (SWaP) of Perception Systems in Small Autonomous Aerial Systems

    Science.gov (United States)

    Jones, Kennie H.; Gross, Jason

    2014-01-01

    The objectives are to examine recent trends in the reduction of size, weight, and power (SWaP) requirements of sensor systems for environmental perception and to explore new technology that may overcome limitations in current systems. Improving perception systems to facilitate situation awareness is critical in the move to introduce increasing autonomy in aerial systems. Whether the autonomy is in the current state-of-the-art of increasing automation or is enabling cognitive decisions that facilitate adaptive behavior, collection of environmental information and fusion of that information into knowledge that can direct actuation is imperative to decisions resulting in appropriate behavior. Artificial sensory systems such as cameras, radar, LIDAR, and acoustic sensors have been in use on aircraft for many years but, due to the large size and weight of the airplane and electrical power made available through powerful engines, the SWaP requirements of these sensors was inconsequential. With the proliferation of Remote Piloted Vehicles (RPV), the trend is in significant reduction in SWaP of the vehicles. This requires at least an equivalent reduction in SWaP for the sensory systems. A survey of some currently available sensor systems and changing technology will reveal the trend toward reduction of SWaP of these systems and will predict future reductions. A new technology will be introduced that provides an example of a desirable new trend. A new device replaces multiple conventional sensory devices facilitating synchronization, localization, altimetry, collision avoidance, terrain mapping, and data communication in a single integrated, small form-factor, extremely lightweight, and low power device that it is practical for integration into small autonomous vehicles and can facilitate cooperative behavior. The technology is based on Ultra WideBand (UWB) radio using short pulses of energy rather than continuous sine waves. The characteristics of UWB yield several

  13. Individual thorax geometry reduces position and size differences in reconstructed images of electrical impedance tomography.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-01-01

    Due to the ill-posed nature of the inverse problem, the electrical impedance within the thorax cannot be exactly reconstructed. The aim of our study was to demonstrate that reconstruction with individual thorax geometry improves the quality of EIT (electrical impedance tomography) images. Seven mechanically ventilated patients with acute respiratory distress syndrome were examined by EIT. The thorax contours were determined from routine computed tomography (CT) images based on automatic threshold filtering. EIT raw data were reconstructed offline with (1) back-projection with a circular forward model; (2) the GREIT reconstruction method with a circular forward model; and (3) GREIT with individual thorax geometry. The resulting EIT images were compared to rescaled CT images. The distance between the lung contour and the thorax contour was calculated for each method, and the differences from that in CT were denoted as position differences. Shape difference was defined as the ratio of the thorax (or lung) size in EIT to that in the rescaled CT. Method (3) had the smallest position differences (6.6 ± 2.8, 5.3 ± 3.3, and 2.3 ± 1.4 pixels for methods (1), (2) and (3), respectively; mean ± SD). The thorax and lung sizes in the transformed CT images were 514 ± 73 and 177 ± 39. Shape differences for the thorax were 1.81 ± 0.26, 1.81 ± 0.26, and 1.10 ± 0.12, and those for the lungs were 1.69 ± 0.45, 1.52 ± 0.45, and 1.34 ± 0.35 for the three methods, respectively. The reconstructed images using the GREIT method with individual thorax geometry were more realistic. Improvement of EIT image quality may foster the acceptance of EIT in routine clinical use.

  14. Intermittent exposure to reduced oxygen levels affects prey size selection and consumption in swimming crab Thalamita danae Stimpson.

    Science.gov (United States)

    Shin, P K S; Cheung, P H; Yang, F Y; Cheung, S G

    2005-01-01

    Portunid crabs Thalamita danae (carapace width: 46-56 mm) were exposed to low oxygen level (4.0 mg O2 l(-1)) and hypoxia (1.5 mg O2 l(-1)) for 6 h each day with three size classes (large: 15.0-19.9 mm, medium: 10.0-14.9 mm, small: 5.0-9.9 mm) of mussels Brachidontes variabilis offered as food. Consumption rate, prey size preference, and prey handling including breaking time, handling time, eating time and prey value, were studied during the time the crabs were exposed to reduced oxygen levels and results were compared with the crabs maintained at high oxygen level (8.0 mg O2 l(-1)) throughout the experiment. Consumption of mussels from all size classes was significantly higher at high oxygen level than at reduced oxygen levels. No mussel size preference was observed for crabs exposed to 4.0 or 8.0 mg O2 l(-1) but those crabs exposed to 1.5 mg O2 l(-1) preferred medium mussels. Both breaking time and handling time increased with mussel size but did not vary with oxygen level. Prey value of each mussel consumed (mg dry wt eaten crab(-1) s(-1)) was calculated by dividing the estimated dry weight of the mussel by the observed handling time. Mean prey value varied significantly with mussel size, with values obtained for large mussels being higher than small mussels at 4.0 and 8.0 mg O2 l(-1); the effect of oxygen level, however, was insignificant. In view of portunid crabs as major predators of mussels, results may help explain dominance of mussels in eutrophic harbours in Hong Kong.

  15. Reduced sampling efficiency causes degraded Vernier hyperacuity with normal aging: Vernier acuity in position noise.

    Science.gov (United States)

    Li, Roger W; Brown, Brian; Edwards, Marion H; Ngo, Charlie V; Chat, Sandy W; Levi, Dennis M

    2012-01-01

    Vernier acuity, a form of visual hyperacuity, is amongst the most precise forms of spatial vision. Under optimal conditions Vernier thresholds are much finer than the inter-photoreceptor distance. Achievement of such high precision is based substantially on cortical computations, most likely in the primary visual cortex. Using stimuli with added positional noise, we show that Vernier processing is reduced with advancing age across a wide range of noise levels. Using an ideal observer model, we are able to characterize the mechanisms underlying age-related loss, and show that the reduction in Vernier acuity can be mainly attributed to the reduction in efficiency of sampling, with no significant change in the level of internal position noise, or spatial distortion, in the visual system.

  16. Comparing two psychological interventions in reducing impulsive processes of eating behaviour: effects on self-selected portion size.

    Science.gov (United States)

    van Koningsbruggen, Guido M; Veling, Harm; Stroebe, Wolfgang; Aarts, Henk

    2014-11-01

    Palatable food, such as sweets, contains properties that automatically trigger the impulse to consume it even when people have goals or intentions to refrain from consuming such food. We compared the effectiveness of two interventions in reducing the portion size of palatable food that people select for themselves. Specifically, the use of dieting implementation intentions that reduce behaviour towards palatable food via top-down implementation of a dieting goal was pitted against a stop-signal training that changes the impulse-evoking quality of palatable food from bottom-up. We compared the two interventions using a 2 × 2 factorial design. Participants completed a stop-signal training in which they learned to withhold a behavioural response upon presentation of tempting sweets (vs. control condition) and formed implementation intentions to diet (vs. control condition). Selected portion size was measured in a sweet-shop-like environment (Experiment 1) and through a computerized snack dispenser (Experiment 2). Both interventions reduced the amount of sweets selected in the sweet shop environment (Experiment 1) and the snack dispenser (Experiment 2). On average, participants receiving an intervention selected 36% (Experiment 1) and 51% (Experiment 2) fewer sweets than control participants. In both studies, combining the interventions did not lead to additive effects: Employing one of the interventions appears to successfully eliminate instrumental behaviour towards tempting food, making the other intervention redundant. Both interventions reduce self-selected portion size, which is considered a major contributor to the current obesity epidemic. What is already known on this subject? Exposure to temptations, such as unhealthy palatable food, often frustrates people's attainment of long-term health goals. Current approaches to self-control suggest that this is partly because temptations automatically trigger impulsive or hedonic processes that override the

  17. Endoluminal gingival fibroblast transfer reduces the size of rabbit carotid aneurisms via elastin repair.

    Science.gov (United States)

    Durand, Eric; Fournier, Benjamin; Couty, Ludovic; Lemitre, Mathilde; Achouh, Paul; Julia, Pierre; Trinquart, Ludovic; Fabiani, Jean Noel; Seguier, Sylvie; Gogly, Bruno; Coulomb, Bernard; Lafont, Antoine

    2012-08-01

    Matrix metalloproteinase-9 is considered to play a pivotal role in aneurismal formation. We showed that gingival fibroblasts (GF) in vitro reduced matrix metalloproteinase-9 activity via increased secretion of tissue inhibitor of metalloproteinase 1. We aimed to evaluate in vivo the efficacy of GF transplantation to reduce aneurism development in a rabbit model. Seventy rabbit carotid aneurisms were induced by elastase infusion. Four weeks later, GF, dermal fibroblast, or culture medium (DMEM) were infused into established aneurisms. Viable GF were abundantly detected in the transplanted arteries 3 months after seeding. GF engraftment resulted in a significant reduction of carotid aneurisms (decrease of 23.3% [P<0.001] and 17.6% [P=0.01] of vessel diameter in GF-treated arteries, 1 and 3 months after cell therapy, respectively), whereas vessel diameter of control DMEM and dermal fibroblast-treated arteries increased. GF inhibited matrix metalloproteinase-9 activity by tissue inhibitor of metalloproteinase 1 overexpression and matrix metalloproteinase-9/tissue inhibitor of metalloproteinase 1 complex formation, induced elastin repair, and increased elastin density in the media compared with DMEM-treated arteries (38.2 versus 18.0%; P=0.02). Elastin network GF-induced repair was inhibited by tissue inhibitor of metalloproteinase 1 blocking peptide. Our results demonstrate that GF transplantation results in significant aneurism reduction and elastin repair. This strategy may be attractive because GF are accessible and remain viable within the grafted tissue.

  18. Overexpression of the Arabidopsis gai gene in apple significantly reduces plant size.

    Science.gov (United States)

    Zhu, L H; Li, X Y; Welander, M

    2008-02-01

    Genetic engineering is an attractive method to obtain dwarf plants in order to eliminate the extensive use of growth retardants in horticultural crop production. In this study, we evaluated the potential of using the Arabidopsis gai (gibberellic acid insensitive) gene to dwarf apple trees. The gai gene under 35S promoter was introduced in the apple rootstock A2 and the cultivars Gravenstein and McIntosh through Agrobacterium-mediated transformation. One transgenic clone was recovered for Gravenstein and McIntosh, and several transgenic clones for A2, confirmed by Southern blot analysis. Two weak bands were detected by Southern blot analysis in all the untransformed controls, possibly indicating the existence of the internal GAI gene in apple. Most of the transgenic plants showed reduced growth in vitro. Growth analyses in the greenhouse showed a clear reduction in stem length, internode length and node number for the dwarf clones. The normal phenotype of some transgenic clones appears to be associated with silencing of the introduced gai gene, confirmed by RT-PCR analysis. In general, transgenic clones showed reduced rooting ability, especially for the extremely compact ones.

  19. Can corrective information reduce negative appraisals of intrusive thoughts in a community sample?

    Science.gov (United States)

    Rees, Clare S; Austen, Tomas; Anderson, Rebecca A; Egan, Sarah J

    2014-07-01

    Improving mental health literacy in the general population is important as it is associated with early detection and treatment-seeking for mental health problems. Target areas for mental health literacy programs should be guided by research that tests the impact of improving knowledge of psychological constructs associated with the development of mental health problems. This study investigated the impact of providing corrective information about the nature of intrusive thoughts on their subsequent appraisal in a community sample. In an online, experimental design, 148 community participants completed measures of obsessive-compulsive symptoms and appraisals (Obsessive Compulsive Inventory-Revised [OCI-R]; Intrusions Inventory [III]). Individuals were instructed to read either a brief informational text about the nature of intrusive thoughts or a control text. All participants then completed post-test measurements of appraisals. Intervention effectiveness was analysed using hierarchical multiple regression. Individuals in the intervention group reported significantly lower levels of maladaptive appraisals than those in the control group (α = .05). The results of this study support the efficacy of provision of brief written information in reducing negative appraisals of intrusive thoughts in a community sample. It suggests a possible role for education about intrusive thoughts as a prevention strategy for obsessive-compulsive disorder.

  20. Effects of dislocation density and sample-size on plastic yielding at the nanoscale: a Weibull-like framework.

    Science.gov (United States)

    Rinaldi, Antonio

    2011-11-01

    Micro-compression tests have demonstrated that plastic yielding in nanoscale pillars is the result of the fine interplay between the sample size (chiefly the diameter D) and the density of bulk dislocations ρ. The power-law scaling typical of the nanoscale stems from a source-limited regime, which depends on both these sample parameters. Based on the experimental and theoretical results available in the literature, this paper offers a perspective on the joint effect of D and ρ on the yield stress in any plastic regime, promoting also a schematic graphical map of it. In the sample-size-dependent regime, this dependence is cast mathematically into a first-order Weibull-type theory, in which the power exponent β of the power-law scaling and the modulus m of an approximate (unimodal) Weibull distribution of source strengths can be related by a simple inverse proportionality. As a corollary, the scaling exponent β may not be a universal number, as speculated in the literature. In this context, the discussion opens the alternative possibility of more general (multimodal) source-strength distributions, which could produce more complex and realistic strengthening patterns than the single power law usually assumed. The paper re-examines our own experimental data, as well as results of Bei et al. (2008) on Mo-alloy pillars, especially for the sake of emphasizing the significance of a sudden increase in sample response scatter as a warning signal of an incipient source-limited regime.
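Written out with assumed notation (D for the pillar diameter, m for the Weibull modulus of the source-strength distribution), the scaling relation described above is

$$\sigma_y \propto D^{-\beta}, \qquad \beta \propto \frac{1}{m},$$

so that broader source-strength distributions (smaller m) imply a stronger size effect (larger β), and β need not be a universal constant.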

  1. The design of high-temperature thermal conductivity measurements apparatus for thin sample size

    Directory of Open Access Journals (Sweden)

    Hadi Syamsul

    2017-01-01

    Full Text Available This study presents the design, construction and validation of a thermal conductivity apparatus using steady-state heat-transfer techniques with the capability of testing a material at high temperatures. The design is an improvement on the ASTM D5470 standard, in which meter-bars of equal cross-sectional area are used to extrapolate surface temperature and measure heat transfer across a sample. There were two meter-bars in the apparatus, each fitted with three thermocouples. The apparatus uses a heater with a power of 1,000 watts and cooling water to reach a stable condition. The applied pressure was 3.4 MPa on the 113.09 mm² cross-sectional area of the meter-bar, and thermal grease was used to minimize interfacial thermal contact resistance. To determine its performance, the apparatus was validated by comparing the results with the thermal conductivity obtained with a THB 500 instrument made by LINSEIS. The tests showed that the thermal conductivities of stainless steel and bronze are 15.28 Wm-1K-1 and 38.01 Wm-1K-1, with differences from the THB 500 of −2.55% and 2.49%. Furthermore, this apparatus is capable of measuring the thermal conductivity of a material up to a temperature of 400°C, where the result for the thermal conductivity of stainless steel is 19.21 Wm-1K-1 and the difference was 7.93%.
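A hedged restatement of the steady-state meter-bar principle the apparatus builds on (symbols introduced here for illustration, not taken from the paper): the heat flow Q is inferred from the temperature gradient along each meter-bar, and the sample conductivity follows from Fourier's law across the sample thickness,

$$Q = k_{\text{bar}}\, A\, \frac{\Delta T_{\text{bar}}}{\Delta x_{\text{bar}}}, \qquad k_{\text{sample}} = \frac{Q\, t_{\text{sample}}}{A\, \Delta T_{\text{sample}}},$$

where A is the common cross-sectional area (113.09 mm² here), t_sample is the sample thickness, and ΔT_sample is the temperature drop across the sample faces extrapolated from the thermocouple readings.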

  2. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    Science.gov (United States)

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
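A hedged numpy sketch of the kind of regularized linear model the abstract describes (SVD of the LiDAR predictor matrix combined with ridge-type regularization); the Bayesian treatment and the specific prior choices of the paper are not reproduced, and the regularization weight below is a placeholder.

```python
# Illustrative ridge-regularized linear regression computed through the SVD of
# the predictor matrix -- the kind of device described for coping with many,
# highly correlated LiDAR predictors and few field plots.  The regularization
# weight lam is a placeholder, not a value from the paper.
import numpy as np

def ridge_via_svd(X, y, lam=1.0):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s ** 2 + lam)            # shrink each singular direction
    return Vt.T @ (d * (U.T @ y))     # regression coefficients

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))        # 50 field plots, 200 LiDAR metrics
beta_true = np.zeros(200)
beta_true[:5] = 1.0
y = X @ beta_true + rng.normal(scale=0.5, size=50)
beta_hat = ridge_via_svd(X, y, lam=10.0)
print("largest estimated coefficients:", np.round(np.sort(beta_hat)[-5:], 2))
```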

  3. Sample size affects 13C-18O clumping in CO2 derived from phosphoric acid digestion of carbonates

    Science.gov (United States)

    Wacker, U.; Fiebig, J.

    2011-12-01

    In the recent past, clumped isotope analysis of carbonates has become an important tool for terrestrial and marine paleoclimate reconstructions. For this purpose, 47/44 ratios of CO2 derived from phosphoric acid digestion of carbonates are measured. These values are compared to the corresponding stochastic 47/44 distribution ratios computed from the determined δ13C and δ18O values, with the deviation finally being expressed as Δ47. For carbonates precipitated in equilibrium with their parental water, the magnitude of Δ47 is a function of temperature only. This technique is based on the fact that the isotopic fractionation associated with phosphoric acid digestion of carbonates is kinetically controlled. In this way, the concentration of 13C-18O bonds in the evolved CO2 remains proportional to the number of corresponding bonds inside the carbonate lattice. A relationship between carbonate growth temperature and Δ47 has recently been determined experimentally by Ghosh et al. (2006), who performed the carbonate digestion with 103% H3PO4 at 25°C after precipitating the carbonates inorganically at temperatures ranging from 1-50°C. In order to investigate the kinetic parameters associated with the phosphoric acid digestion reaction at 25°C, we have analyzed several natural carbonates at varying sample sizes. Amongst these are NBS 19, internal Carrara marble, Arctica islandica and cold seep carbonates. Sample size was varied between 4 and 12 mg. All samples exhibit a systematic trend to increasing Δ47 values with decreasing sample size, with absolute variations being restricted to ≤0.10. Additional tests imply that this effect is related to the phosphoric acid digestion reaction. Most presumably, either the kinetic fractionation factor expressing the differences in 47/44 ratios between evolved CO2 and parental carbonate slightly depends on the concentration of the digested carbonate, or traces of water exchange with C-O-bearing species inside the acid, similar to
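For orientation, the clumped-isotope anomaly Δ47 referred to above is conventionally defined, in a simplified form that neglects the small mass-45 and mass-46 correction terms of the full definition, as the per-mil deviation of the measured mass-47/mass-44 ratio from the ratio expected for a stochastic isotope distribution:

$$\Delta_{47} \approx \left(\frac{R^{47}_{\text{measured}}}{R^{47}_{\text{stochastic}}} - 1\right) \times 1000 \;\text{(‰)}.$$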

  4. Assessment of minimum sample sizes required to adequately represent diversity reveals inadequacies in datasets of domestic dog mitochondrial DNA.

    Science.gov (United States)

    Webb, Kristen; Allard, Marc

    2010-02-01

    Evolutionary and forensic studies commonly choose the mitochondrial control region as the locus for which to evaluate the domestic dog. However, the number of dogs that need to be sampled in order to represent the control region variation present in the worldwide population is yet to be determined. Following the methods of Pereira et al. (2004), we have demonstrated the importance of surveying the complete control region rather than only the popular left domain. We have also evaluated sample saturation in terms of the haplotype number and the number of polymorphisms within the control region. Of the most commonly cited evolutionary research, only a single study has adequately surveyed the domestic dog population, while all forensic studies have failed to meet the minimum values. We recommend that future studies consider dataset size when designing experiments and ideally sample both domains of the control region in an appropriate number of domestic dogs.

  5. How taxonomic diversity, community structure, and sample size determine the reliability of higher taxon surrogates.

    Science.gov (United States)

    Neeson, Thomas M; Van Rijn, Itai; Mandelik, Yael

    2013-07-01

    Ecologists and paleontologists often rely on higher taxon surrogates instead of complete inventories of biological diversity. Despite their intrinsic appeal, the performance of these surrogates has been markedly inconsistent across empirical studies, to the extent that there is no consensus on appropriate taxonomic resolution (i.e., whether genus- or family-level categories are more appropriate) or their overall usefulness. A framework linking the reliability of higher taxon surrogates to biogeographic setting would allow for the interpretation of previously published work and provide some needed guidance regarding the actual application of these surrogates in biodiversity assessments, conservation planning, and the interpretation of the fossil record. We developed a mathematical model to show how taxonomic diversity, community structure, and sampling effort together affect three measures of higher taxon performance: the correlation between species and higher taxon richness, the relative shapes and asymptotes of species and higher taxon accumulation curves, and the efficiency of higher taxa in a complementarity-based reserve-selection algorithm. In our model, higher taxon surrogates performed well in communities in which a few common species were most abundant, and less well in communities with many equally abundant species. Furthermore, higher taxon surrogates performed well when there was a small mean and variance in the number of species per higher taxa. We also show that empirically measured species-higher-taxon correlations can be partly spurious (i.e., a mathematical artifact), except when the species accumulation curve has reached an asymptote. This particular result is of considerable practical interest given the widespread use of rapid survey methods in biodiversity assessment and the application of higher taxon methods to taxa in which species accumulation curves rarely reach an asymptote, e.g., insects.

  6. Body size and human energy requirements: reduced mass-specific resting energy expenditure in tall adults.

    Science.gov (United States)

    Heymsfield, Steven B; Childers, Douglas; Beetsch, Joel; Allison, David B; Pietrobelli, Angelo

    2007-11-01

    Two observations favor the presence of a lower mass-specific resting energy expenditure (REE/weight) in taller adult humans: an earlier report of height (H)-related differences in relative body composition; and a combined model based on Quetelet and Kleiber's classic equations suggesting that REE/weight ∝ H^(-0.5). This study tested the hypothesis that mass-specific REE scales negatively to height, with a secondary aim of exploring related associations between height, weight (W), surface area (SA), and REE. Two independent data sets (n = 344 and 884) were evaluated, both with REE measured by indirect calorimetry and the smaller of the two including fat estimates by dual-energy X-ray absorptiometry. Results support Quetelet's equation (W ∝ H^2), but Kleiber's equation approached the interspecific mammal form (REE ∝ W^0.75) only after adding adiposity measures to weight and age as REE predictors. REE/weight scaled as approximately H^(-0.5) in support of the hypothesis, with P values ranging from 0.17 to <0.001. REE and SA both scaled as approximately H^1.5, and REE/SA was nonsignificantly correlated with height in all groups. These observations suggest that adiposity needs to be considered when evaluating the intraspecific scaling of REE to weight; that relative to their weight, taller subjects require a lower energy intake for replacing resting heat losses than shorter subjects; that fasting endurance, approximated as fat mass/REE, increases as H^0.5; and that thermal balance is maintained independent of stature by evident stable associations between resting heat production and capacity of external heat release. These observations have implications for the modeling of adult human energy requirements and associate with anthropological concepts founded on body size.
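
    The scaling argument in the abstract can be followed in two lines of algebra: combining Quetelet's relation W ∝ H^2 with Kleiber's REE ∝ W^0.75 gives REE ∝ H^1.5, hence REE/W ∝ H^(1.5-2) = H^(-0.5). The numeric check below uses arbitrary proportionality constants, chosen only to illustrate the exponents, not fitted values from the study.

```python
# Combine Quetelet (W ~ H^2) and Kleiber (REE ~ W^0.75) to get REE/W ~ H^-0.5.
# The constants k_w and k_e are arbitrary; only the exponents matter here.
k_w, k_e = 13.0, 70.0   # hypothetical proportionality constants

def ree_per_weight(height_m: float) -> float:
    weight = k_w * height_m ** 2      # Quetelet: W proportional to H^2
    ree = k_e * weight ** 0.75        # Kleiber: REE proportional to W^0.75
    return ree / weight               # expected to scale as H^-0.5

for h1, h2 in [(1.5, 1.8), (1.6, 2.0)]:
    observed = ree_per_weight(h2) / ree_per_weight(h1)
    expected = (h2 / h1) ** -0.5
    print(f"H {h1}->{h2} m: REE/W ratio {observed:.4f}, H^-0.5 prediction {expected:.4f}")
```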

  7. Effective size of a wild salmonid population is greatly reduced by hatchery supplementation.

    Science.gov (United States)

    Christie, M R; Marine, M L; French, R A; Waples, R S; Blouin, M S

    2012-10-01

    Many declining and commercially important populations are supplemented with captive-born individuals that are intentionally released into the wild. These supplementation programs often create large numbers of offspring from relatively few breeding adults, which can have substantial population-level effects. We examined the genetic effects of supplementation on a wild population of steelhead (Oncorhynchus mykiss) from the Hood River, Oregon, by matching 12 run-years of hatchery steelhead back to their broodstock parents. We show that the effective number of breeders producing the hatchery fish (broodstock parents; Nb) was quite small (harmonic mean Nb = 25 fish per brood-year vs 373 for wild fish), and was exacerbated by a high variance in broodstock reproductive success among individuals within years. The low Nb caused hatchery fish to have decreased allelic richness, increased average relatedness, more loci in linkage disequilibrium and substantial levels of genetic drift in comparison with their wild-born counterparts. We also documented a substantial Ryman-Laikre effect whereby the additional hatchery fish doubled the total number of adult fish on the spawning grounds each year, but cut the effective population size of the total population (wild and hatchery fish combined) by nearly two-thirds. We further demonstrate that the Ryman-Laikre effect is most severe in this population when (1) >10% of fish allowed onto spawning grounds are from hatcheries and (2) the hatchery fish have high reproductive success in the wild. These results emphasize the trade-offs that arise when supplementation programs attempt to balance disparate goals (increasing production while maintaining genetic diversity and fitness).
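
    The Ryman-Laikre effect mentioned above has a standard closed form: if a fraction x of the offspring derive from a captive component with effective size N_c and the remainder from a wild component with effective size N_w, the total effective size is N_e = 1 / (x²/N_c + (1−x)²/N_w). The sketch below plugs in round numbers of the same order as the reported breeder counts (N_c ≈ 25, N_w ≈ 373) purely for illustration; it is not a re-analysis of the Hood River data.

```python
def ryman_laikre_ne(x: float, n_captive: float, n_wild: float) -> float:
    """Total effective population size when a fraction x of offspring come from
    the captive (hatchery) component (Ryman & Laikre 1991)."""
    return 1.0 / (x**2 / n_captive + (1.0 - x) ** 2 / n_wild)

n_captive, n_wild = 25, 373   # round numbers of the same order as the reported values
for x in (0.0, 0.1, 0.3, 0.5):
    print(f"hatchery fraction {x:.1f} -> total Ne = {ryman_laikre_ne(x, n_captive, n_wild):.0f}")
```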

  8. Growth regulators in reducing the size of orchid Fire-of-Star for commercialization in vase

    Directory of Open Access Journals (Sweden)

    Patricia Reiners Carvalho

    2016-05-01

    Full Text Available Fire-of-star (Epidendrum radicans Pav. ex Lindl.) is a terrestrial orchid native to Brazil that forms tussocks with leafy stems and many adventitious roots, producing a long inflorescence of about 1.0 m from the apex of the stem. It shows great potential in floriculture, but the long flowering stem complicates its marketing in vases. The objective of this study was to evaluate the effect of paclobutrazol (PBZ) and mepiquat chloride (CLM) on reducing the size of the orchid E. radicans. Plants with an average height of 15 cm were cultivated in a greenhouse with 50% shading. The growth regulators used were PBZ at doses of 0, 5, 10, 15 and 20 mg L-1, and CLM at doses of 0, 1, 2, 3, 4 and 5 mg L-1. Applications were made fortnightly, totaling ten applications. The experiment was set up in randomized complete blocks, one block for PBZ with 5 treatments and 10 replications and another for CLM with 6 treatments and 10 replications. Data were submitted to analysis of variance at 5% probability and, when significant, regression analysis was performed. The variables evaluated were number of shoots, plant height (cm), number of flower stems and leaf area. The results indicated that E. radicans plants treated with 5 mg L-1 PBZ were 50% shorter than the control plants. Plants treated with CLM at a dose of 1 mg L-1 were 25% shorter than the control plants, maintaining the aesthetic characteristics suitable for marketing in vases. The growth regulators at the applied doses did not affect the number of shoots or flower stems. PBZ-treated plants had 50% of the leaf area of the control, while those treated with CLM retained the same average leaf area as the control.

  9. Prediction errors in learning drug response from gene expression data - influence of labeling, sample size, and machine learning algorithm.

    Directory of Open Access Journals (Sweden)

    Immanuel Bayer

    Full Text Available Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment.
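
    The comparison to an "identically generated null model" described above can be sketched with a generic cross-validated regression on synthetic data: the same pipeline is run on the true labels and on permuted labels, and the gap between the two errors indicates whether any real signal is being learned. The data, features, and model below are placeholders (scikit-learn ridge regression), not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cell_lines, n_genes = 60, 500                 # small-n, high-dimensional setting
X = rng.normal(size=(n_cell_lines, n_genes))    # placeholder expression matrix
ic50 = X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=n_cell_lines)  # synthetic response

def cv_rmse(X, y):
    scores = cross_val_score(Ridge(alpha=10.0), X, y,
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()

real_err = cv_rmse(X, ic50)
null_err = cv_rmse(X, rng.permutation(ic50))    # identically built null model
print(f"CV RMSE, real labels: {real_err:.2f}; permuted labels: {null_err:.2f}")
```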

  10. Use of Homogeneously-Sized Carbon Steel Ball Bearings to Study Microbially-Influenced Corrosion in Oil Field Samples.

    Science.gov (United States)

    Voordouw, Gerrit; Menon, Priyesh; Pinnock, Tijan; Sharma, Mohita; Shen, Yin; Venturelli, Amanda; Voordouw, Johanna; Sexton, Aoife

    2016-01-01

    Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured, and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for eight produced waters with high numbers (10^5/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for five central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment, and 0.036 mm/yr for 2 CPF tank bottom sludges, which had high numbers of APB (10^6/ml) and SRB (10^8/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads, and injected with 10 ml/day of SRB-containing medium for 256 days gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion including determination of the distribution of corrosion rates, which allows
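
    Weight-loss corrosion rates of the kind quoted above follow from the standard conversion CR = K·W/(A·T·D) (ASTM G1-style), where W is the mass loss, A the exposed area, T the exposure time, D the steel density, and K = 8.76 × 10^4 gives the result in mm/yr. The sketch below applies this to a single bead of the stated size, assuming the whole spherical surface corrodes uniformly; the mass-loss and incubation-time values are made-up illustrations, not measurements from the study.

```python
import math

def corrosion_rate_mm_per_yr(mass_loss_g, area_cm2, hours, density_g_cm3=7.85):
    """Weight-loss corrosion rate, CR = K*W/(A*T*D) with K = 8.76e4 (result in mm/yr)."""
    return 8.76e4 * mass_loss_g / (area_cm2 * hours * density_g_cm3)

diameter_cm = 0.238                 # bead diameter from the study
area = math.pi * diameter_cm ** 2   # surface area of a sphere, pi * d^2
mass_loss = 0.0005                  # hypothetical 0.5 mg lost over the incubation
hours = 60 * 24                     # hypothetical 60-day incubation

print(f"bead area = {area:.3f} cm^2, CR = "
      f"{corrosion_rate_mm_per_yr(mass_loss, area, hours):.3f} mm/yr")
```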

  11. Influence of pH, Temperature and Sample Size on Natural and Enforced Syneresis of Precipitated Silica

    Directory of Open Access Journals (Sweden)

    Sebastian Wilhelm

    2015-12-01

    Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Here, two key open aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30% in comparison to measured values of 20% for natural syneresis.

  12. Soluble epoxide hydrolase gene deletion improves blood flow and reduces infarct size after cerebral ischemia in reproductively senescent female mice

    Directory of Open Access Journals (Sweden)

    Kristen L Zuloaga

    2015-01-01

    Full Text Available Soluble epoxide hydrolase (sEH), a key enzyme in the metabolism of vasodilatory epoxyeicosatrienoic acids (EETs), is sexually dimorphic, suppressed by estrogen, and contributes to underlying sex differences in cerebral blood flow and injury after cerebral ischemia. We tested the hypothesis that sEH inhibition or gene deletion in reproductively senescent (RS) female mice would increase cerebral perfusion and decrease infarct size following stroke. RS (15-18 months old) and young (3-4 months old) female sEH knockout (sEHKO) mice and wild type (WT) mice were subjected to 45 min middle cerebral artery occlusion (MCAO) with laser Doppler perfusion monitoring. WT mice were treated with vehicle or the sEH inhibitor t-AUCB at the time of reperfusion and every 24 h thereafter for 3 days. Differences in regional cerebral blood flow were measured in vivo using optical microangiography. Infarct size was measured 3 days after reperfusion. Infarct size and cerebral perfusion 24 h after MCAO were not altered by age. Both sEH gene deletion and sEH inhibition increased cortical perfusion 24 h after MCAO. Neither sEH gene deletion nor sEH inhibition reduced infarct size in young mice. However, sEH gene deletion, but not sEH inhibition of the hydrolase domain of the enzyme, decreased infarct size in RS mice. Results of these studies show that sEH gene deletion and sEH inhibition enhance cortical perfusion following MCAO and sEH gene deletion reduces damage after ischemia in RS female mice; however, this neuroprotection is absent in young mice.

  13. Acute administration of cannabidiol in vivo suppresses ischaemia-induced cardiac arrhythmias and reduces infarct size when given at reperfusion

    Science.gov (United States)

    Walsh, Sarah K; Hepburn, Claire Y; Kane, Kathleen A; Wainwright, Cherry L

    2010-01-01

    Background and purpose: Cannabidiol (CBD) is a phytocannabinoid with anti-apoptotic, anti-inflammatory and antioxidant effects, and has recently been shown to exert a tissue sparing effect during chronic myocardial ischaemia and reperfusion (I/R). However, it is not known whether CBD is cardioprotective in the acute phase of I/R injury and the present studies tested this hypothesis. Experimental approach: Male Sprague-Dawley rats received either vehicle or CBD (10 or 50 µg·kg−1 i.v.) 10 min before 30 min coronary artery occlusion or CBD (50 µg·kg−1 i.v.) 10 min before reperfusion (2 h). The appearance of ventricular arrhythmias during the ischaemic and immediate post-reperfusion periods was recorded and the hearts were excised for infarct size determination and assessment of mast cell degranulation. Arterial blood was withdrawn at the end of the reperfusion period to assess platelet aggregation in response to collagen. Key results: CBD reduced both the total number of ischaemia-induced arrhythmias and infarct size when administered prior to ischaemia, an effect that was dose-dependent. Infarct size was also reduced when CBD was given prior to reperfusion. CBD (50 µg·kg−1 i.v.) given prior to ischaemia, but not at reperfusion, attenuated collagen-induced platelet aggregation compared with control, but had no effect on ischaemia-induced mast cell degranulation. Conclusions and implications: This study demonstrates that CBD is cardioprotective in the acute phase of I/R by both reducing ventricular arrhythmias and attenuating infarct size. The anti-arrhythmic effect, but not the tissue sparing effect, may be mediated through an inhibitory effect on platelet activation. PMID:20590615

  14. Reduced size of the amygdala in individuals with 47,XXY and 47,XXX karyotypes.

    Science.gov (United States)

    Patwardhan, Anil J; Brown, Wendy E; Bender, Bruce G; Linden, Mary G; Eliez, Stephan; Reiss, Allan L

    2002-01-08

    The excess of 47,XXX and 47,XXY karyotypes found in cytogenetic screening studies of individuals with schizophrenia has given support for an increased risk of psychiatric illness among men and women with sex chromosomal aneuploidy (SCA). Mesial temporal lobe structures, including the amygdala and hippocampus, are thought to be associated with abnormalities of mood and behavior in humans and in the neurobiology of schizophrenia. This study focuses on variations in volumes of mesial temporal lobe structures in men and women with SCA. Utilizing an unselected birth cohort of subjects with SCA and high-resolution magnetic resonance imaging (MRI), we investigated the neuroanatomical consequences of a supernumerary X chromosome on the morphology of the amygdala and hippocampus. Regional and total brain volumes were measured in 10 subjects with 47,XXY, 10 subjects with 47,XXX, and 20 euploid controls. Amygdala volumes were significantly reduced in men with 47,XXY, compared to control men, while the decrease in women with 47,XXX was not as pronounced. Hippocampus volumes were preserved in both groups, compared to same-gender controls. Longitudinal studies of SCA individuals have shown an increased incidence of mild psychopathology and behavioral dysfunction in men with 47,XXY and more overt psychiatric illness in women with 47,XXX, compared to control populations. The alteration in amygdala volumes in individuals with a supernumerary X chromosome may provide a neuroanatomic basis for these findings. Copyright 2001 Wiley-Liss, Inc.

  15. Reduced population size does not affect the mating strategy of a vulnerable and endemic seabird

    Science.gov (United States)

    Nava, Cristina; Neves, Verónica C.; Andris, Malvina; Dubois, Marie-Pierre; Jarne, Philippe; Bolton, Mark; Bried, Joël

    2017-12-01

    Bottleneck episodes may occur in small and isolated animal populations, which may result in decreased genetic diversity and increased inbreeding, but also in mating strategy adjustment. This was evaluated in the vulnerable and socially monogamous Monteiro's Storm-petrel Hydrobates monteiroi, a seabird endemic to the Azores archipelago, which has suffered a dramatic population decline since the XVth century. To do this, we conducted a genetic study (18 microsatellite markers) in the population from Praia islet, which has been monitored over 16 years. We found no evidence that a genetic bottleneck was associated with this demographic decline. Monteiro's Storm-petrels paired randomly with respect to genetic relatedness and body measurements. Pair fecundity was unrelated to genetic relatedness between partners. We detected only two cases of extra-pair parentage associated with an extra-pair copulation (out of 71 offspring). Unsuccessful pairs were most likely to divorce the next year, but genetic relatedness between pair mates and pair breeding experience did not influence divorce. Divorce enabled individuals to improve their reproductive performance after re-mating only when the new partner was experienced. Re-pairing with an experienced partner occurred more frequently when divorcees changed nest than when they retained their nest. This study shows that even in strongly reduced populations, genetic diversity can be maintained, inbreeding does not necessarily occur, and random pairing is not risky in terms of pair lifetime reproductive success. Given, however, that we found no clear phenotypic mate choice criteria, the part played by non-morphological traits should be assessed more accurately in order to better understand seabird mating strategies.

  16. Peer groups splitting in Croatian EQA scheme: a trade-off between homogeneity and sample size number.

    Science.gov (United States)

    Vlašić Tanasković, Jelena; Coucke, Wim; Leniček Krleža, Jasna; Vuković Rodriguez, Jadranka

    2017-03-01

    Laboratory evaluation through external quality assessment (EQA) schemes is often performed as a 'peer group' comparison, under the assumption that matrix effects influence comparisons between results of different methods for analytes where no commutable materials with reference value assignment are available. In EQA schemes that are not large but have many available instruments and reagent options for the same analyte, homogeneous peer groups must be created with an adequate number of results to enable satisfactory statistical evaluation. We proposed a multivariate analysis of variance (MANOVA)-based test to evaluate the heterogeneity of peer groups within the Croatian EQA biochemistry scheme and identify groups where further splitting might improve laboratory evaluation. EQA biochemistry results were divided according to the instruments used per analyte, and the MANOVA test was used to verify statistically significant differences between subgroups. The number of samples was determined by sample size calculation ensuring a power of 90% and allowing the false flagging rate to increase by not more than 5%. When statistically significant differences between subgroups were found, clear improvement of laboratory evaluation was assessed before splitting groups. After evaluating 29 peer groups, we found strong evidence for further splitting of six groups. An overall improvement for 6% of reported results was observed, with the percentage as high as 27.4% for one particular method. Defining maximal allowable differences between subgroups based on flagging rate change, followed by sample size planning and MANOVA, identifies heterogeneous peer groups where further splitting improves laboratory evaluation and enables continuous monitoring of peer group heterogeneity within EQA schemes.
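
    A MANOVA-based heterogeneity check of the kind described can be sketched with statsmodels: results for several EQA samples are treated as the multivariate response and the instrument subgroup as the factor. The data frame below is synthetic and the analyte/instrument names are placeholders; the authors' flagging-rate criteria and sample-size planning are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
labs_per_group, groups = 30, ["instrument_A", "instrument_B"]

# Synthetic results for three EQA samples; instrument_B gets a small offset on sample_1.
rows = []
for g in groups:
    offset = 0.15 if g == "instrument_B" else 0.0
    for _ in range(labs_per_group):
        rows.append({"instrument": g,
                     "sample_1": rng.normal(5.0 + offset, 0.2),
                     "sample_2": rng.normal(7.5, 0.3),
                     "sample_3": rng.normal(9.0, 0.3)})
df = pd.DataFrame(rows)

fit = MANOVA.from_formula("sample_1 + sample_2 + sample_3 ~ instrument", data=df)
print(fit.mv_test())   # Wilks' lambda etc. for the instrument effect
```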

  17. Spatial Distribution and Minimum Sample Size for Overwintering Larvae of the Rice Stem Borer Chilo suppressalis (Walker) in Paddy Fields.

    Science.gov (United States)

    Arbab, A

    2014-10-01

    The rice stem borer, Chilo suppressalis (Walker), feeds almost exclusively in paddy fields in most regions of the world. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling procedures, and adopting precise agricultural techniques. Field experiments were conducted during 2011 and 2012 to estimate the spatial distribution pattern of the overwintering larvae. Data were analyzed using five distribution indices and two regression models (Taylor and Iwao). All of the indices and Taylor's model indicated a random spatial distribution pattern of the rice stem borer overwintering larvae. Iwao's patchiness regression was inappropriate for our data, as shown by the non-homogeneity of variance, whereas Taylor's power law fitted the data well. The coefficients of Taylor's power law for the combined 2 years of data were a = -0.1118, b = 0.9202 ± 0.02, and r^2 = 96.81. Taylor's power law parameters were used to compute the minimum sample size needed to estimate populations at three fixed precision levels, 5, 10, and 25%, at 0.05 probability. Results based on these parameters suggest that the minimum sample sizes needed for a precision level of 0.25 were 74 and 20 rice stubbles for rice stem borer larvae when the average density is near 0.10 and 0.20 larvae per rice stubble, respectively.
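
    A common way to turn Taylor's power law into a minimum sample size is the fixed-precision formula n = (t/D)^2 · a · m^(b-2), where m is the mean density, D the desired precision (standard error divided by the mean), and a the back-transformed intercept (a = 10^intercept when the law is fitted on a log10 scale). The sketch below is that generic formula only; whether it reproduces the exact stubble counts reported above depends on how the authors defined a, D and t, which the abstract does not fully specify.

```python
import math

def min_sample_size(mean_density, a, b, precision, t=1.96):
    """Fixed-precision sample size from Taylor's power law, n = (t/D)^2 * a * m^(b-2)."""
    return (t / precision) ** 2 * a * mean_density ** (b - 2.0)

a = 10 ** -0.1118   # assuming the reported -0.1118 is the log10 intercept
b = 0.9202
for m in (0.10, 0.20):
    n = min_sample_size(m, a, b, precision=0.25)
    print(f"mean {m:.2f} larvae/stubble -> n = {math.ceil(n)} stubbles (D = 0.25, t = 1.96)")
```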

  18. Endocranial volume of Australopithecus africanus: new CT-based estimates and the effects of missing data and small sample size.

    Science.gov (United States)

    Neubauer, Simon; Gunz, Philipp; Weber, Gerhard W; Hublin, Jean-Jacques

    2012-04-01

    Estimation of endocranial volume in Australopithecus africanus is important in interpreting early hominin brain evolution. However, the number of individuals available for investigation is limited and most of these fossils are, to some degree, incomplete and/or distorted. Uncertainties of the required reconstruction ('missing data uncertainty') and the small sample size ('small sample uncertainty') both potentially bias estimates of the average and within-group variation of endocranial volume in A. africanus. We used CT scans, electronic preparation (segmentation), mirror-imaging and semilandmark-based geometric morphometrics to generate and reconstruct complete endocasts for Sts 5, Sts 60, Sts 71, StW 505, MLD 37/38, and Taung, and measured their endocranial volumes (EV). To get a sense of the reliability of these new EV estimates, we then used simulations based on samples of chimpanzees and humans to: (a) test the accuracy of our approach, (b) assess missing data uncertainty, and (c) appraise small sample uncertainty. Incorporating missing data uncertainty of the five adult individuals, A. africanus was found to have an average adult endocranial volume of 454-461 ml with a standard deviation of 66-75 ml. EV estimates for the juvenile Taung individual range from 402 to 407 ml. Our simulations show that missing data uncertainty is small given the missing portions of the investigated fossils, but that small sample sizes are problematic for estimating species average EV. It is important to take these uncertainties into account when different fossil groups are being compared. Copyright © 2012 Elsevier Ltd. All rights reserved.
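
    The "small sample uncertainty" assessed above can be illustrated with a simple bootstrap over a five-specimen sample: resampling with replacement shows how unstable a species mean is when n = 5. The endocranial volumes below are hypothetical values chosen only to be of the same order as the reported average (~455 ml); they are not the published per-fossil estimates, and this is not the simulation procedure used in the paper.

```python
import random
import statistics

# Hypothetical adult endocranial volumes (ml), n = 5; not the published per-fossil values.
ev = [410, 435, 455, 480, 495]

random.seed(0)
boot_means = [statistics.mean(random.choices(ev, k=len(ev))) for _ in range(10_000)]
boot_means.sort()
lo, hi = boot_means[250], boot_means[-251]   # ~95% percentile interval

print(f"sample mean = {statistics.mean(ev):.0f} ml")
print(f"bootstrap 95% interval for the mean: {lo:.0f}-{hi:.0f} ml")
```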

  19. Cinacalcet effectively reduces parathyroid hormone secretion and gland volume regardless of pretreatment gland size in patients with secondary hyperparathyroidism.

    Science.gov (United States)

    Komaba, Hirotaka; Nakanishi, Shohei; Fujimori, Akira; Tanaka, Motoko; Shin, Jeongsoo; Shibuya, Koji; Nishioka, Masato; Hasegawa, Hirohito; Kurosawa, Takeshi; Fukagawa, Masafumi

    2010-12-01

    Cinacalcet is effective in reducing serum parathyroid hormone (PTH) in patients with secondary hyperparathyroidism. However, it has not been proven whether parathyroid gland size predicts response to therapy and whether cinacalcet is capable of inducing a reduction in parathyroid volume. This 52-week, multicenter, open-label study enrolled hemodialysis patients with moderate to severe secondary hyperparathyroidism (intact PTH >300 pg/ml). Doses of cinacalcet were adjusted between 25 and 100 mg to achieve the target intact PTH level, and parathyroid gland size was assessed at baseline, week 26, and week 52. Findings were also compared with those of historical controls. Of the 81 subjects enrolled, 56 had parathyroid glands smaller than 500 mm^3 (group S) and 25 had at least one enlarged gland larger than 500 mm^3 (group L). Treatment with cinacalcet effectively decreased intact PTH by 55% from baseline in group S and by 58% in group L. A slightly greater proportion of patients in group S versus group L achieved at least a 30% reduction in intact PTH from baseline (88 versus 78%), but this was not statistically significant. Cinacalcet therapy also resulted in a significant reduction in parathyroid gland volume regardless of pretreatment size, which was in sharp contrast to historical controls (n = 87) where parathyroid gland volume progressively increased with traditional therapy alone. Cinacalcet effectively decreases serum PTH levels and concomitantly reduces parathyroid gland volume, even in patients with marked parathyroid hyperplasia.

  20. Hypothyroidism Reduces the Size of Ovarian Follicles and Promotes Hypertrophy of Periovarian Fat with Infiltration of Macrophages in Adult Rabbits.

    Science.gov (United States)

    Rodríguez-Castelán, J; Méndez-Tepepa, M; Carrillo-Portillo, Y; Anaya-Hernández, A; Rodríguez-Antolín, J; Zambrano, E; Castelán, F; Cuevas-Romero, E

    2017-01-01

    Ovarian failure is related to dyslipidemias and inflammation, as well as to hypertrophy and dysfunction of the visceral adipose tissue (VAT). Although hypothyroidism has been associated with obesity, dyslipidemias, and inflammation in humans and animals, its influence on the characteristics of ovarian follicles in adulthood is scarcely known. Control and hypothyroid rabbits were used to analyze the ovarian follicles, expression of aromatase in the ovary, serum concentration of lipids, leptin, and uric acid, size of adipocytes, and infiltration of macrophages in the periovarian VAT. Hypothyroidism did not affect the percentage of functional or atretic follicles. However, it reduced the size of primary, secondary, and tertiary follicles considered as large and the expression of aromatase in the ovary. This effect was associated with high serum concentrations of total cholesterol and low-density lipoprotein cholesterol (LDL-C). In addition, hypothyroidism induced hypertrophy of adipocytes and a major infiltration of CD68+ macrophages into the periovarian VAT. Our results suggest that the reduced size of ovarian follicles promoted by hypothyroidism could be associated with dyslipidemias, hypertrophy, and inflammation of the periovarian VAT. Present findings may be useful to understand the influence of hypothyroidism in the ovary function in adulthood.

  1. Hypothyroidism Reduces the Size of Ovarian Follicles and Promotes Hypertrophy of Periovarian Fat with Infiltration of Macrophages in Adult Rabbits

    Directory of Open Access Journals (Sweden)

    J. Rodríguez-Castelán

    2017-01-01

    Full Text Available Ovarian failure is related to dyslipidemias and inflammation, as well as to hypertrophy and dysfunction of the visceral adipose tissue (VAT). Although hypothyroidism has been associated with obesity, dyslipidemias, and inflammation in humans and animals, its influence on the characteristics of ovarian follicles in adulthood is scarcely known. Control and hypothyroid rabbits were used to analyze the ovarian follicles, expression of aromatase in the ovary, serum concentration of lipids, leptin, and uric acid, size of adipocytes, and infiltration of macrophages in the periovarian VAT. Hypothyroidism did not affect the percentage of functional or atretic follicles. However, it reduced the size of primary, secondary, and tertiary follicles considered as large and the expression of aromatase in the ovary. This effect was associated with high serum concentrations of total cholesterol and low-density lipoprotein cholesterol (LDL-C). In addition, hypothyroidism induced hypertrophy of adipocytes and a major infiltration of CD68+ macrophages into the periovarian VAT. Our results suggest that the reduced size of ovarian follicles promoted by hypothyroidism could be associated with dyslipidemias, hypertrophy, and inflammation of the periovarian VAT. Present findings may be useful to understand the influence of hypothyroidism in the ovary function in adulthood.

  2. Hypothyroidism Reduces the Size of Ovarian Follicles and Promotes Hypertrophy of Periovarian Fat with Infiltration of Macrophages in Adult Rabbits

    Science.gov (United States)

    Rodríguez-Castelán, J.; Méndez-Tepepa, M.; Carrillo-Portillo, Y.; Anaya-Hernández, A.; Zambrano, E.

    2017-01-01

    Ovarian failure is related to dyslipidemias and inflammation, as well as to hypertrophy and dysfunction of the visceral adipose tissue (VAT). Although hypothyroidism has been associated with obesity, dyslipidemias, and inflammation in humans and animals, its influence on the characteristics of ovarian follicles in adulthood is scarcely known. Control and hypothyroid rabbits were used to analyze the ovarian follicles, expression of aromatase in the ovary, serum concentration of lipids, leptin, and uric acid, size of adipocytes, and infiltration of macrophages in the periovarian VAT. Hypothyroidism did not affect the percentage of functional or atretic follicles. However, it reduced the size of primary, secondary, and tertiary follicles considered as large and the expression of aromatase in the ovary. This effect was associated with high serum concentrations of total cholesterol and low-density lipoprotein cholesterol (LDL-C). In addition, hypothyroidism induced hypertrophy of adipocytes and a major infiltration of CD68+ macrophages into the periovarian VAT. Our results suggest that the reduced size of ovarian follicles promoted by hypothyroidism could be associated with dyslipidemias, hypertrophy, and inflammation of the periovarian VAT. Present findings may be useful to understand the influence of hypothyroidism in the ovary function in adulthood. PMID:28133606

  3. Arecibo Radar Observation of Near-Earth Asteroids: Expanded Sample Size, Determination of Radar Albedos, and Measurements of Polarization Ratios

    Science.gov (United States)

    Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.

    2017-10-01

    The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, elongated, as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedos for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation. Otherwise we derive a geometric cross-section from a measured diameter. When radar imaging is available, the diameter was measured from the apparent range depth. However, when radar imaging was not available, we used the continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections.We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we
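
    The link between CW bandwidth and size mentioned above is the standard delay-Doppler relation B = 4πD cos(δ)/(λP): the limb-to-limb bandwidth B grows with diameter D and shrinks with rotation period P, where δ is the sub-radar latitude (δ = 0 for an equatorial view) and λ the radar wavelength. The sketch below inverts this for D using the ~12.6 cm wavelength of Arecibo's S-band planetary radar; the bandwidth and period values are made-up illustrations, not measurements from this survey.

```python
import math

WAVELENGTH_M = 0.126   # Arecibo S-band planetary radar, ~2380 MHz

def diameter_from_bandwidth(bandwidth_hz, period_s, subradar_lat_deg=0.0,
                            wavelength_m=WAVELENGTH_M):
    """Invert B = 4*pi*D*cos(delta) / (lambda*P) for the diameter D (meters)."""
    return bandwidth_hz * wavelength_m * period_s / (
        4.0 * math.pi * math.cos(math.radians(subradar_lat_deg)))

# Hypothetical measurement: 10 Hz limb-to-limb bandwidth, 2.5 h rotation period.
d = diameter_from_bandwidth(bandwidth_hz=10.0, period_s=2.5 * 3600)
print(f"implied diameter (equatorial view): {d:.0f} m")
```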

  4. A simple and fast method to study the hydrodynamic size difference of protein disulfide isomerase in oxidized and reduced form using gold nanoparticles and dynamic light scattering.

    Science.gov (United States)

    Zheng, Tianyu; Cherubin, Patrick; Cilenti, Lucia; Teter, Ken; Huo, Qun

    2016-02-07

    The hydrodynamic dimension of a protein is a reflection of both its molecular weight and its tertiary structure. Studying the hydrodynamic dimensions of proteins in solution can help elucidate the structural properties of proteins. Here we report a simple and fast method to measure the hydrodynamic size of a relatively small protein, protein disulfide isomerase (PDI), using gold nanoparticle probes combined with dynamic light scattering. Proteins can readily adsorb to citrate-capped gold nanoparticles to form a protein corona. By measuring the average diameter of the gold nanoparticles before and after protein corona formation, the hydrodynamic diameter of the protein can be deduced from the net particle size increase of the assay solution. This study found that when the disulfide bonds in PDI are reduced to thiols, the reduced PDI exhibits a smaller hydrodynamic diameter than the oxidized PDI. This finding is in good agreement with the X-ray diffraction analysis of PDI in single crystals. In comparison with other techniques used for protein hydrodynamic size analysis, the current method is easy to use, requires only a trace amount of protein sample, and gives results in minutes instead of hours.
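
    The size deduction described above can be written out in one line: if a full protein monolayer forms on the particle, the hydrodynamic diameter measured by DLS increases by roughly twice the protein's hydrodynamic diameter, so d_protein ≈ (D_with_corona − D_bare)/2. The sketch below uses hypothetical DLS readings, not values from the paper, and assumes a densely packed monolayer.

```python
def protein_hydrodynamic_diameter(d_bare_nm: float, d_corona_nm: float) -> float:
    """Assuming a dense protein monolayer, the particle diameter grows by two
    protein diameters (one per side), so d_protein ~ (d_corona - d_bare) / 2."""
    return (d_corona_nm - d_bare_nm) / 2.0

# Hypothetical DLS readings (nm) before and after corona formation.
bare_gold = 22.0
with_oxidized_pdi = 34.0
with_reduced_pdi = 31.0

print("oxidized PDI ~", protein_hydrodynamic_diameter(bare_gold, with_oxidized_pdi), "nm")
print("reduced  PDI ~", protein_hydrodynamic_diameter(bare_gold, with_reduced_pdi), "nm")
```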

  5. Capture efficiency and size selectivity of sampling gears targeting red-swamp crayfish in several freshwater habitats

    Directory of Open Access Journals (Sweden)

    Paillisson J.-M.

    2011-05-01

    Full Text Available The ecological importance of the red-swamp crayfish (Procambarus clarkii) in the functioning of freshwater aquatic ecosystems is becoming more evident. It is important to know the limitations of sampling methods targeting this species, because accurate determination of population characteristics is required for predicting the ecological success of P. clarkii and its potential impacts on invaded ecosystems. In the current study, we addressed the question of trap efficiency by comparing the population structure provided by eight trap devices (varying in number and position of entrances, mesh size, trap size and construction materials) in three habitats (a pond, a reed bed and a grassland) in a French marsh in spring 2010. Based on a large collection of P. clarkii (n = 2091, 272 and 213 respectively in the pond, reed bed and grassland habitats), we found that semi-cylindrical traps made from 5.5 mm mesh galvanized steel wire (SCG) were the most efficient in terms of catch probability (96.7–100% compared to 15.7–82.8% depending on trap type and habitat) and catch-per-unit effort (CPUE: 15.3, 6.0 and 5.1 crayfish·trap-1·24 h-1 compared to 0.2–4.4, 2.9 and 1.7 crayfish·trap-1·24 h-1 for the other types of fishing gear in the pond, reed bed and grassland respectively). The SCG trap was also the most effective for sampling all size classes, especially small individuals (carapace length ≤ 30 mm). Sex ratio was balanced in all cases. SCG could be considered as appropriate trapping gear likely to give more realistic information about P. clarkii population characteristics than many other trap types. Further investigation is needed to assess the catching effort required for ultimately proposing a standardised sampling method in a large range of habitats.

  6. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Science.gov (United States)

    Vasiliu, Daniel; Clamons, Samuel; McDonough, Molly; Rabe, Brian; Saha, Margaret

    2015-01-01

    Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
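
    The general idea of ranking genes with a penalized classifier and then choosing a cutoff by simulation rather than cross-validation can be sketched as below. This uses an L1-penalized logistic regression from scikit-learn as a stand-in for the authors' penalized Euclidean distance (PED) classifier, and a simple label-permutation null in place of their specific simulation procedure, so it illustrates the workflow rather than reproducing the published method; all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_genes, n_true = 8, 2000, 20           # ultra-low n, high dimensionality
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = rng.normal(size=(n_samples, n_genes))
X[y == 1, :n_true] += 2.0                           # first 20 genes are truly differential

def rank_genes(X, y):
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    return np.argsort(-np.abs(clf.coef_[0]))        # genes ordered by |coefficient|

observed_top = rank_genes(X, y)[:50]

# Permutation null: how many coefficients stay nonzero when the labels are shuffled?
null_nonzero = []
for _ in range(20):
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(
        X, rng.permutation(y))
    null_nonzero.append(int(np.sum(np.abs(clf.coef_[0]) > 0)))

print("true genes recovered in top 50:", int(np.sum(observed_top < n_true)))
print("mean nonzero coefficients under permuted labels:", np.mean(null_nonzero))
```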

  7. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  8. Hierarchical distance-sampling models to estimate population size and habitat-specific abundance of an island endemic.

    Science.gov (United States)

    Sillett, T Scott; Chandler, Richard B; Royle, J Andrew; Kery, Marc; Morrison, Scott A

    2012-10-01

    Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring respectively, considerably lower than a previously published but statistically problematic estimate of 12 500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural
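
    The core idea behind the distance sampling in this abstract is that raw counts are corrected by the probability of detecting a bird within the survey circle. With a half-normal detection function g(r) = exp(−r²/2σ²) and birds assumed uniformly distributed within the 150 m radius circle, the average detection probability is p̄ = ∫₀ʷ g(r)·2πr dr / (πw²), and density is estimated as n/(p̄·πw²). The sketch below is only this single-circle correction with made-up σ and count values; it is not the full hierarchical model of Royle, Dawson, and Bates that the study extends.

```python
import math

def mean_detection_prob(sigma_m: float, radius_m: float, steps: int = 10_000) -> float:
    """Average half-normal detection probability over a circle of given radius,
    p_bar = integral_0^w exp(-r^2 / (2 sigma^2)) * 2*pi*r dr / (pi * w^2)."""
    dr = radius_m / steps
    integral = sum(math.exp(-r * r / (2 * sigma_m**2)) * 2 * math.pi * r * dr
                   for r in (i * dr for i in range(steps)))
    return integral / (math.pi * radius_m**2)

sigma, radius = 60.0, 150.0   # hypothetical detection scale; 150 m = half the 300 m circle
counted = 3                   # hypothetical jays detected at one point

p_bar = mean_detection_prob(sigma, radius)
area_ha = math.pi * radius**2 / 10_000
density = counted / (p_bar * area_ha)
print(f"p_bar = {p_bar:.2f}, estimated density = {density:.2f} jays per ha")
```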

  9. Remediation of chromium-contaminated water using biogenic nano-sized materials and metal-reducing bacteria.

    Science.gov (United States)

    Seo, Hyunhee; Sun, Eunyoung; Roh, Yul

    2013-06-01

    As an environmental nanotechnology, nano-sized materials have the potential to create novel and effective in-situ and ex-situ treatments for contaminated groundwater due to their high catalytic reactivity, large surface area, and dispersibility. In this study the efficiency of Cr(VI) reduction and immobilization using biotic and abiotic nano-sized materials (NSMs) and metal-reducing bacteria (MRB) was evaluated to remediate Cr(VI)-contaminated groundwater in batch and column tests. The results of this study revealed that the combination of the mixed MRB and bio-FeS/siderite showed the highest efficiency of Cr(VI) reduction and immobilization. Cr(VI) reduction by MRB and NSMs could impact the solubility of Cr(VI) and induce geochemical changes favorable for precipitation and adsorption.

  10. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Science.gov (United States)

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.

  11. Riociguat reduces infarct size and post-infarct heart failure in mouse hearts: insights from MRI/PET imaging.

    Directory of Open Access Journals (Sweden)

    Carmen Methner

    Full Text Available Stimulation of the nitric oxide (NO)–soluble guanylate cyclase (sGC)–protein kinase G (PKG) pathway confers protection against acute ischaemia/reperfusion injury, but more chronic effects in reducing post-myocardial infarction (MI) heart failure are less defined. The aim of this study was to determine not only whether the sGC stimulator riociguat reduces infarct size but also whether it protects against the development of post-MI heart failure. Mice were subjected to 30 min ischaemia via ligation of the left main coronary artery to induce MI, and either placebo or riociguat (1.2 µmol/l) was given as a bolus 5 min before and 5 min after the onset of reperfusion. After 24 hours, both late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) and (18)F-FDG positron emission tomography (PET) were performed to determine infarct size. In the riociguat-treated mice, the resulting infarct size was smaller (8.5 ± 2.5% of total LV mass vs. 21.8 ± 1.7% in controls, p = 0.005) and LV systolic function analysed by MRI was better preserved (60.1 ± 3.4% of pre-ischaemic vs. 44.2 ± 3.1% in controls, p = 0.005). After 28 days, LV systolic function by echocardiography in the treated group was still better preserved (63.5 ± 3.2% vs. 48.2 ± 2.2% in controls, p = 0.004). Taken together, mice treated acutely at the onset of reperfusion with the sGC stimulator riociguat have smaller infarct size and better long-term preservation of LV systolic function