Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question-how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes-the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Sample size determination and power
Ryan, Thomas P, Jr
2013-01-01
THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.
How Sample Size Affects a Sampling Distribution
Mulekar, Madhuri S.; Siegel, Murray H.
2009-01-01
If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
Sample Size Estimation: The Easy Way
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
Bhandari, Mohit; Tornetta, Paul; Rampersad, Shelly-Ann; Sprague, Sheila; Heels-Ansdell, Diane; Sanders, David W.; Schemitsch, Emil H.; Swiontkowski, Marc; Walter, Stephen; Guyatt, Gordon; Buckingham, Lisa; Leece, Pamela; Viveiros, Helena; Mignott, Tashay; Ansell, Natalie; Sidorkewicz, Natalie; Agel, Julie; Bombardier, Claire; Berlin, Jesse A.; Bosse, Michael; Browner, Bruce; Gillespie, Brenda; O'Brien, Peter; Poolman, Rudolf; Macleod, Mark D.; Carey, Timothy; Leitch, Kellie; Bailey, Stuart; Gurr, Kevin; Konito, Ken; Bartha, Charlene; Low, Isolina; MacBean, Leila V.; Ramu, Mala; Reiber, Susan; Strapp, Ruth; Tieszer, Christina; Kreder, Hans; Stephen, David J. G.; Axelrod, Terry S.; Yee, Albert J. M.; Richards, Robin R.; Finkelstein, Joel; Holtby, Richard M.; Cameron, Hugh; Cameron, John; Gofton, Wade; Murnaghan, John; Schatztker, Joseph; Bulmer, Beverly; Conlan, Lisa; Laflamme, Yves; Berry, Gregory; Beaumont, Pierre; Ranger, Pierre; Laflamme, Georges-Henri; Jodoin, Alain; Renaud, Eric; Gagnon, Sylvain; Maurais, Gilles; Malo, Michel; Fernandes, Julio; Latendresse, Kim; Poirier, Marie-France; Daigneault, Gina; McKee, Michael M.; Waddell, James P.; Bogoch, Earl R.; Daniels, Timothy R.; McBroom, Robert R.; Vicente, Milena R.; Storey, Wendy; Wild, Lisa M.; McCormack, Robert; Perey, Bertrand; Goetz, Thomas J.; Pate, Graham; Penner, Murray J.; Panagiotopoulos, Kostas; Pirani, Shafique; Dommisse, Ian G.; Loomer, Richard L.; Stone, Trevor; Moon, Karyn; Zomar, Mauri; Webb, Lawrence X.; Teasdall, Robert D.; Birkedal, John Peter; Martin, David Franklin; Ruch, David S.; Kilgus, Douglas J.; Pollock, David C.; Harris, Mitchel Brion; Wiesler, Ethan Ron; Ward, William G.; Shilt, Jeffrey Scott; Koman, Andrew L.; Poehling, Gary G.; Kulp, Brenda; Creevy, William R.; Stein, Andrew B.; Bono, Christopher T.; Einhorn, Thomas A.; Brown, T. Desmond; Pacicca, Donna; Sledge, John B.; Foster, Timothy E.; Voloshin, Ilva; Bolton, Jill; Carlisle, Hope; Shaughnessy, Lisa; Ombremsky, William T.; LeCroy, C. Michael; Meinberg, Eric G.; Messer, Terry M.; Craig, William L.; Dirschl, Douglas R.; Caudle, Robert; Harris, Tim; Elhert, Kurt; Hage, William; Jones, Robert; Piedrahita, Luis; Schricker, Paul O.; Driver, Robin; Godwin, Jean; Hansley, Gloria; Obremskey, William Todd; Kregor, Philip James; Tennent, Gregory; Truchan, Lisa M.; Sciadini, Marcus; Shuler, Franklin D.; Driver, Robin E.; Nading, Mary Alice; Neiderstadt, Jacky; Vap, Alexander R.; Vallier, Heather A.; Patterson, Brendan M.; Wilber, John H.; Wilber, Roger G.; Sontich, John K.; Moore, Timothy Alan; Brady, Drew; Cooperman, Daniel R.; Davis, John A.; Cureton, Beth Ann; Mandel, Scott; Orr, R. Douglas; Sadler, John T. S.; Hussain, Tousief; Rajaratnam, Krishan; Petrisor, Bradley; Drew, Brian; Bednar, Drew A.; Kwok, Desmond C. H.; Pettit, Shirley; Hancock, Jill; Cole, Peter A.; Smith, Joel J.; Brown, Gregory A.; Lange, Thomas A.; Stark, John G.; Levy, Bruce; Swiontkowski, Marc F.; Garaghty, Mary J.; Salzman, Joshua G.; Schutte, Carol A.; Tastad, Linda Toddie; Vang, Sandy; Seligson, David; Roberts, Craig S.; Malkani, Arthur L.; Sanders, Laura; Gregory, Sharon Allen; Dyer, Carmen; Heinsen, Jessica; Smith, Langan; Madanagopal, Sudhakar; Coupe, Kevin J.; Tucker, Jeffrey J.; Criswell, Allen R.; Buckle, Rosemary; Rechter, Alan Jeffrey; Sheth, Dhiren Shaskikant; Urquart, Brad; Trotscher, Thea; Anders, Mark J.; Kowalski, Joseph M.; Fineberg, Marc S.; Bone, Lawrence B.; Phillips, Matthew J.; Rohrbacher, Bernard; Stegemann, Philip; Mihalko, William M.; Buyea, Cathy; Augustine, Stephen J.; Jackson, William Thomas; Solis, Gregory; Ero, Sunday U.; Segina, Daniel N.; Berrey, Hudson B.; Agnew, Samuel G.; Fitzpatrick, Michael; Campbell, Lakina C.; Derting, Lynn; McAdams, June; Goslings, J. Carel; Ponsen, Kees Jan; Luitse, Jan; Kloen, Peter; Joosse, Pieter; Winkelhagen, Jasper; Duivenvoorden, Raphaël; Teague, David C.; Davey, Joseph; Sullivan, J. Andy; Ertl, William J. J.; Puckett, Timothy A.; Pasque, Charles B.; Tompkins, John F.; Gruel, Curtis R.; Kammerlocher, Paul; Lehman, Thomas P.; Puffinbarger, William R.; Carl, Kathy L.; Weber, Donald W.; Jomha, Nadr M.; Goplen, Gordon R.; Masson, Edward; Beaupre, Lauren A.; Greaves, Karen E.; Schaump, Lori N.; Jeray, Kyle J.; Goetz, David R.; Westberry, Davd E.; Broderick, J. Scott; Moon, Bryan S.; Tanner, Stephanie L.; Powell, James N.; Buckley, Richard E.; Elves, Leslie; Connolly, Stephen; Abraham, Edward P.; Eastwood, Donna; Steele, Trudy; Ellis, Thomas; Herzberg, Alex; Brown, George A.; Crawford, Dennis E.; Hart, Robert; Hayden, James; Orfaly, Robert M.; Vigland, Theodore; Vivekaraj, Maharani; Bundy, Gina L.; Miclau, Theodore; Matityahu, Amir; Coughlin, R. Richard; Kandemir, Utku; McClellan, R. Trigg; Lin, Cindy Hsin-Hua; Karges, David; Cramer, Kathryn; Watson, J. Tracy; Moed, Berton; Scott, Barbara; Beck, Dennis J.; Orth, Carolyn; Puskas, David; Clark, Russell; Jones, Jennifer; Egol, Kenneth A.; Paksima, Nader; France, Monet; Wai, Eugene K.; Johnson, Garth; Wilkinson, Ross; Gruszczynski, Adam T.; Vexler, Liisa
2013-01-01
Inadequate sample size and power in randomized trials can result in misleading findings. This study demonstrates the effect of sample size in a large clinical trial by evaluating the results of the Study to Prospectively evaluate Reamed Intramedullary Nails in Patients with Tibial fractures (SPRINT)
Current sample size conventions: Flaws, harms, and alternatives
Directory of Open Access Journals (Sweden)
Bacchetti Peter
2010-03-01
Full Text Available Abstract Background The belief remains widespread that medical research studies must have statistical power of at least 80% in order to be scientifically sound, and peer reviewers often question whether power is high enough. Discussion This requirement and the methods for meeting it have severe flaws. Notably, the true nature of how sample size influences a study's projected scientific or practical value precludes any meaningful blanket designation of value of information methods, simple choices based on cost or feasibility that have recently been justified, sensitivity analyses that examine a meaningful array of possible findings, and following previous analogous studies. To promote more rational approaches, research training should cover the issues presented here, peer reviewers should be extremely careful before raising issues of "inadequate" sample size, and reports of completed studies should not discuss power. Summary Common conventions and expectations concerning sample size are deeply flawed, cause serious harm to the research process, and should be replaced by more rational alternatives.
Brantsæter, Anne Lise; Knutsen, Helle Katrine; Johansen, Nina Cathrine; Nyheim, Kristine Aastad; Erlund, Iris; Meltzer, Helle Margrete; Henjum, Sigrun
2018-02-17
Inadequate iodine intake has been identified in populations considered iodine replete for decades. The objective of the current study is to evaluate urinary iodine concentration (UIC) and the probability of adequate iodine intake in subgroups of the Norwegian population defined by age, life stage and vegetarian dietary practice. In a cross-sectional survey, we assessed the probability of adequate iodine intake by two 24-h food diaries and UIC from two fasting morning spot urine samples in 276 participants. The participants included children ( n = 47), adolescents ( n = 46), adults ( n = 71), the elderly ( n = 23), pregnant women ( n = 45), ovo-lacto vegetarians ( n = 25), and vegans ( n = 19). In all participants combined, the median (95% CI) UIC was 101 (90, 110) µg/L, median (25th, 75th percentile) calculated iodine intake was 112 (77, 175) µg/day and median (25th, 75th percentile) estimated usual iodine intake was 101 (75, 150) µg/day. According to WHOs criteria for evaluation of median UIC, iodine intake was inadequate in the elderly, pregnant women, vegans and non-pregnant women of childbearing age. Children had the highest (82%) and vegans the lowest (14%) probability of adequate iodine intake according to reported food and supplement intakes. This study confirms the need for monitoring iodine intake and status in nationally representative study samples in Norway.
Directory of Open Access Journals (Sweden)
Anne Lise Brantsæter
2018-02-01
Full Text Available Inadequate iodine intake has been identified in populations considered iodine replete for decades. The objective of the current study is to evaluate urinary iodine concentration (UIC and the probability of adequate iodine intake in subgroups of the Norwegian population defined by age, life stage and vegetarian dietary practice. In a cross-sectional survey, we assessed the probability of adequate iodine intake by two 24-h food diaries and UIC from two fasting morning spot urine samples in 276 participants. The participants included children (n = 47, adolescents (n = 46, adults (n = 71, the elderly (n = 23, pregnant women (n = 45, ovo-lacto vegetarians (n = 25, and vegans (n = 19. In all participants combined, the median (95% CI UIC was 101 (90, 110 µg/L, median (25th, 75th percentile calculated iodine intake was 112 (77, 175 µg/day and median (25th, 75th percentile estimated usual iodine intake was 101 (75, 150 µg/day. According to WHOs criteria for evaluation of median UIC, iodine intake was inadequate in the elderly, pregnant women, vegans and non-pregnant women of childbearing age. Children had the highest (82% and vegans the lowest (14% probability of adequate iodine intake according to reported food and supplement intakes. This study confirms the need for monitoring iodine intake and status in nationally representative study samples in Norway.
Sample size in qualitative interview studies
DEFF Research Database (Denmark)
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane
2016-01-01
Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power...... depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning...
Basic Statistical Concepts for Sample Size Estimation
Directory of Open Access Journals (Sweden)
Vithal K Dhulkhed
2008-01-01
Full Text Available For grant proposals the investigator has to include an estimation of sample size .The size of the sample should be adequate enough so that there is sufficient data to reliably answer the research question being addressed by the study. At the very planning stage of the study the investigator has to involve the statistician. To have meaningful dialogue with the statistician every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations to help him assimilate the fundamentals.
Impact of shoe size in a sample of elderly individuals
Directory of Open Access Journals (Sweden)
Daniel López-López
Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and health overall. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data was recorded, the measurements of the size of the feet and footwear were determined and the scores compared between the group that wears the correct size of shoes and another group of individuals who do not wear the correct size of shoes, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples resulting statistically significant (p<0.05 for the dimension of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI.
Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests
Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.
2015-01-01
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…
Sample size tables for clinical studies
National Research Council Canada - National Science Library
Machin, David
2009-01-01
... with sample size software S S S , which we hope will give the user even greater ﬂexibility and easy access to a wide range of designs, and allow design parameters to be tailored more readily to speciﬁc problems. Further, as some early phase designs are adaptive in nature and require knowledge of earlier patients' response to determine t...
Predicting sample size required for classification performance
Directory of Open Access Journals (Sweden)
Figueroa Rosa L
2012-02-01
Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 to 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
Sample size for morphological traits of pigeonpea
Directory of Open Access Journals (Sweden)
Giovani Facco
2015-12-01
Full Text Available The objectives of this study were to determine the sample size (i.e., number of plants required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L. and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment were conducted for two growing seasons. In the first season (2011/2012, the seeds were sown by broadcast seeding, and in the second season (2012/2013, the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (e.g., number of nodes, plant height and stem diameter were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed and confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n was calculated for the semiamplitudes of the confidence interval (i.e., estimation error equal to 2, 4, 6, ..., 20% of the estimated mean with a confidence coefficient (1-? of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (e.g., number of nodes, plant height and stem diameter in the different evaluation periods and between seasons.
Determining sample size when assessing mean equivalence.
Asberg, Arne; Solem, Kristine B; Mikkelsen, Gustav
2014-11-01
When we want to assess whether two analytical methods are equivalent, we could test if the difference between the mean results is within the specification limits of 0 ± an acceptance criterion. Testing the null hypothesis of zero difference is less interesting, and so is the sample size estimation based on testing that hypothesis. Power function curves for equivalence testing experiments are not widely available. In this paper we present power function curves to help decide on the number of measurements when testing equivalence between the means of two analytical methods. Computer simulation was used to calculate the probability that the 90% confidence interval for the difference between the means of two analytical methods would exceed the specification limits of 0 ± 1, 0 ± 2 or 0 ± 3 analytical standard deviations (SDa), respectively. The probability of getting a nonequivalence alarm increases with increasing difference between the means when the difference is well within the specification limits. The probability increases with decreasing sample size and with smaller acceptance criteria. We may need at least 40-50 measurements with each analytical method when the specification limits are 0 ± 1 SDa, and 10-15 and 5-10 when the specification limits are 0 ± 2 and 0 ± 3 SDa, respectively. The power function curves provide information of the probability of false alarm, so that we can decide on the sample size under less uncertainty.
Fowler, Patrick J; Henry, David B; Schoeny, Michael; Landsverk, John; Chavira, Dina; Taylor, Jeremy J
2013-09-01
This study aimed to estimate the prevalence of inadequate housing that threaten out-of-home placement among families under investigation by child welfare. Data came from the National Survey of Child and Adolescent Well-Being, a nationally representative longitudinal survey of child welfare-involved families. Child protective services caseworkers as well as caregivers provided information on families whose child remained in the home after initial investigation (N = 3,867). Multilevel latent class analyses tested the presence of inadequately housed subgroups using 4 housing problem indicators at baseline. Logistic regressions assessed convergent and predictive validity. A two class latent solution best fit the data. Findings indicated that inadequate housing contributed to risk for out-of-home placement in approximately 16 % of intact families under investigation by child protective services. These families were 4 times more likely to need housing services 12 months later. Federal legislation emphasizes integration of social services as necessary to end homelessness. This study demonstrates overlap across public agencies. Enhanced coordination of child welfare and housing services facilitates interventions to prevent and mitigate homelessness.
Correlation of Scan and Sample Measurements Using Appropriate Sample Size
International Nuclear Information System (INIS)
Lux, Jeff
2008-01-01
, gamma count rates were elevated, but samples yielded background concentrations of thorium. Gamma scans tended to correlate with gamma exposure rate measurements. The lack of correlation between scan and sample data threatened to invalidate the characterization methodology, because neither method demonstrated reliability in identifying material which required excavation, shipment, and disposal. The NRC-approved site decommissioning plan required the excavation of any material that exceeded the Criteria based on either measurement. It was necessary to resolve the differences between the various types of measurements. Health Physics technicians selected 27 locations where the relationship between scan measurements and sample counts was highly variable. To determine if 'shine' was creating this data-correlation problem, they returned to those 27 locations to collect gamma count rates with a lead shielded NaI detector. Figure 2 shows that the shielded and unshielded count rates correlated fairly well for those locations. However, the technicians also noted the presence of 'tar balls' in this area. These small chunks of tarry material typically varied in size from 2 - 10 mm in diameters. Thorium-contaminated tars had apparently been disked into the soil to biodegrade the tar. The technicians evaluated the samples, and determined that the samples yielding higher activity contained more tar balls, and the samples yielding near background levels had fewer or none. The tar was collected for analysis, and its thorium activity varied from 2 - 3 Bq/g (60 - 90 pCi/g) total thorium. Since the sample mass was small, these small tar balls greatly impacted the sample activity. Technicians determined that the maximum particle size was less than 20 mm in diameter. Based on this maximum 'particle size', over one kilogram of sample would be required to minimize the impact of the tar balls on sample results. They returned to the same 27 locations and collected soil samples containing at least
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables/standard deviation of the outcome variable (for numerical variables, and the required precision (margin of accuracy from the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
the sample size decreases – a result that could be interpreted as a size effect in the order– disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size. Keywords.
Sample size determination in clinical trials with multiple endpoints
Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R
2015-01-01
This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, distribution of product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. The sample size tables for most encountered scenarios in practice have also been published for convenient use. Extensive simulations study showed that the distribution of the product method and bootstrapping method have superior performance to the Sobel's method, but the product method was recommended to use in practice in terms of less computation time load compared to the bootstrapping method. A R package has been developed for the product method of sample size determination in mediation longitudinal study design.
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and samples size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
The influence of sample size on the determination of population ...
African Journals Online (AJOL)
Reliable measures of population sizes of endangered and vulnerable species are difficult to achieve because of high variability in population sizes and logistic constraints on sample sizes, yet such measures are crucial for the determination of the success of conservation and management strategies aimed at curbing ...
Estimating population size with correlated sampling unit estimates
David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey
2003-01-01
Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on markârecapture or distance sampling methods occur...
Approaches to sample size determination for multivariate data
Saccenti, Edoardo; Timmerman, Marieke E.
2016-01-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical
Sample size computation for association studies using case–parents ...
Indian Academy of Sciences (India)
sample size for case–control association studies is discussed. Materials and methods. Parameter settings. We consider a candidate locus with two alleles A and a where. A is putatively associated with the disease status (increasing. Keywords. sample size; association tests; genotype relative risk; power; autism. Journal of ...
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multipler methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N1/2) or O(N∗1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
Negative binomial model has been increasingly used to model the count data in recent clinical trials. It is frequently chosen over Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying negative binomial model in clinical trial design is the sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
Sample size for estimating average productive traits of pigeon pea
Directory of Open Access Journals (Sweden)
Giovani Facco
2016-04-01
Full Text Available ABSTRACT: The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of the pigeon pea and to determine whether the sample size needed varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, and leaves and of shoots and the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. There was variability in the sample size between the productive traits and crop years of the pigeon pea culture. To estimate the averages of the productive traits with a 20% maximum estimation error and 95% confidence level, 70 plants are sufficient.
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...
Sample size computation for association studies using case–parents ...
Indian Academy of Sciences (India)
ple size needed to reach a given power (Knapp 1999; Schaid. 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a mul- tiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...
Sample size re-estimation in a breast cancer trial.
Hade, Erinn M; Jarjoura, David; Lai Wei
2010-06-01
During the recruitment phase of a randomized breast cancer trial, investigating the time to recurrence, we found a strong suggestion that the failure probabilities used at the design stage were too high. Since most of the methodological research involving sample size re-estimation has focused on normal or binary outcomes, we developed a method which preserves blinding to re-estimate sample size in our time to event trial. A mistakenly high estimate of the failure rate at the design stage may reduce the power unacceptably for a clinically important hazard ratio. We describe an ongoing trial and an application of a sample size re-estimation method that combines current trial data with prior trial data or assumes a parametric model to re-estimate failure probabilities in a blinded fashion. Using our current blinded trial data and additional information from prior studies, we re-estimate the failure probabilities to be used in sample size re-calculation. We employ bootstrap re-sampling to quantify uncertainty in the re-estimated sample sizes. At the time of re-estimation data from 278 patients were available, averaging 1.2 years of follow up. Using either method, we estimated a sample size increase of zero for the hazard ratio because the estimated failure probabilities at the time of re-estimation differed little from what was expected. We show that our method of blinded sample size re-estimation preserves the type I error rate. We show that when the initial guess of the failure probabilities are correct, the median increase in sample size is zero. Either some prior knowledge of an appropriate survival distribution shape or prior data is needed for re-estimation. In trials when the accrual period is lengthy, blinded sample size re-estimation near the end of the planned accrual period should be considered. In our examples, when assumptions about failure probabilities and HRs are correct the methods usually do not increase sample size or otherwise increase it by very
Sample size determination for equivalence assessment with multiple endpoints.
Sun, Anna; Dong, Xiaoyu; Tsong, Yi
2014-01-01
Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints and when the endpoints are uncorrelated, the power function is the production of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size for the naive method without and with correlation adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
Bayesian Sample Size Determination For The Accurate Identification ...
African Journals Online (AJOL)
Background & Aim: Sample size estimation is a major component of the design of virtually every experiment in biosciences. Microbiologists face a challenge when allocating resources to surveys designed to determine the sampling unit of bacterial strains of interest. In this study we derived a Bayesian approach with a ...
Directory of Open Access Journals (Sweden)
R. Eric Heidel
2016-01-01
Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Optimal sample size for probability of detection curves
International Nuclear Information System (INIS)
Annis, Charles; Gandossi, Luca; Martin, Oliver
2013-01-01
Highlights: • We investigate sample size requirement to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers, so to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POS curves is the appropriate sample size. Not
Directory of Open Access Journals (Sweden)
Franklin Obeng-Odoom
2011-01-01
Full Text Available Two themes are evident in housing research in Ghana. One involves the study of how to increase the number of dwellings to correct the overall housing deficit, and the other focuses on how to improve housing for slum dwellers. Between these two extremes, there is relatively little research on why the existing buildings are poorly maintained. This paper is based on a review of existing studies on inadequate housing. It synthesises the evidence on the possible reasons for this neglect, makes a case for better maintenance and analyses possible ways of reversing the problem of inadequate housing.
Approximate sample size calculations with microarray data: an illustration
Ferreira, José A.; Zwinderman, Aeilko
2006-01-01
We outline a method of sample size calculation in microarray experiments on the basis of pilot data and illustrate its practical application with both simulated and real data. The method was shown to be consistent (as the number of 'probed genes' tends to infinity) under general conditions in an
Sample size for collecting germplasms – a polyploid model with ...
Indian Academy of Sciences (India)
Unknown
Conservation; diploid; exploration; germplasm; inbreeding; polyploid; seeds ... A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate combination of number of plants and seeds per plant. ..... able saving of resources during collection and storage of.
Sample size computation for association studies using case–parents ...
Indian Academy of Sciences (India)
jgen/085/03/0187-0191. Keywords. sample size; association tests; genotype relative risk; power; autism. Author Affiliations. Najla Kharrat1 Imen Ayadi1 Ahmed Rebaï1. Unité de Biostatistique et de Bioinformatique, Centre de Biotechnologie de ...
Estimating wildlife activity curves: comparison of methods and sample size.
Lashley, Marcus A; Cove, Michael V; Chitwood, M Colter; Penido, Gabriel; Gardner, Beth; DePerno, Chris S; Moorman, Chris E
2018-03-08
Camera traps and radiotags commonly are used to estimate animal activity curves. However, little empirical evidence has been provided to validate whether they produce similar results. We compared activity curves from two common camera trapping techniques to those from radiotags with four species that varied substantially in size (~1 kg-~50 kg), diet (herbivore, omnivore, carnivore), and mode of activity (diurnal and crepuscular). Also, we sub-sampled photographs of each species with each camera trapping technique to determine the minimum sample size needed to maintain accuracy and precision of estimates. Camera trapping estimated greater activity during feeding times than radiotags in all but the carnivore, likely reflective of the close proximity of foods readily consumed by all species except the carnivore (i.e., corn bait or acorns). However, additional analyses still indicated both camera trapping methods produced relatively high overlap and correlation to radiotags. Regardless of species or camera trapping method, mean overlap increased and overlap error decreased rapidly as sample sizes increased until an asymptote near 100 detections which we therefore recommend as a minimum sample size. Researchers should acknowledge that camera traps and radiotags may estimate the same mode of activity but differ in their estimation of magnitude in activity peaks.
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests
Duncanson, L.; Rourke, O.; Dubayah, R.
2015-11-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation.
Determining sample size for assessing species composition in ...
African Journals Online (AJOL)
Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...
Research Note Pilot survey to assess sample size for herbaceous ...
African Journals Online (AJOL)
A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...
Blinded sample size re-estimation in crossover bioequivalence trials.
Golkowski, Daniel; Friede, Tim; Kieser, Meinhard
2014-01-01
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.
Sample size and power calculation for molecular biology studies.
Jung, Sin-Ho
2010-01-01
Sample size calculation is a critical procedure when designing a new biological study. In this chapter, we consider molecular biology studies generating huge dimensional data. Microarray studies are typical examples, so that we state this chapter in terms of gene microarray data, but the discussed methods can be used for design and analysis of any molecular biology studies involving high-dimensional data. In this chapter, we discuss sample size calculation methods for molecular biology studies when the discovery of prognostic molecular markers is performed by accurately controlling false discovery rate (FDR) or family-wise error rate (FWER) in the final data analysis. We limit our discussion to the two-sample case.
Aerosol Sampling Bias from Differential Electrostatic Charge and Particle Size
Jayjock, Michael Anthony
Lack of reliable epidemiological data on long term health effects of aerosols is due in part to inadequacy of sampling procedures and the attendant doubt regarding the validity of the concentrations measured. Differential particle size has been widely accepted and studied as a major potential biasing effect in the sampling of such aerosols. However, relatively little has been done to study the effect of electrostatic particle charge on aerosol sampling. The objective of this research was to investigate the possible biasing effects of differential electrostatic charge, particle size and their interaction on the sampling accuracy of standard aerosol measuring methodologies. Field studies were first conducted to determine the levels and variability of aerosol particle size and charge at two manufacturing facilities making acrylic powder. The field work showed that the particle mass median aerodynamic diameter (MMAD) varied by almost an order of magnitude (4-34 microns) while the aerosol surface charge was relatively stable (0.6-0.9 micro coulombs/m('2)). The second part of this work was a series of laboratory experiments in which aerosol charge and MMAD were manipulated in a 2('n) factorial design with the percentage of sampling bias for various standard methodologies as the dependent variable. The experiments used the same friable acrylic powder studied in the field work plus two size populations of ground quartz as a nonfriable control. Despite some ill conditioning of the independent variables due to experimental difficulties, statistical analysis has shown aerosol charge (at levels comparable to those measured in workroom air) is capable of having a significant biasing effect. Physical models consistent with the sampling data indicate that the level and bipolarity of the aerosol charge are determining factors in the extent and direction of the bias.
Sample size for detecting differentially expressed genes in microarray experiments
Directory of Open Access Journals (Sweden)
Li Jiangning
2004-11-01
Full Text Available Abstract Background Microarray experiments are often performed with a small number of biological replicates, resulting in low statistical power for detecting differentially expressed genes and concomitant high false positive rates. While increasing sample size can increase statistical power and decrease error rates, with too many samples, valuable resources are not used efficiently. The issue of how many replicates are required in a typical experimental system needs to be addressed. Of particular interest is the difference in required sample sizes for similar experiments in inbred vs. outbred populations (e.g. mouse and rat vs. human. Results We hypothesize that if all other factors (assay protocol, microarray platform, data pre-processing were equal, fewer individuals would be needed for the same statistical power using inbred animals as opposed to unrelated human subjects, as genetic effects on gene expression will be removed in the inbred populations. We apply the same normalization algorithm and estimate the variance of gene expression for a variety of cDNA data sets (humans, inbred mice and rats comparing two conditions. Using one sample, paired sample or two independent sample t-tests, we calculate the sample sizes required to detect a 1.5-, 2-, and 4-fold changes in expression level as a function of false positive rate, power and percentage of genes that have a standard deviation below a given percentile. Conclusions Factors that affect power and sample size calculations include variability of the population, the desired detectable differences, the power to detect the differences, and an acceptable error rate. In addition, experimental design, technical variability and data pre-processing play a role in the power of the statistical tests in microarrays. We show that the number of samples required for detecting a 2-fold change with 90% probability and a p-value of 0.01 in humans is much larger than the number of samples commonly used in
Development of sample size allocation program using hypergeometric distribution
International Nuclear Information System (INIS)
Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik
1996-01-01
The objective of this research is the development of sample allocation program using hypergeometric distribution with objected-oriented method. When IAEA(International Atomic Energy Agency) performs inspection, it simply applies a standard binomial distribution which describes sampling with replacement instead of a hypergeometric distribution which describes sampling without replacement in sample allocation to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material, therefore game theory is applied to its sampling plan. It is necessary to use hypergeometric distribution directly or approximate distribution to secure statistical accuracy. Improved binomial approximation developed by Mr. J. L. Jaech and correctly applied binomial approximation are more closer to hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs of 1. sample approximate-allocation with correctly applied standard binomial approximation, 2. sample approximate-allocation with improved binomial approximation, and 3. sample approximate-allocation with hypergeometric distribution were developed with Visual C ++ and corresponding programs were developed with EXCEL(using Visual Basic for Application). 8 tabs., 15 refs. (Author)
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological variation...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests
Duncanson, L.; Rourke, O.; Dubayah, R.
2015-01-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height a...
Simple and multiple linear regression: sample size considerations.
Hanley, James A
2016-11-01
The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright Â© 2016 Elsevier Inc. All rights reserved.
Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-05-01
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Sample size for monitoring sirex populations and their natural enemies
Directory of Open Access Journals (Sweden)
Susete do Rocio Chiarello Penteado
2016-09-01
Full Text Available The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread to about 1.000.000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae. The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size to monitor the S. noctilio population and the efficiency of their natural enemies, which was found to be perfectly adequate.
Luo, Maoyi; Xing, Shan; Yang, Yonggang; Song, Lijuan; Ma, Yan; Wang, Yadong; Dai, Xiongxin; Happel, Steffen
2018-07-01
There is a growing demand for the determination of actinides in soil and sediment samples for environmental monitoring and tracing, radiological protection, and nuclear forensic reasons. A total sample dissolution method based on lithium metaborate fusion, followed by sequential column chromatography separation, was developed for simultaneous determination of Pu, Am and Cm isotopes in large-size environmental samples by alpha spectrometry and mass spectrometric techniques. The overall recoveries of both Pu and Am for the entire procedure were higher than 70% for large-size soil samples. The method was validated using 20 g of soil samples spiked with known amounts of 239 Pu and 241 Am as well as the certified reference materials IAEA-384 (Fangataufa Lagoon sediment) and IAEA-385 (Irish Sea sediment). All the measured results agreed very well with the expected values. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimal Sample Size for Probability of Detection Curves
International Nuclear Information System (INIS)
Annis, Charles; Gandossi, Luca; Martin, Oliver
2012-01-01
The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
Sample Size of One: Operational Qualitative Analysis in the Classroom
Directory of Open Access Journals (Sweden)
John Hoven
2015-10-01
Full Text Available Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one. These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.
Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin
2014-01-01
This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Energy Technology Data Exchange (ETDEWEB)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can used the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) were also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Directory of Open Access Journals (Sweden)
Suzan Gunduz
2016-01-01
Conclusion: Inadequate vitamin D and poor sleep quality are prevalent in pregnant women, but low levels of vitamin D are not associated with poor sleep quality. Further studies with larger sample sizes and studies that include preterm deliveries and special sleep disorders should be performed to understand this issue better.
Sample size and power calculation for univariate case in quantile regression
Yanuar, Ferra
2018-01-01
The purpose of this study is to calculate the statistical power and sample size in simple linear regression model based on quantile approach. The statistical theoretical framework isthen implemented to generate data using R. For any given covariate and regression coefficient, we generate a random variable and error. There are two conditions for error distributions here; normal and nonnormal distribution. This study resulted that for normal error term, sample size is large if the effect size is small. Meanwhile, the level of statistical power is also affected by effect size, the more effect size the more level of power. For nonnormal error terms, it isn’t recommended using small effect size, moderate effect size unless sample size more than 320 and large effect size unless sample size more than 160 because it resulted in lower statistical power.
Bill, Anthony; Henderson, Sally; Penman, John
2010-01-01
Two test items that examined high school students' beliefs of sample size for large populations using the context of opinion polls conducted prior to national and state elections were developed. A trial of the two items with 21 male and 33 female Year 9 students examined their naive understanding of sample size: over half of students chose a…
Determination of boron concentration in biopsy-sized tissue samples
International Nuclear Information System (INIS)
Hou, Yougjin; Fong, Katrina; Edwards, Benjamin; Autry-Conwell, Susan; Boggan, James
2000-01-01
Inductively coupled plasma mass spectrometry (ICP-MS) is the most sensitive analytical method for boron determination. However, because boron is volatile and ubiquitous in nature, low-concentration boron sample measurement remains a challenge. In this study, an improved ICP-MS method was developed for quantitation of tissue samples with low (less than 10 ppb) and high (100 ppb) boron concentrations. The addition of an ammonia-mannitol solution converts volatile boric acid to the non-volatile ammonium borate in the spray chamber and with the formation of a boron-mannitol complex, the boron memory effect and background are greatly reduced. This results in measurements that are more accurate, repeatable, and efficient. This improved analysis method has facilitated rapid and reliable tissue biodistribution analyses of newly developed boronated compounds for potential use in neutron capture therapy. (author)
Sample size reduction in groundwater surveys via sparse data assimilation
Hussain, Z.
2013-04-01
In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even lesser measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.
Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !
van Breukelen, Gerard J.P.; Candel, Math J.J.M.
2012-01-01
Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
Structured estimation - Sample size reduction for adaptive pattern classification
Morgera, S.; Cooper, D. B.
1977-01-01
The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.
Sample size reassessment for a two-stage design controlling the false discovery rate.
Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin
2015-11-01
Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first stage sample sizes. The false discovery rate (FDR) is controlled despite of the data dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use dependent on length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2 parameter distribution is recommended, and for more than 50 years of data, a 3 parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson II, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to r-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
Sample Size Requirements for Discrete-Choice Experiments in Healthcare: a Practical Guide.
de Bekker-Grob, Esther W; Donkers, Bas; Jonker, Marcel F; Stolk, Elly A
2015-10-01
Discrete-choice experiments (DCEs) have become a commonly used instrument in health economics and patient-preference analysis, addressing a wide range of policy questions. An important question when setting up a DCE is the size of the sample needed to answer the research question of interest. Although theory exists as to the calculation of sample size requirements for stated choice data, it does not address the issue of minimum sample size requirements in terms of the statistical power of hypothesis tests on the estimated coefficients. The purpose of this paper is threefold: (1) to provide insight into whether and how researchers have dealt with sample size calculations for healthcare-related DCE studies; (2) to introduce and explain the required sample size for parameter estimates in DCEs; and (3) to provide a step-by-step guide for the calculation of the minimum sample size requirements for DCEs in health care.
Issues of sample size in sensitivity and specificity analysis with special reference to oncology
Directory of Open Access Journals (Sweden)
Atul Juneja
2015-01-01
Full Text Available Sample size is one of the basics issues, which medical researcher including oncologist faces with any research program. The current communication attempts to discuss the computation of sample size when sensitivity and specificity are being evaluated. The article intends to present the situation that the researcher could easily visualize for appropriate use of sample size techniques for sensitivity and specificity when any screening method for early detection of cancer is in question. Moreover, the researcher would be in a position to efficiently communicate with a statistician for sample size computation and most importantly applicability of the results under the conditions of the negotiated precision.
Consequences of Inadequate Physical Activity
Centers for Disease Control (CDC) Podcasts
2018-03-27
Listen as CDC Epidemiologist Susan Carlson, PhD, talks about her research, which estimates the percentage of US deaths attributed to inadequate levels of physical activity. Created: 3/27/2018 by Preventing Chronic Disease (PCD), National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP). Date Released: 3/27/2018.
Radiologists' responses to inadequate referrals
International Nuclear Information System (INIS)
Lysdahl, Kristin Bakke; Hofmann, Bjoern Morten; Espeland, Ansgar
2010-01-01
To investigate radiologists' responses to inadequate imaging referrals. A survey was mailed to Norwegian radiologists; 69% responded. They graded the frequencies of actions related to referrals with ambiguous indications or inappropriate examination choices and the contribution of factors preventing and not preventing an examination of doubtful usefulness from being performed as requested. Ninety-five percent (344/361) reported daily or weekly actions related to inadequate referrals. Actions differed among subspecialties. The most frequent were contacting the referrer to clarify the clinical problem and checking test results/information in the medical records. Both actions were more frequent among registrars than specialists and among hospital radiologists than institute radiologists. Institute radiologists were more likely to ask the patient for additional information and to examine the patient clinically. Factors rated as contributing most to prevent doubtful examinations were high risk of serious complications/side effects, high radiation dose and low patient age. Factors facilitating doubtful examinations included respect for the referrer's judgment, patient/next-of-kin wants the examination, patient has arrived, unreachable referrer, and time pressure. In summary, radiologists facing inadequate referrals considered patient safety and sought more information. Vetting referrals on arrival, easier access to referring clinicians, and time for radiologists to handle inadequate referrals may contribute to improved use of imaging. (orig.)
CT dose survey in adults: what sample size for what precision?
Energy Technology Data Exchange (ETDEWEB)
Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)
2017-01-15
To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample size ensuring CI95/med ≤ 10 %, ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times its actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
A new model to describe the relationship between species richness and sample size
Directory of Open Access Journals (Sweden)
WenJun Zhang
2017-03-01
Full Text Available In the sampling of species richness, the number of newly found species declines as increase of sample size, and the number of distinct species tends to an upper asymptote as sample size tends to the infinity. This leads to a curve of species richness vs. sample size. In present study, I follow my principle proposed earlier (Zhang, 2016, and re-develop the model, y=K(1-e^(-rx/K, for describing the relationship between species richness (y and sample size (x, where K is the expected total number of distinct species, and r is the maximum variation of species richness per sample size (i.e., max dy/dx. Computer software and codes were given.
Impact of Sample Size on the Performance of Multiple-Model Pharmacokinetic Simulations▿
Tam, Vincent H.; Kabbara, Samer; Yeh, Rosa F.; Leary, Robert H.
2006-01-01
Monte Carlo simulations are increasingly used to predict pharmacokinetic variability of antimicrobials in a population. We investigated the sample size necessary to provide robust pharmacokinetic predictions. To obtain reasonably robust predictions, a nonparametric model derived from a sample population size of ≥50 appears to be necessary as the input information.
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Calculating Sample Size for NYTD Follow-Up Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... REQUIREMENTS APPLICABLE TO TITLE IV-E Pt. 1356, App. C Appendix C to Part 1356—Calculating Sample Size for NYTD...
Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.
Algina, James; Moulder, Bradley C.; Moser, Barry K.
2002-01-01
Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (p squared); (2) the population increase in p squared; and (3) the…
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
Implications of sampling design and sample size for national carbon accounting systems.
Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel
2011-11-08
Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests the information is generally obtained by sample based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, as well as population variability and the percent standard error over total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV-systems for REDD need to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related with the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
Sexual maturity, fecundity and egg size of wild and cultured samples of Bagrus bayad macropterus
Tsadu, S.M.; Lamai, S.L.; Oladimeji, A.A.
2003-01-01
Twenty four matured samples of Bagrus bayad macropterus from the wild (Shiroro Lake, Nigeria) and under captivity, size ranging from 412.69-3300.00 g total body weight, were analysed for sexual maturity,fecundity and egg size. The average fecundity obtained were 53352.59 and 21028.32 eggs for the wild and cultured fish respectively.Positive relationship was observed between fecundity, body size and gonad weight. Fecundity increased as body size increased. A more positive and linear relatio...
Bouman, A C; ten Cate-Hoek, A J; Ramaekers, B L T; Joore, M A
2015-01-01
Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages however are not included in the classic frequentist approach of sample size calculation for non-inferiority trials. In contrast, the decision theory approach of sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the frequentist approach and decision theory approach of sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation of non-inferiority trials. The frequentist approach and decision theory approach of sample size calculation for non-inferiority trials are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy compared to two years elastic compression stocking therapy for the prevention of post thrombotic syndrome after deep vein thrombosis. The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The sample size calculated according to the frequentist approach yielded 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million. This study demonstrates and explains the differences between the classic frequentist approach and the decision theory approach of sample size calculation for non-inferiority trials. We argue that the decision theory approach of sample size estimation is most suitable for sample size calculation of non-inferiority trials.
Placzek, Marius; Friede, Tim
2017-01-01
The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for the analysis and sample size determination. First, we consider the joint distribution of standardized test statistics that correspond to each (sub)population. We derive multivariate exact distributions where possible, providing approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainties about nuisance parameters which are needed for sample size calculations make the study prone to misspecifications. We discuss how a sample size review can be performed in order to make the study more robust. To this end, we implement an internal pilot study design where the variances and prevalences of the subgroups are reestimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not inflate the type I error significantly and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and attain a lower boundary for the size of the internal pilot study.
Two-sample binary phase 2 trials with low type I error and low sample size.
Litwin, Samuel; Basickes, Stanley; Ross, Eric A
2017-04-30
We address design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p 0 and alternative that it is p 0 among controls and p 1 > p 0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E⩾m, with two-sample rules of the form E - C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Chaibub Neto, Elias
2015-01-01
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R
2008-08-01
The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.
On sample size estimation and re-estimation adjusting for variability in confirmatory trials.
Wu, Pei-Shien; Lin, Min; Chow, Shein-Chung
2016-01-01
Sample size estimation (SSE) is an important issue in the planning of clinical studies. While larger studies are likely to have sufficient power, it may be unethical to expose more patients than necessary to answer a scientific question. Budget considerations may also cause one to limit the study to an adequate size to answer the question at hand. Typically at the planning stage, a statistically based justification for sample size is provided. An effective sample size is usually planned under a pre-specified type I error rate, a desired power under a particular alternative and variability associated with the observations recorded. The nuisance parameter such as the variance is unknown in practice. Thus, information from a preliminary pilot study is often used to estimate the variance. However, calculating the sample size based on the estimated nuisance parameter may not be stable. Sample size re-estimation (SSR) at the interim analysis may provide an opportunity to re-evaluate the uncertainties using accrued data and continue the trial with an updated sample size. This article evaluates a proposed SSR method based on controlling the variability of nuisance parameter. A numerical study is used to assess the performance of proposed method with respect to the control of type I error. The proposed method and concepts could be extended to SSR approaches with respect to other criteria, such as maintaining effect size, achieving conditional power, and reaching a desired reproducibility probability.
The PowerAtlas: a power and sample size atlas for microarray experimental design and research
Directory of Open Access Journals (Sweden)
Wang Jelai
2006-02-01
Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed a Microrarray PowerAtlas 1. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO. The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC. Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing
Directory of Open Access Journals (Sweden)
Thomaz C. e C. da Costa
2004-12-01
Full Text Available Landuse/landcover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices using reference samples. The size of the reference sample is defined by approximation by a binomial function without the use of a pilot sample. This way the accuracy are not estimated, but fixed a priori. In case of divergency between the estimated and a priori accuracy the error of the sampling will deviate from the expected error. The size using pilot sample (theorically correct procedure justify when haven´t estimate of accuracy for work area, referent the product remote sensing utility.
Kikuchi, Takashi; Gittins, John
2011-08-01
The behavioural Bayes approach to sample size determination for clinical trials assumes that the number of subsequent patients switching to a new drug from the current drug depends on the strength of the evidence for efficacy and safety that was observed in the clinical trials. The optimal sample size is the one which maximises the expected net benefit of the trial. The approach has been developed in a series of papers by Pezeshk and the present authors (Gittins JC, Pezeshk H. A behavioral Bayes method for determining the size of a clinical trial. Drug Information Journal 2000; 34: 355-63; Gittins JC, Pezeshk H. How Large should a clinical trial be? The Statistician 2000; 49(2): 177-87; Gittins JC, Pezeshk H. A decision theoretic approach to sample size determination in clinical trials. Journal of Biopharmaceutical Statistics 2002; 12(4): 535-51; Gittins JC, Pezeshk H. A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 2002; 36: 143-50; Kikuchi T, Pezeshk H, Gittins J. A Bayesian cost-benefit approach to the determination of sample size in clinical trials. Statistics in Medicine 2008; 27(1): 68-82; Kikuchi T, Gittins J. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety. Statistics in Medicine 2009; 28(18): 2293-306; Kikuchi T, Gittins J. A Bayesian procedure for cost-benefit evaluation of a new drug in multi-national clinical trials. Statistics in Medicine 2009 (Submitted)). The purpose of this article is to provide a rationale for experimental designs which allocate more patients to the new treatment than to the control group. The model uses a logistic weight function, including an interaction term linking efficacy and safety, which determines the number of patients choosing the new drug, and hence the resulting benefit. A Monte Carlo simulation is employed for the calculation. Having a larger group of patients on the new drug in general
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 with each BioOptics, CSO, Konan, and Topcon corneal specular microscopes. All endothelial image data were analyzed by respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
Sample size adjustment designs with time-to-event outcomes: A caution.
Freidlin, Boris; Korn, Edward L
2017-12-01
Sample size adjustment designs, which allow increasing the study sample size based on interim analysis of outcome data from a randomized clinical trial, have been increasingly promoted in the biostatistical literature. Although it is recognized that group sequential designs can be at least as efficient as sample size adjustment designs, many authors argue that a key advantage of these designs is their flexibility; interim sample size adjustment decisions can incorporate information and business interests external to the trial. Recently, Chen et al. (Clinical Trials 2015) considered sample size adjustment applications in the time-to-event setting using a design (CDL) that limits adjustments to situations where the interim results are promising. The authors demonstrated that while CDL provides little gain in unconditional power (versus fixed-sample-size designs), there is a considerable increase in conditional power for trials in which the sample size is adjusted. In time-to-event settings, sample size adjustment allows an increase in the number of events required for the final analysis. This can be achieved by either (a) following the original study population until the additional events are observed thus focusing on the tail of the survival curves or (b) enrolling a potentially large number of additional patients thus focusing on the early differences in survival curves. We use the CDL approach to investigate performance of sample size adjustment designs in time-to-event trials. Through simulations, we demonstrate that when the magnitude of the true treatment effect changes over time, interim information on the shape of the survival curves can be used to enrich the final analysis with events from the time period with the strongest treatment effect. In particular, interested parties have the ability to make the end-of-trial treatment effect larger (on average) based on decisions using interim outcome data. Furthermore, in "clinical null" cases where there is no
Bice, K.; Clement, S. C.
1981-01-01
X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology.
Brown, Caleb Marshall; Vavrek, Matthew J
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes.
[Inadequate treatment of affective disorders].
Bergsholm, P; Martinsen, E W; Holsten, F; Neckelmann, D; Aarre, T F
1992-08-30
Inadequate treatment of mood (affective) disorders is related to the mind/body dualism, desinformation about methods of treatment, the stigma of psychiatry, low funding of psychiatric research, low educational priority, and slow acquisition of new knowledge of psychiatry. The "respectable minority rule" has often been accepted without regard to the international expertise, and the consequences of undertreatment have not been weighed against the benefits of optimal treatment. The risk of chronicity increases with delayed treatment, and inadequately treated affective disorders are a leading cause of suicide. During the past 20 years the increase in suicide mortality in Norway has been the second largest in the world. Severe mood disorders are often misclassified as schizophrenia or other non-affective psychoses. Atypical mood disorders, notably rapid cycling and bipolar mixed states, are often diagnosed as personality, adjustment, conduct, attention deficit, or anxiety disorders, and even mental retardation. Neuroleptic drugs may suppress the most disturbing features of mood disorders, a fact often misinterpreted as supporting the diagnosis of a schizophrenia-like disorder. Treatment with neuroleptics is not sufficient, however, and serious side effects may often occur. The consequences are too often social break-down and post-depression syndrome.
Sample Size Determination for Estimation of Sensor Detection Probabilities Based on a Test Variable
National Research Council Canada - National Science Library
Oymak, Okan
2007-01-01
.... Army Yuma Proving Ground. Specifically, we evaluate the coverage probabilities and lengths of widely used confidence intervals for a binomial proportion and report the required sample sizes for some specified goals...
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
Estimating sample size for a small-quadrat method of botanical ...
African Journals Online (AJOL)
... in eight plant communities in the Nylsvley Nature Reserve. Illustrates with a table. Keywords: Botanical surveys; Grass density; Grasslands; Mixed Bushveld; Nylsvley Nature Reserve; Quadrat size species density; Small-quadrat method; Species density; Species richness; botany; sample size; method; survey; south africa
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Sample size for equivalence trials: a case study from a vaccine lot consistency trial.
Ganju, Jitendra; Izu, Allen; Anemona, Alessandra
2008-08-30
For some trials, simple but subtle assumptions can have a profound impact on the size of the trial. A case in point is a vaccine lot consistency (or equivalence) trial. Standard sample size formulas used for designing lot consistency trials rely on only one component of variation, namely, the variation in antibody titers within lots. The other component, the variation in the means of titers between lots, is assumed to be equal to zero. In reality, some amount of variation between lots, however small, will be present even under the best manufacturing practices. Using data from a published lot consistency trial, we demonstrate that when the between-lot variation is only 0.5 per cent of the total variation, the increase in the sample size is nearly 300 per cent when compared with the size assuming that the lots are identical. The increase in the sample size is so pronounced that in order to maintain power one is led to consider a less stringent criterion for demonstration of lot consistency. The appropriate sample size formula that is a function of both components of variation is provided. We also discuss the increase in the sample size due to correlated comparisons arising from three pairs of lots as a function of the between-lot variance.
Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.
2018-04-01
Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.
2011-01-01
The logistic regression models has been widely used in the social and natural sciences and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...
Liu, P T
2001-04-01
The conventional sample-size equations based on either the precision of estimation or the power of testing a hypothesis may not be appropriate to determine sample size for a "diagnostic" testing problem, such as the eye irritant Draize test. When the animals' responses to chemical compounds are relatively uniform and extreme and the objective is to classify a compound as either irritant or nonirritant, the test using just two or three animals may be adequate.
Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size
ALWI, IDRUS
2011-01-01
The aims of this research is to study the sensitivity comparison of Mantel Haenszel and Rasch Model for detection differential item functioning, observed from the sample size. These two differential item functioning (DIF) methods were compared using simulate binary item respon data sets of varying sample size, 200 and 400 examinees were used in the analyses, a detection method of differential item functioning (DIF) based on gender difference. These test conditions were replication 4 tim...
Sample size re-estimation incorporating prior information on a nuisance parameter.
Mütze, Tobias; Schmidli, Heinz; Friede, Tim
2017-11-27
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach on how to incorporate prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics such as power and sample size distribution of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering to include prior information in sample size re-estimation, the potential gains should be balanced against the risks. Copyright © 2017 John Wiley & Sons, Ltd.
Sample size choices for XRCT scanning of highly unsaturated soil mixtures
Directory of Open Access Journals (Sweden)
Smith Jonathan C.
2016-01-01
Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT is becoming more widely used to investigate the microstructures of soils, however a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans on sample sizing for scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high resolution scanning, We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.
Exact Power and Sample Size Calculations for the Two One-Sided Tests of Equivalence.
Directory of Open Access Journals (Sweden)
Gwowen Shieh
Full Text Available Equivalent testing has been strongly recommended for demonstrating the comparability of treatment effects in a wide variety of research fields including medical studies. Although the essential properties of the favorable two one-sided tests of equivalence have been addressed in the literature, the associated power and sample size calculations were illustrated mainly for selecting the most appropriate approximate method. Moreover, conventional power analysis does not consider the allocation restrictions and cost issues of different sample size choices. To extend the practical usefulness of the two one-sided tests procedure, this article describes exact approaches to sample size determinations under various allocation and cost considerations. Because the presented features are not generally available in common software packages, both R and SAS computer codes are presented to implement the suggested power and sample size computations for planning equivalence studies. The exact power function of the TOST procedure is employed to compute optimal sample sizes under four design schemes allowing for different allocation and cost concerns. The proposed power and sample size methodology should be useful for medical sciences to plan equivalence studies.
Exact Power and Sample Size Calculations for the Two One-Sided Tests of Equivalence
Shieh, Gwowen
2016-01-01
Equivalent testing has been strongly recommended for demonstrating the comparability of treatment effects in a wide variety of research fields including medical studies. Although the essential properties of the favorable two one-sided tests of equivalence have been addressed in the literature, the associated power and sample size calculations were illustrated mainly for selecting the most appropriate approximate method. Moreover, conventional power analysis does not consider the allocation restrictions and cost issues of different sample size choices. To extend the practical usefulness of the two one-sided tests procedure, this article describes exact approaches to sample size determinations under various allocation and cost considerations. Because the presented features are not generally available in common software packages, both R and SAS computer codes are presented to implement the suggested power and sample size computations for planning equivalence studies. The exact power function of the TOST procedure is employed to compute optimal sample sizes under four design schemes allowing for different allocation and cost concerns. The proposed power and sample size methodology should be useful for medical sciences to plan equivalence studies. PMID:27598468
A margin based approach to determining sample sizes via tolerance bounds.
Energy Technology Data Exchange (ETDEWEB)
Newcomer, Justin T.; Freeland, Katherine Elizabeth
2013-09-01
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
Hu, Youna; Song, Peter X-K
2012-04-13
Quadratic inference functions (QIF) methodology is an important alternative to the generalized estimating equations (GEE) method in the longitudinal marginal model, as it offers higher estimation efficiency than the GEE when correlation structure is misspecified. The focus of this paper is on sample size determination and power calculation for QIF based on the Wald test in a marginal logistic model with covariates of treatment, time, and treatment-time interaction. We have made three contributions in this paper: (i) we derived formulas of sample size and power for QIF and compared their performance with those given by the GEE; (ii) we proposed an optimal scheme of sample size determination to overcome the difficulty of unknown true correlation matrix in the sense of minimal average risk; and (iii) we studied properties of both QIF and GEE sample size formulas in relation to the number of follow-up visits and found that the QIF gave more robust sample sizes than the GEE. Using numerical examples, we illustrated that without sacrificing statistical power, the QIF design leads to sample size saving and hence lower study cost in comparison with the GEE analysis. We conclude that the QIF analysis is appealing for longitudinal studies. Copyright © 2012 John Wiley & Sons, Ltd.
Guo, Jiin-Huarng; Chen, Hubert J; Luh, Wei-Ming
2011-11-01
The allocation of sufficient participants into different experimental groups for various research purposes under given constraints is an important practical problem faced by researchers. We address the problem of sample size determination between two independent groups for unequal and/or unknown variances when both the power and the differential cost are taken into consideration. We apply the well-known Welch approximate test to derive various sample size allocation ratios by minimizing the total cost or, equivalently, maximizing the statistical power. Two types of hypotheses including superiority/non-inferiority and equivalence of two means are each considered in the process of sample size planning. A simulation study is carried out and the proposed method is validated in terms of Type I error rate and statistical power. As a result, the simulation study reveals that the proposed sample size formulas are very satisfactory under various variances and sample size allocation ratios. Finally, a flowchart, tables, and figures of several sample size allocations are presented for practical reference. ©2011 The British Psychological Society.
Blinded sample size re-estimation for recurrent event data with time trends.
Schneider, S; Schmidli, H; Friede, T
2013-12-30
The use of an internal pilot study for blinded sample size re-estimation (BSSR) allows to reduce uncertainty on the appropriate sample size compared with conventional fixed sample size designs. Recently BSSR procedures for recurrent event data were proposed and investigated. These approaches assume treatment-specific constant event rates that might not always be appropriate as found in relapsing multiple sclerosis. On the basis of a proportional intensity frailty model, we propose methods for BSSR in situations where a time trend of the event rates is present. For the sample size planning and the final analysis standard negative binomial methods can be used, as long as the patient follow-up time is approximately equal in the treatment groups. To re-estimate the sample size at interim, however, a full likelihood analysis is necessary. Operating characteristics such as rejection probabilities and sample size distribution are evaluated in a simulation study motivated by a systematic review in relapsing multiple sclerosis. The key factors affecting the operating characteristics are the study duration and the length of the recruitment period. The proposed procedure for BSSR controls the type I error rate and maintains the desired power against misspecifications of the nuisance parameters. Copyright © 2013 John Wiley & Sons, Ltd.
A type of sample size design in cancer clinical trials for response rate estimation.
Liu, Junfeng
2011-01-01
During the early stage of cancer clinical trials, when it is not convenient to construct an explicit hypothesis testing, a study on a new therapy often calls for a response rate (p) estimation concurrently with or right before a typical phase II study. We consider a two-stage process, where the acquired information from Stage I (with a small sample size (m)) would be utilized for sample size (n) recommendation for Stage II study aiming for a more accurate estimation. Once a sample size design and a parameter estimation protocol are applied, we study the overall utility (cost-effectiveness) in connection with the cost due to patient recruitment and treatment as well as the loss due to mean squared error from parameter estimation. Two approaches will be investigated including the posterior mixture method (a Bayesian approach) and the empirical variance method (a frequentist approach). We also discuss response rate estimation under truncated parameter space using maximum likelihood estimation with regard to sample size and mean squared error. The profiles of p-specific expected sample size, mean squared error and risk under different approaches motivate us to introduce the concept of "admissible sample size (design)". Copyright © 2010 Elsevier Inc. All rights reserved.
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
Optimum sample size to estimate mean parasite abundance in fish parasite surveys
Directory of Open Access Journals (Sweden)
Shvydka S.
2018-03-01
Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying mathematical formula with predetermined precision level and parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination aimed at the estimation of true value of the mean abundance and its confidence interval (CI was based on the Bag of Little Bootstraps (BLB. The abundance of two species of monogenean parasites Ligophorus cephali and L. mediterraneus from Mugil cephalus across the Azov-Black Seas localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods in searching for the optimum sample size and the understanding about the expected precision level of the mean. Given the superior performance of the BLB relative to formulae with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: i based on CIs width a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5 with 1.6 and 1x mean of the CIs width, and ii the sample size equal 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile for the host sample size in range between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution skewed to the low values; a sample size of 10 host individuals yielded to unreliable estimates.
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as it is expected because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Rambo, Robert P
2017-01-01
The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.
SMALL SAMPLE SIZE IN 2X2 CROSS OVER DESIGNS: CONDITIONS OF DETERMINATION
Directory of Open Access Journals (Sweden)
B SOLEYMANI
2001-09-01
Full Text Available Introduction. Determination of small sample size in some clinical trials is a matter of importance. In cross-over studies which are one types of clinical trials, the matter is more significant. In this article, the conditions in which determination of small sample size in cross-over studies are possible were considered, and the effect of deviation from normality on the matter has been shown. Methods. The present study has been done on such 2x2 cross-over studies that variable of interest is quantitative one and is measurable by ratio or interval scale. The method of consideration is based on use of variable and sample mean"s distributions, central limit theorem, method of sample size determination in two groups, and cumulant or moment generating function. Results. In normal variables or transferable to normal variables, there is no restricting factors other than significant level and power of the test for determination of sample size, but in the case of non-normal variables, it should be determined such large that guarantee the normality of sample mean"s distribution. Discussion. In such cross over studies that because of existence of theoretical base, few samples can be computed, one should not do it without taking applied worth of results into consideration. While determining sample size, in addition to variance, it is necessary to consider distribution of variable, particularly through its skewness and kurtosis coefficients. the more deviation from normality, the more need of samples. Since in medical studies most of the continuous variables are closed to normal distribution, a few number of samples often seems to be adequate for convergence of sample mean to normal distribution.
Page sample size in web accessibility testing: how many pages is enough?
Velleman, Eric Martin; van der Geest, Thea
2013-01-01
Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...
Generating Random Samples of a Given Size Using Social Security Numbers.
Erickson, Richard C.; Brauchle, Paul E.
1984-01-01
The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
A Bayesian predictive sample size selection design for single-arm exploratory clinical trials.
Teramukai, Satoshi; Daimon, Takashi; Zohar, Sarah
2012-12-30
The aim of an exploratory clinical trial is to determine whether a new intervention is promising for further testing in confirmatory clinical trials. Most exploratory clinical trials are designed as single-arm trials using a binary outcome with or without interim monitoring for early stopping. In this context, we propose a Bayesian adaptive design denoted as predictive sample size selection design (PSSD). The design allows for sample size selection following any planned interim analyses for early stopping of a trial, together with sample size determination before starting the trial. In the PSSD, we determine the sample size using the method proposed by Sambucini (Statistics in Medicine 2008; 27:1199-1224), which adopts a predictive probability criterion with two kinds of prior distributions, that is, an 'analysis prior' used to compute posterior probabilities and a 'design prior' used to obtain prior predictive distributions. In the sample size determination of the PSSD, we provide two sample sizes, that is, N and N(max) , using two types of design priors. At each interim analysis, we calculate the predictive probabilities of achieving a successful result at the end of the trial using the analysis prior in order to stop the trial in case of low or high efficacy (Lee et al., Clinical Trials 2008; 5:93-106), and we select an optimal sample size, that is, either N or N(max) as needed, on the basis of the predictive probabilities. We investigate the operating characteristics through simulation studies, and the PSSD retrospectively applies to a lung cancer clinical trial. (243) Copyright © 2012 John Wiley & Sons, Ltd.
Constrained statistical inference: sample-size tables for ANOVA and regression.
Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves
2014-01-01
Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction, when an increasing number of constraints is included into the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample-size at a pre-specified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample-size decreases with 30-50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., β1 > β2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., β1 > 0).
Directory of Open Access Journals (Sweden)
Stefanović Milena
2013-01-01
Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of the new representative sample on the basis of the variability of chemical content of the initial sample on the example of a whitebark pine population. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivates of the initial sample of 10 elements (trees. It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents a basic set with a probability higher than 95 %. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Projekat Ministarstva nauke Republike Srbije, br. OI-173011, br. TR-37002 i br. III-43007
Power and sample size calculations for Mendelian randomization studies using one genetic instrument.
Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary
2013-08-01
Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
Sample size re-estimation in paired comparative diagnostic accuracy studies with a binary response.
McCray, Gareth P J; Titman, Andrew C; Ghaneh, Paula; Lancaster, Gillian A
2017-07-14
The sample size required to power a study to a nominal level in a paired comparative diagnostic accuracy study, i.e. studies in which the diagnostic accuracy of two testing procedures is compared relative to a gold standard, depends on the conditional dependence between the two tests - the lower the dependence the greater the sample size required. A priori, we usually do not know the dependence between the two tests and thus cannot determine the exact sample size required. One option is to use the implied sample size for the maximal negative dependence, giving the largest possible sample size. However, this is potentially wasteful of resources and unnecessarily burdensome on study participants as the study is likely to be overpowered. A more accurate estimate of the sample size can be determined at a planned interim analysis point where the sample size is re-estimated. This paper discusses a sample size estimation and re-estimation method based on the maximum likelihood estimates, under an implied multinomial model, of the observed values of conditional dependence between the two tests and, if required, prevalence, at a planned interim. The method is illustrated by comparing the accuracy of two procedures for the detection of pancreatic cancer, one procedure using the standard battery of tests, and the other using the standard battery with the addition of a PET/CT scan all relative to the gold standard of a cell biopsy. Simulation of the proposed method illustrates its robustness under various conditions. The results show that the type I error rate of the overall experiment is stable using our suggested method and that the type II error rate is close to or above nominal. Furthermore, the instances in which the type II error rate is above nominal are in the situations where the lowest sample size is required, meaning a lower impact on the actual number of participants recruited. We recommend multinomial model maximum likelihood estimation of the conditional
Scott, Neil W; Fayers, Peter M; Aaronson, Neil K; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Gundy, Chad; Koller, Michael; Petersen, Morten A; Sprangers, Mirjam A G
2009-03-01
Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors including scale length affect analysis of DIF by ordinal logistic regression. Simulated data, representative of HRQoL scales with four-category items, were generated. The power and type I error rates of the DIF method were then investigated when, respectively, DIF was deliberately introduced and when no DIF was added. The sample size, scale length, floor effects (FEs) and significance level were varied. When there was no DIF, type I error rates were close to 5%. Detecting moderate uniform DIF in a two-item scale required a sample size of 300 per group for adequate (>80%) power. For longer scales, a sample size of 200 was adequate. Considerably larger sample sizes were required to detect nonuniform DIF, when there were extreme FEs or when a reduced type I error rate was required. The impact of the number of items in the scale was relatively small. Ordinal logistic regression successfully detects DIF for HRQoL instruments with short scales. Sample size guidelines are provided.
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Directory of Open Access Journals (Sweden)
Malhotra Rajeev
2010-01-01
Full Text Available Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily either at their convenience, or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and 95% confidence level using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram could be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with required precision and 95% confidence level. Sample size at 90% and 99% confidence level, respectively, can also be obtained by just multiplying 0.70 and 1.75 with the number obtained for the 95% confidence level. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. This can also be applied for reverse calculations. This nomogram is not applicable for testing of the hypothesis set-up and is applicable only when both diagnostic test and gold standard results have a dichotomous category.
Sample size for estimating average trunk diameter and plant height in eucalyptus hybrids
Directory of Open Access Journals (Sweden)
Alberto Cargnelutti Filho
2016-01-01
Full Text Available ABSTRACT: In eucalyptus crops, it is important to determine the number of plants that need to be evaluated for a reliable inference of growth. The aim of this study was to determine the sample size needed to estimate average trunk diameter at breast height and plant height of inter-specific eucalyptus hybrids. In 6,694 plants of twelve inter-specific hybrids it was evaluated trunk diameter at breast height at three (DBH3 and seven years (DBH7 and tree height at seven years (H7 of age. The statistics: minimum, maximum, mean, variance, standard deviation, standard error, and coefficient of variation were calculated. The hypothesis of variance homogeneity was tested. The sample size was determined by re sampling with replacement of 10,000 re samples. There was an increase in the sample size from DBH3 to H7 and DBH7. A sample size of 16, 59 and 31 plants is adequate to estimate DBH3, DBH7 and H7 means, respectively, of inter-specific hybrids of eucalyptus, with amplitude of confidence interval of 95% equal to 20% of the estimated mean.
Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride
2015-08-01
Aluminum Nitride by GA Gazonas and JW McCauley Weapons and Materials Research Directorate, ARL JJ Guo, KM Reddy, A Hirata, T Fujita, and MW Chen...Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT...their microscopic structure. In this study, we report a size induced brittle-to-ductile transition in single-crystal aluminum nitride (AlN). When the
Two-Stage Adaptive Optimal Design with Fixed First-Stage Sample Size
Directory of Open Access Journals (Sweden)
Adam Lane
2012-01-01
Full Text Available In adaptive optimal procedures, the design at each stage is an estimate of the optimal design based on all previous data. Asymptotics for regular models with fixed number of stages are straightforward if one assumes the sample size of each stage goes to infinity with the overall sample size. However, it is not uncommon for a small pilot study of fixed size to be followed by a much larger experiment. We study the large sample behavior of such studies. For simplicity, we assume a nonlinear regression model with normal errors. We show that the distribution of the maximum likelihood estimates converges to a scale mixture family of normal random variables. Then, for a one parameter exponential mean function we derive the asymptotic distribution of the maximum likelihood estimate explicitly and present a simulation to compare the characteristics of this asymptotic distribution with some commonly used alternatives.
[On the impact of sample size calculation and power in clinical research].
Held, Ulrike
2014-10-01
The aim of a clinical trial is to judge the efficacy of a new therapy or drug. In the planning phase of the study, the calculation of the necessary sample size is crucial in order to obtain a meaningful result. The study design, the expected treatment effect in outcome and its variability, power and level of significance are factors which determine the sample size. It is often difficult to fix these parameters prior to the start of the study, but related papers from the literature can be helpful sources for the unknown quantities. For scientific as well as ethical reasons it is necessary to calculate the sample size in advance in order to be able to answer the study question.
Species-genetic diversity correlations in habitat fragmentation can be biased by small sample sizes.
Nazareno, Alison G; Jump, Alistair S
2012-06-01
Predicted parallel impacts of habitat fragmentation on genes and species lie at the core of conservation biology, yet tests of this rule are rare. In a recent article in Ecology Letters, Struebig et al. (2011) report that declining genetic diversity accompanies declining species diversity in tropical forest fragments. However, this study estimates diversity in many populations through extrapolation from very small sample sizes. Using the data of this recent work, we show that results estimated from the smallest sample sizes drive the species-genetic diversity correlation (SGDC), owing to a false-positive association between habitat fragmentation and loss of genetic diversity. Small sample sizes are a persistent problem in habitat fragmentation studies, the results of which often do not fit simple theoretical models. It is essential, therefore, that data assessing the proposed SGDC are sufficient in order that conclusions be robust.
Information-based sample size re-estimation in group sequential design for longitudinal trials.
Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven
2014-09-28
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation. Copyright © 2014 John Wiley & Sons, Ltd.
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analyses is not sufficient for calculating the required sample size as this accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various different numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 until 3 h between one and 50 GPs. As a result of the formulas to calculate CIs, the increase of the precision continued and was lower with the same additional number of GPs. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
Overestimation of test performance by ROC analysis: Effect of small sample size
International Nuclear Information System (INIS)
Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.
1984-01-01
New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
Mesh-size effects on drift sample composition as determined with a triple net sampler
Slack, K.V.; Tilley, L.J.; Kennelly, S.S.
1991-01-01
Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 ??m, 209 ??m and 106 ??m openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 ??m and 209 ??m nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 ??m and 106 ??m samples and midday 425 ??m samples. Large drifters (Ephemerellidae) occurred only in 425 ??m or 209 ??m nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 ??m net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. ?? 1991 Kluwer Academic Publishers.
Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.
Hanel, Paul H P; Haase, Jennifer
2017-01-01
In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered as indicators of the importance of an article and post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample and partly effect sizes predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.
Richter, Veronika; Muche, Rainer; Mayer, Benjamin
2018-01-26
Statistical sample size calculation is a crucial part of planning nonhuman animal experiments in basic medical research. The 3R principle intends to reduce the number of animals to a sufficient minimum. When planning experiments, one may consider the impact of less rigorous assumptions during sample size determination as it might result in a considerable reduction in the number of required animals. Sample size calculations conducted for 111 biometrical reports were repeated. The original effect size assumptions remained unchanged, but the basic properties (type 1 error 5%, two-sided hypothesis, 80% power) were varied. The analyses showed that a less rigorous assumption on the type 1 error level (one-sided 5% instead of two-sided 5%) was associated with a savings potential of 14% regarding the original number of required animals. Animal experiments are predominantly exploratory studies. In light of the demonstrated potential reduction in the numbers of required animals, researchers should discuss whether less rigorous assumptions during the process of sample size calculation may be reasonable for the purpose of optimizing the number of animals in experiments according to the 3R principle.
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates as obtained by a recursive stochastic algorithm of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Bayesian sample size determination for cost-effectiveness studies with censored data.
Directory of Open Access Journals (Sweden)
Daniel P Beavers
Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.
Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size.
Directory of Open Access Journals (Sweden)
Nathan J Stevenson
Full Text Available Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCT of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7-40.0 across a range of AED protocols, Td and trial AED efficacy (p<0.001. RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9-11.9; p<0.001. Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7-2.9; p<0.001. Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4-3.0 compared to trials in normothermic neonates (p<0.001. These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size.
A simulation study of sample size for multilevel logistic regression models
Directory of Open Access Journals (Sweden)
Moineddin Rahim
2007-07-01
Full Text Available Abstract Background Many studies conducted in health and social sciences collect individual level data as outcome measures. Usually, such data have a hierarchical structure, with patients clustered within physicians, and physicians clustered within practices. Large survey data, including national surveys, have a hierarchical or clustered structure; respondents are naturally clustered in geographical units (e.g., health regions and may be grouped into smaller units. Outcomes of interest in many fields not only reflect continuous measures, but also binary outcomes such as depression, presence or absence of a disease, and self-reported general health. In the framework of multilevel studies an important problem is calculating an adequate sample size that generates unbiased and accurate estimates. Methods In this paper simulation studies are used to assess the effect of varying sample size at both the individual and group level on the accuracy of the estimates of the parameters and variance components of multilevel logistic regression models. In addition, the influence of prevalence of the outcome and the intra-class correlation coefficient (ICC is examined. Results The results show that the estimates of the fixed effect parameters are unbiased for 100 groups with group size of 50 or higher. The estimates of the variance covariance components are slightly biased even with 100 groups and group size of 50. The biases for both fixed and random effects are severe for group size of 5. The standard errors for fixed effect parameters are unbiased while for variance covariance components are underestimated. Results suggest that low prevalent events require larger sample sizes with at least a minimum of 100 groups and 50 individuals per group. Conclusion We recommend using a minimum group size of 50 with at least 50 groups to produce valid estimates for multi-level logistic regression models. Group size should be adjusted under conditions where the prevalence
Beerli, Peter
2004-04-01
Current estimators of gene flow come in two methods; those that estimate parameters assuming that the populations investigated are a small random sample of a large number of populations and those that assume that all populations were sampled. Maximum likelihood or Bayesian approaches that estimate the migration rates and population sizes directly using coalescent theory can easily accommodate datasets that contain a population that has no data, a so-called 'ghost' population. This manipulation allows us to explore the effects of missing populations on the estimation of population sizes and migration rates between two specific populations. The biases of the inferred population parameters depend on the magnitude of the migration rate from the unknown populations. The effects on the population sizes are larger than the effects on the migration rates. The more immigrants from the unknown populations that are arriving in the sample populations the larger the estimated population sizes. Taking into account a ghost population improves or at least does not harm the estimation of population sizes. Estimates of the scaled migration rate M (migration rate per generation divided by the mutation rate per generation) are fairly robust as long as migration rates from the unknown populations are not huge. The inclusion of a ghost population does not improve the estimation of the migration rate M; when the migration rates are estimated as the number of immigrants Nm then a ghost population improves the estimates because of its effect on population size estimation. It seems that for 'real world' analyses one should carefully choose which populations to sample, but there is no need to sample every population in the neighbourhood of a population of interest.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
International Nuclear Information System (INIS)
Reer, B.
2004-01-01
The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
Study of Radon and Thoron exhalation from soil samples of different grain sizes.
Chitra, N; Danalakshmi, B; Supriya, D; Vijayalakshmi, I; Sundar, S Bala; Sivasubramanian, K; Baskaran, R; Jose, M T
2018-03-01
The exhalation of radon ( 222 Rn) and thoron ( 220 Rn) from a porous matrix depends on the emanation of them from the grains by the recoil effect. The emanation factor is a quantitative estimate of the emanation phenomenon. The present study is to investigate the effect of grain size of the soil matrix on the emanation factor. Soil samples from three different locations were fractionated into different grain size categories ranging from <0.1 to 2mm. The emanation factors of each of the grain size range were estimated by measuring the mass exhalation rates of radon and thoron and the activity concentrations of 226 Ra and 232 Th. The emanation factor was found to increase with decrease in grain size. This effect was made evident by keeping the parent radium concentration constant for all grain size fractions. The governing factor is the specific surface area of the soil samples which increases with decrease in grain size. Copyright © 2017 Elsevier Ltd. All rights reserved.
Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill
2017-01-01
Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive power function and discuss sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose power up to 20%, depending on the value of the dispersion parameter.
Sufficient Sample Sizes for Discrete-Time Survival Analysis Mixture Models
Moerbeek, Mirjam
2014-01-01
Long-term survivors in trials with survival endpoints are subjects who will not experience the event of interest. Membership in the class of long-term survivors is unobserved and should be inferred from the data by means of a mixture model. An important question is how large the sample size should
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling
Czech Academy of Sciences Publication Activity Database
Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír
2015-01-01
Roč. 49, č. 4 (2015), s. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords : filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.953, year: 2015
B-graph sampling to estimate the size of a hidden population
Spreen, M.; Bogaerts, S.
2015-01-01
Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is
Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions
Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.
2013-01-01
Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of
Bolton tooth size ratio among Sudanese Population sample: A preliminary study.
Abdalla Hashim, Ala'a Hayder; Eldin, Al-Hadi Mohi; Hashim, Hayder Abdalla
2015-01-01
The study of the mesiodistal size, the morphology of teeth and dental arch may play an important role in clinical dentistry, as well as other sciences such as Forensic Dentistry and Anthropology. The aims of the present study were to establish tooth-size ratio in Sudanese sample with Class I normal occlusion, to compare the tooth-size ratio between the present study and Bolton's study and between genders. The sample consisted of dental casts of 60 subjects (30 males and 30 females). Bolton formula was used to compute the overall and anterior ratio. The correlation coefficient between the anterior ratio and overall ratio was tested, and Student's t-test was used to compare tooth-size ratios between males and females, and between the present study and Bolton's result. The results of the overall and anterior ratio was relatively similar to the mean values reported by Bolton, and there were no statistically significant differences between the mean values of the anterior ratio and the overall ratio between males and females. The correlation coefficient was (r = 0.79). The result obtained was similar to the Caucasian race. However, the reality indicates that the Sudanese population consisted of different racial groups; therefore, the firm conclusion is difficult to draw. Since this sample is not representative for the Sudanese population, hence, a further study with a large sample collected from the different parts of the Sudan is required.
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
International Nuclear Information System (INIS)
Sampson, T.E.
1991-01-01
Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-adsorption in particulates or lumps of special nuclear material in the sample. another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, as enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts
Jarvis, Nicholas; Larsbo, Mats; Koestel, John; Keck, Hannes
2017-04-01
The long-range connectivity of macropore networks may exert a strong control on near-saturated and saturated hydraulic conductivity and the occurrence of preferential flow through soil. It has been suggested that percolation concepts may provide a suitable theoretical framework to characterize and quantify macropore connectivity, although this idea has not yet been thoroughly investigated. We tested the applicability of percolation concepts to describe macropore networks quantified by X-ray scanning at a resolution of 0.24 mm in eighteen cylinders (20 cm diameter and height) sampled from the ploughed layer of four soils of contrasting texture in east-central Sweden. The analyses were performed for sample sizes ("regions of interest", ROI) varying between 3 and 12 cm in cube side-length and for minimum pore thicknesses ranging between image resolution and 1 mm. Finite sample size effects were clearly found for ROI's of cube side-length smaller than ca. 6 cm. For larger sample sizes, the results showed the relevance of percolation concepts to soil macropore networks, with a close relationship found between imaged porosity and the fraction of the pore space which percolated (i.e. was connected from top to bottom of the ROI). The percolating fraction increased rapidly as a function of porosity above a small percolation threshold (1-4%). This reflects the ordered nature of the pore networks. The percolation relationships were similar for all four soils. Although pores larger than 1 mm appeared to be somewhat better connected, only small effects of minimum pore thickness were noted across the range of tested pore sizes. The utility of percolation concepts to describe the connectivity of more anisotropic macropore networks (e.g. in subsoil horizons) should also be tested, although with current X-ray scanning equipment it may prove difficult in many cases to analyze sufficiently large samples that would avoid finite size effects.
Model choice and sample size in item response theory analysis of aphasia tests.
Hula, William D; Fergadiotis, Gerasimos; Martin, Nadine
2012-05-01
The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from individuals with aphasia were analyzed, and the resulting item and person estimates were used to develop simulated test data for 3 sample size conditions. The simulated data were analyzed using a standard 1-parameter logistic (1-PL) model and 3 models that accounted for the influence of guessing: augmented 1-PL and 2-PL models and a 3-PL model. The model estimates obtained from the simulated data were compared to their known true values. With small and medium sample sizes, an augmented 1-PL model was the most accurate at recovering the known item and person parameters; however, no model performed well at any sample size. Follow-up simulations confirmed that the large influence of guessing and the extreme easiness of the items contributed substantially to the poor estimation of item difficulty and person ability. Incorporating the assumption of guessing into IRT models improves parameter estimation accuracy, even for small samples. However, caution should be exercised in interpreting scores obtained from easy 2-choice tests, regardless of whether IRT modeling or percentage correct scoring is used.
Size selective isocyanate aerosols personal air sampling using porous plastic foams
Khanh Huynh, Cong; Duc, Trinh Vu
2009-02-01
As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) have established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the collection of simultaneous three dust fractions: inhalable, thoracic and respirable. These samplers based on existing sampling heads (IOM, GSP and cassettes) use Polyurethane Plastic Foam (PUF) according to their porosity to support sampling and separator size of the particles. In this study, the authors present an original application of size selective personal air sampling using chemical impregnated PUF to perform isocyanate aerosols capturing and derivatizing in industrial spray-painting shops.
10Be measurements at MALT using reduced-size samples of bulk sediments
Horiuchi, Kazuho; Oniyanagi, Itsumi; Wasada, Hiroshi; Matsuzaki, Hiroyuki
2013-01-01
In order to establish 10Be measurements on reduced-size (1-10 mg) samples of bulk sediments, we investigated four different pretreatment designs using lacustrine and marginal-sea sediments and the AMS system of the Micro Analysis Laboratory, Tandem accelerator (MALT) at The University of Tokyo. The 10Be concentrations obtained from the samples of 1-10 mg agreed within a precision of 3-5% with the values previously determined using corresponding ordinary-size (∼200 mg) samples and the same AMS system. This fact demonstrates reliable determinations of 10Be with milligram levels of recent bulk sediments at MALT. On the other hand, a clear decline of the BeO- beam with tens of micrograms of 9Be carrier suggests that the combination of ten milligrams of sediments and a few hundred micrograms of the 9Be carrier is more convenient at this stage.
{sup 10}Be measurements at MALT using reduced-size samples of bulk sediments
Energy Technology Data Exchange (ETDEWEB)
Horiuchi, Kazuho, E-mail: kh@cc.hirosaki-u.ac.jp [Graduate School of Science and Technology, Hirosaki University, 3, Bunkyo-chou, Hirosaki, Aomori 036-8561 (Japan); Oniyanagi, Itsumi [Graduate School of Science and Technology, Hirosaki University, 3, Bunkyo-chou, Hirosaki, Aomori 036-8561 (Japan); Wasada, Hiroshi [Institute of Geology and Paleontology, Graduate school of Science, Tohoku University, 6-3, Aramaki Aza-Aoba, Aoba-ku, Sendai 980-8578 (Japan); Matsuzaki, Hiroyuki [MALT, School of Engineering, University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-0032 (Japan)
2013-01-15
In order to establish {sup 10}Be measurements on reduced-size (1-10 mg) samples of bulk sediments, we investigated four different pretreatment designs using lacustrine and marginal-sea sediments and the AMS system of the Micro Analysis Laboratory, Tandem accelerator (MALT) at University of Tokyo. The {sup 10}Be concentrations obtained from the samples of 1-10 mg agreed within a precision of 3-5% with the values previously determined using corresponding ordinary-size ({approx}200 mg) samples and the same AMS system. This fact demonstrates reliable determinations of {sup 10}Be with milligram levels of recent bulk sediments at MALT. On the other hand, a clear decline of the BeO{sup -} beam with tens of micrograms of {sup 9}Be carrier suggests that the combination of ten milligrams of sediments and a few hundred micrograms of the {sup 9}Be carrier is more convenient at this stage.
Size selective isocyanate aerosols personal air sampling using porous plastic foams
International Nuclear Information System (INIS)
Cong Khanh Huynh; Trinh Vu Duc
2009-01-01
As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) have established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the collection of simultaneous three dust fractions: inhalable, thoracic and respirable. These samplers based on existing sampling heads (IOM, GSP and cassettes) use Polyurethane Plastic Foam (PUF) according to their porosity to support sampling and separator size of the particles. In this study, the authors present an original application of size selective personal air sampling using chemical impregnated PUF to perform isocyanate aerosols capturing and derivatizing in industrial spray-painting shops.
A COMPUTATIONAL TOOL TO EVALUATE THE SAMPLE SIZE IN MAP POSITIONAL ACCURACY
Directory of Open Access Journals (Sweden)
Marcelo Antonio Nero
Full Text Available Abstract: In many countries, the positional accuracy control by points in Cartography or Spatial data corresponds to the comparison between sets of coordinates of well-defined points in relation to the same set of points from a more accurate source. Usually, each country determines a maximum number of points which could present error values above a pre-established threshold. In many cases, the standards define the sample size as 20 points, with no more consideration, and fix this threshold in 10% of the sample. However, the sampling dimension (n, considering the statistical risk, especially when the percentages of outliers are around 10%, can lead to a producer risk (to reject a good map and a user risk (to accept a bad map. This article analyzes this issue and allows defining the sampling dimension considering the risk of the producer and of the user. As a tool, a program developed by us allows defining the sample size according to the risk that the producer / user can or wants to assume. This analysis uses 600 control points, each of them with a known error. We performed the simulations with a sample size of 20 points (n and calculate the associated risk. Then we changed the value of (n, using smaller and larger sizes, calculating for each situation the associated risk both for the user and for the producer. The computer program developed draws the operational curves or risk curves, which considers three parameters: the number of control points; the number of iterations to create the curves; and the percentage of control points above the threshold, that can be the Brazilian standard or other parameters from different countries. Several graphs and tables are presented which were created with different parameters, leading to a better decision both for the user and for the producer, as well as to open possibilities for other simulations and researches in the future.
PIXE–PIGE analysis of size-segregated aerosol samples from remote areas
Energy Technology Data Exchange (ETDEWEB)
Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)
2014-01-01
The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.
Sample size calculation for microarray experiments with blocked one-way design
Directory of Open Access Journals (Sweden)
Jung Sin-Ho
2009-05-01
Full Text Available Abstract Background One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess the treatment effects in microarray experiments. Results In this paper, we consider discovery of the genes that are differentially expressed among K (> 2 treatments when each set of K arrays consists of a block. In this case, the array data among K treatments tend to be correlated because of block effect. We propose to use the blocked one-way ANOVA F-statistic to test if each gene is differentially expressed among K treatments. The marginal p-values are calculated using a permutation method accounting for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR. We propose a sample size calculation method for microarray experiments with a blocked one-way design. With FDR level and effect sizes of genes specified, our formula provides a sample size for a given number of true discoveries. Conclusion The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Significance, Errors, Power, and Sample Size: The Blocking and Tackling of Statistics.
Mascha, Edward J; Vetter, Thomas R
2018-02-01
Inferential statistics relies heavily on the central limit theorem and the related law of large numbers. According to the central limit theorem, regardless of the distribution of the source population, a sample estimate of that population will have a normal distribution, but only if the sample is large enough. The related law of large numbers holds that the central limit theorem is valid as random samples become large enough, usually defined as an n ≥ 30. In research-related hypothesis testing, the term "statistically significant" is used to describe when an observed difference or association has met a certain threshold. This significance threshold or cut-point is denoted as alpha (α) and is typically set at .05. When the observed P value is less than α, one rejects the null hypothesis (Ho) and accepts the alternative. Clinical significance is even more important than statistical significance, so treatment effect estimates and confidence intervals should be regularly reported. A type I error occurs when the Ho of no difference or no association is rejected, when in fact the Ho is true. A type II error occurs when the Ho is not rejected, when in fact there is a true population effect. Power is the probability of detecting a true difference, effect, or association if it truly exists. Sample size justification and power analysis are key elements of a study design. Ethical concerns arise when studies are poorly planned or underpowered. When calculating sample size for comparing groups, 4 quantities are needed: α, type II error, the difference or effect of interest, and the estimated variability of the outcome variable. Sample size increases for increasing variability and power, and for decreasing α and decreasing difference to detect. Sample size for a given relative reduction in proportions depends heavily on the proportion in the control group itself, and increases as the proportion decreases. Sample size for single-group studies estimating an unknown parameter
Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G
2010-11-01
Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.
Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.
Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang
2018-02-01
To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample size (5-50), inter-eye correlation (0-0.75) and effect size (0-0.8). Simulated data were analyzed using paired t-test, two sample t-test, Wald test and score test using the generalized estimating equations (GEE) and F-test using linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated analysis approaches through analyzing two real datasets. In design 1, paired t-test and LMM perform better than GEE, with nominal type 1 error rate and higher statistical power. In design 2, no test performs uniformly well: two sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates type I error rate and GEE score test has lower power. When sample size is small, some commonly used statistical methods do not perform well. Paired t-test and LMM perform best when two eyes of a subject are in two different comparison groups, and t-test using the average of two eyes performs best when the two eyes are in the same comparison group. When selecting the appropriate analysis approach the study design should be considered.
Crystallite size variation of TiO2 samples depending time heat treatment
International Nuclear Information System (INIS)
Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.
2016-01-01
Titanium dioxide (TiO 2 ) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work was carried out the study of the residence time influence at a given temperature in the TiO 2 powder physical properties. After the powder synthesis, the samples were divided and heat treated at 650 °C with a ramp up to 3 °C/min and a residence time ranging from 0 to 20 hours and subsequently characterized by x-ray diffraction. Analyzing the obtained diffraction patterns, it was observed that, from 5-hour residence time, began the two-distinct phase coexistence: anatase and rutile. It also calculated the average crystallite size of each sample. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
A contemporary decennial global Landsat sample of changing agricultural field sizes
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by
Evaluating the performance of species richness estimators: sensitivity to sample grain size
DEFF Research Database (Denmark)
Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara
2006-01-01
and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...
Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence
International Nuclear Information System (INIS)
Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.
2013-01-01
Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)
Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence
Energy Technology Data Exchange (ETDEWEB)
Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2013-07-01
Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Schlesinger, P.
2007-01-01
Roč. 52, č. 1 (2007), s. 423-437 ISSN 0167-9473 R&D Projects: GA AV ČR 1ET400300415; GA MŠk LC536 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear discriminant analysis * numerical aspects of FLDA * small sample size problem * dimension reduction * sparsity Subject RIV: BA - General Mathematics Impact factor: 1.029, year: 2007
Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.
2012-01-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the populatio...
Directory of Open Access Journals (Sweden)
Shengyu eJiang
2016-02-01
Full Text Available Likert types of rating scales in which a respondent chooses a response from an ordered set of response options are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM. A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexiMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root- mean-square-error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items when 1,000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
Sample size and power determination when limited preliminary information is available
Directory of Open Access Journals (Sweden)
Christine E. McLaren
2017-04-01
Full Text Available Abstract Background We describe a novel strategy for power and sample size determination developed for studies utilizing investigational technologies with limited available preliminary data, specifically of imaging biomarkers. We evaluated diffuse optical spectroscopic imaging (DOSI, an experimental noninvasive imaging technique that may be capable of assessing changes in mammographic density. Because there is significant evidence that tamoxifen treatment is more effective at reducing breast cancer risk when accompanied by a reduction of breast density, we designed a study to assess the changes from baseline in DOSI imaging biomarkers that may reflect fluctuations in breast density in premenopausal women receiving tamoxifen. Method While preliminary data demonstrate that DOSI is sensitive to mammographic density in women about to receive neoadjuvant chemotherapy for breast cancer, there is no information on DOSI in tamoxifen treatment. Since the relationship between magnetic resonance imaging (MRI and DOSI has been established in previous studies, we developed a statistical simulation approach utilizing information from an investigation of MRI assessment of breast density in 16 women before and after treatment with tamoxifen to estimate the changes in DOSI biomarkers due to tamoxifen. Results Three sets of 10,000 pairs of MRI breast density data with correlation coefficients of 0.5, 0.8 and 0.9 were simulated and generated and were used to simulate and generate a corresponding 5,000,000 pairs of DOSI values representing water, ctHHB, and lipid. Minimum sample sizes needed per group for specified clinically-relevant effect sizes were obtained. Conclusion The simulation techniques we describe can be applied in studies of other experimental technologies to obtain the important preliminary data to inform the power and sample size calculations.
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitation. We used a pooled method in nonparametric bootstrap test that may overcome the problem related with small samples in hypothesis testing. The present study compared nonparametric bootstrap test with pooled resampling method corresponding to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. Nonparametric bootstrap paired t-test also provided better performance than other alternatives. Nonparametric bootstrap test provided benefit over exact Kruskal-Wallis test. We suggest using nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating the one way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
The Effect of Sterilization on Size and Shape of Fat Globules in Model Processed Cheese Samples
Directory of Open Access Journals (Sweden)
B. Tremlová
2006-01-01
Full Text Available Model cheese samples from 4 independent productions were heat sterilized (117 °C, 20 minutes after the melting process and packing with an aim to prolong their durability. The objective of the study was to assess changes in the size and shape of fat globules due to heat sterilization by using image analysis methods. The study included a selection of suitable methods of preparation mounts, taking microphotographs and making overlays for automatic processing of photographs by image analyser, ascertaining parameters to determine the size and shape of fat globules and statistical analysis of results obtained. The results of the experiment suggest that changes in shape of fat globules due to heat sterilization are not unequivocal. We found that the size of fat globules was significantly increased (p < 0.01 due to heat sterilization (117 °C, 20 min, and the shares of small fat globules (up to 500 μm2, or 100 μm2 in the samples of heat sterilized processed cheese were decreased. The results imply that the image analysis method is very useful when assessing the effect of technological process on the quality of processed cheese quality.
Zeestraten, Eva; Lambert, Christian; Chis Ster, Irina; Williams, Owen A; Lawrence, Andrew J; Patel, Bhavini; MacKinnon, Andrew D; Barrick, Thomas R; Markus, Hugh S
2016-01-01
Detecting treatment efficacy using cognitive change in trials of cerebral small vessel disease (SVD) has been challenging, making the use of surrogate markers such as magnetic resonance imaging (MRI) attractive. We determined the sensitivity of MRI to change in SVD and used this information to calculate sample size estimates for a clinical trial. Data from the prospective SCANS (St George’s Cognition and Neuroimaging in Stroke) study of patients with symptomatic lacunar stroke and confluent leukoaraiosis was used (n = 121). Ninety-nine subjects returned at one or more time points. Multimodal MRI and neuropsychologic testing was performed annually over 3 years. We evaluated the change in brain volume, T2 white matter hyperintensity (WMH) volume, lacunes, and white matter damage on diffusion tensor imaging (DTI). Over 3 years, change was detectable in all MRI markers but not in cognitive measures. WMH volume and DTI parameters were most sensitive to change and therefore had the smallest sample size estimates. MRI markers, particularly WMH volume and DTI parameters, are more sensitive to SVD progression over short time periods than cognition. These markers could significantly reduce the size of trials to screen treatments for efficacy in SVD, although further validation from longitudinal and intervention studies is required. PMID:26036939
Methodology for sample preparation and size measurement of commercial ZnO nanoparticles
Directory of Open Access Journals (Sweden)
Pei-Jia Lu
2018-04-01
Full Text Available This study discusses the strategies on sample preparation to acquire images with sufficient quality for size characterization by scanning electron microscope (SEM using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer sized aggregates of ZnO in powdered forms need to firstly be broken down to nanosized particles through an appropriate process to generate nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information and save lots of time and efforts in selection of suitable substrate for particles of different properties to be attracted and kept on the surface without further aggregation. This simple, low-cost methodology can be generally applied on size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology
International Nuclear Information System (INIS)
Sampson, T.E.
1991-01-01
Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This report presents calculations of the bias resulting from not using this new formalism. The calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts. This paper describes this attenuation-correction-factor formalism in more detail and illustrates the magnitude of the biases that may arise if it is not used. 5 refs., 7 figs
Hemery, Lenaïg G; Politano, Kristin K; Henkel, Sarah K
2017-08-01
With increasing cascading effects of climate change on the marine environment, as well as pollution and anthropogenic utilization of the seafloor, there is increasing interest in tracking changes to benthic communities. Macrofaunal surveys are traditionally conducted as part of pre-incident environmental assessment studies and post-incident monitoring studies when there is a potential impact to the seafloor. These surveys usually characterize the structure and/or spatiotemporal distribution of macrofaunal assemblages collected with sediment cores; however, many different sampling protocols have been used. An assessment of the comparability of past and current survey methods was in need to facilitate future surveys and comparisons. This was the aim of the present study, conducted off the Oregon coast in waters 25-35 m deep. Our results show that the use of a sieve with a 1.0-mm mesh size gives results for community structure comparable to results obtained from a 0.5-mm mesh size, which allows reliable comparisons of recent and past spatiotemporal surveys of macroinfauna. In addition to our primary objective of comparing methods, we also found interacting effects of seasons and depths of collection. Seasonal differences (summer and fall) were seen in infaunal assemblages in the wave-induced sediment motion zone but not deeper. Thus, studies where wave-induced sediment motion can structure the benthic communities, especially during the winter months, should consider this effect when making temporal comparisons. In addition, some macrofauna taxa-like polychaetes and amphipods show high interannual variabilities, so spatiotemporal studies should make sure to cover several years before drawing any conclusions.
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F. T.; Hinde, J.; Grossman, J.N.
2005-01-01
smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. ?? 2005 Elsevier Ltd. All rights reserved.
Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions
International Nuclear Information System (INIS)
John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G-; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.
2000-01-01
Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will be exceeded most likely at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection x-ray fluorescence analysis (TXRF) was applied beside other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built where the particle size classes 10-2.5 μm and 2.5-1.0 μm were directly impacted on TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles, but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM 10/PM 2.5 cascade impactor was
Sample size effect on the determination of the irreversibility line of high-Tc superconductors
International Nuclear Information System (INIS)
Li, Q.; Suenaga, M.; Li, Q.; Freltoft, T.
1994-01-01
The irreversibility lines of a high-J c superconducting Bi 2 Sr 2 Ca 2 Cu 3 O x /Ag tape were systematically measured upon a sequence of subdivisions of the sample. The irreversibility field H r (T) (parallel to the c axis) was found to change approximately as L 0.13 , where L is the effective dimension of the superconducting tape. Furthermore, it was found that the irreversibility line for a grain-aligned Bi 2 Sr 2 Ca 2 Cu 3 O x specimen can be approximately reproduced by the extrapolation of this relation down to a grain size of a few tens of micrometers. The observed size effect could significantly obscure the real physical meaning of the irreversibility lines. In addition, this finding surprisingly indicated that the Bi 2 Sr 2 Ca 2 Cu 2 O x /Ag tape and grain-aligned specimen may have similar flux line pinning strength
Influence of secular trends and sample size on reference equations for lung function tests.
Quanjer, P H; Stocks, J; Cole, T J; Hall, G L; Stanojevic, S
2011-03-01
The aim of our study was to determine the contribution of secular trends and sample size to lung function reference equations, and establish the number of local subjects required to validate published reference values. 30 spirometry datasets collected between 1978 and 2009 provided data on healthy, white subjects: 19,291 males and 23,741 females aged 2.5-95 yrs. The best fit for forced expiratory volume in 1 s (FEV(1)), forced vital capacity (FVC) and FEV(1)/FVC as functions of age, height and sex were derived from the entire dataset using GAMLSS. Mean z-scores were calculated for individual datasets to determine inter-centre differences. This was repeated by subdividing one large dataset (3,683 males and 4,759 females) into 36 smaller subsets (comprising 18-227 individuals) to preclude differences due to population/technique. No secular trends were observed and differences between datasets comprising >1,000 subjects were small (maximum difference in FEV(1) and FVC from overall mean: 0.30- -0.22 z-scores). Subdividing one large dataset into smaller subsets reproduced the above sample size-related differences and revealed that at least 150 males and 150 females would be necessary to validate reference values to avoid spurious differences due to sampling error. Use of local controls to validate reference equations will rarely be practical due to the numbers required. Reference equations derived from large or collated datasets are recommended.
Walton, Emily; Casey, Christy; Mitsch, Jurgen; Vázquez-Diosdado, Jorge A.; Yan, Juan; Dottorini, Tania; Ellis, Keith A.; Winterlich, Anthony
2018-01-01
Automated behavioural classification and identification through sensors has the potential to improve health and welfare of the animals. Position of a sensor, sampling frequency and window size of segmented signal data has a major impact on classification accuracy in activity recognition and energy needs for the sensor, yet, there are no studies in precision livestock farming that have evaluated the effect of all these factors simultaneously. The aim of this study was to evaluate the effects of position (ear and collar), sampling frequency (8, 16 and 32 Hz) of a triaxial accelerometer and gyroscope sensor and window size (3, 5 and 7 s) on the classification of important behaviours in sheep such as lying, standing and walking. Behaviours were classified using a random forest approach with 44 feature characteristics. The best performance for walking, standing and lying classification in sheep (accuracy 95%, F-score 91%–97%) was obtained using combination of 32 Hz, 7 s and 32 Hz, 5 s for both ear and collar sensors, although, results obtained with 16 Hz and 7 s window were comparable with accuracy of 91%–93% and F-score 88%–95%. Energy efficiency was best at a 7 s window. This suggests that sampling at 16 Hz with 7 s window will offer benefits in a real-time behavioural monitoring system for sheep due to reduced energy needs. PMID:29515862
Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size
Directory of Open Access Journals (Sweden)
Zhihua Wang
2014-01-01
Full Text Available Reasonable prediction makes significant practical sense to stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR prediction approach with rolling mechanism is proposed. In the modeling procedure, a new developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window, for the next step ahead forecasting, rolls on by adding the most recent derived prediction result while deleting the first value of the former used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
Sample-size calculations for multi-group comparison in population pharmacokinetic experiments.
Ogungbenro, Kayode; Aarons, Leon
2010-01-01
This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparison detecting the difference in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models that involve only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one compartment intravenous bolus dose model that involved three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and nominal power. Copyright © 2009 John Wiley & Sons, Ltd.
Sample Size Estimation for Detection of Splicing Events in Transcriptome Sequencing Data.
Kaisers, Wolfgang; Schwender, Holger; Schaal, Heiner
2017-09-05
Merging data from multiple samples is required to detect low expressed transcripts or splicing events that might be present only in a subset of samples. However, the exact number of required replicates enabling the detection of such rare events often remains a mystery but can be approached through probability theory. Here, we describe a probabilistic model, relating the number of observed events in a batch of samples with observation probabilities. Therein, samples appear as a heterogeneous collection of events, which are observed with some probability. The model is evaluated in a batch of 54 transcriptomes of human dermal fibroblast samples. The majority of putative splice-sites (alignment gap-sites) are detected in (almost) all samples or only sporadically, resulting in an U-shaped pattern for observation probabilities. The probabilistic model systematically underestimates event numbers due to a bias resulting from finite sampling. However, using an additional assumption, the probabilistic model can predict observed event numbers within a events (mean 7122 in alignments from TopHat alignments and 86,215 in alignments from STAR). We conclude that the probabilistic model provides an adequate description for observation of gap-sites in transcriptome data. Thus, the calculation of required sample sizes can be done by application of a simple binomial model to sporadically observed random events. Due to the large number of uniquely observed putative splice-sites and the known stochastic noise in the splicing machinery, it appears advisable to include observation of rare splicing events into analysis objectives. Therefore, it is beneficial to take scores for the validation of gap-sites into account.
Sample size for estimating average trunk diameter and plant height in eucalyptus hybrids
Alberto Cargnelutti Filho; Rafael Beltrame; Dilson Antônio Bisognin; Marília Lazarotto; Clovis Roberto Haselein; Darci Alberto Gatto; Gleison Augusto dos Santos
2016-01-01
ABSTRACT: In eucalyptus crops, it is important to determine the number of plants that need to be evaluated for a reliable inference of growth. The aim of this study was to determine the sample size needed to estimate average trunk diameter at breast height and plant height of inter-specific eucalyptus hybrids. In 6,694 plants of twelve inter-specific hybrids it was evaluated trunk diameter at breast height at three (DBH3) and seven years (DBH7) and tree height at seven years (H7) of age. The ...
Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples
Energy Technology Data Exchange (ETDEWEB)
Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)
2010-01-15
In this contribution superconducting specimens of YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic-size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated to the microstructure of the specimens and, in particular, to the superconducting intra- and intergranular critical current properties.
An Updated Survey on Statistical Thresholding and Sample Size of fMRI Studies
Directory of Open Access Journals (Sweden)
Andy W. K. Yeung
2018-01-01
Full Text Available Background: Since the early 2010s, the neuroimaging field has paid more attention to the issue of false positives. Several journals have issued guidelines regarding statistical thresholds. Three papers have reported the statistical analysis of the thresholds used in fMRI literature, but they were published at least 3 years ago and surveyed papers published during 2007–2012. This study revisited this topic to evaluate the changes in this field.Methods: The PubMed database was searched to identify the task-based (not resting-state fMRI papers published in 2017 and record their sample sizes, inferential methods (e.g., voxelwise or clusterwise, theoretical methods (e.g., parametric or non-parametric, significance level, cluster-defining primary threshold (CDT, volume of analysis (whole brain or region of interest and software used.Results: The majority (95.6% of the 388 analyzed articles reported statistics corrected for multiple comparisons. A large proportion (69.6% of the 388 articles reported main results by clusterwise inference. The analyzed articles mostly used software Statistical Parametric Mapping (SPM, Analysis of Functional NeuroImages (AFNI, or FMRIB Software Library (FSL to conduct statistical analysis. There were 70.9%, 37.6%, and 23.1% of SPM, AFNI, and FSL studies, respectively, that used a CDT of p ≤ 0.001. The statistical sample size across the articles ranged between 7 and 1,299 with a median of 33. Sample size did not significantly correlate with the level of statistical threshold.Conclusion: There were still around 53% (142/270 studies using clusterwise inference that chose a more liberal CDT than p = 0.001 (n = 121 or did not report their CDT (n = 21, down from around 61% reported by Woo et al. (2014. For FSL studies, it seemed that the CDT practice had no improvement since the survey by Woo et al. (2014. A few studies chose unconventional CDT such as p = 0.0125 or 0.004. Such practice might create an impression that the
Strategies for informed sample size reduction in adaptive controlled clinical trials
Arandjelović, Ognjen
2017-12-01
Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which is an outstanding research problem, is to be found in the question of how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most of the previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, respectively, using (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with a consistently superior performance exhibited by the pair-wise sample compatibilities-based method.
Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden
Energy Technology Data Exchange (ETDEWEB)
Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)
2008-12-15
The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 {mu}m aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.
Directory of Open Access Journals (Sweden)
Ya Li
2012-12-01
Full Text Available In order to prepare a high capacity packing material for solid-phase extraction with specific recognition ability of trace ractopamine in biological samples, uniformly-sized, molecularly imprinted polymers (MIPs were prepared by a multi-step swelling and polymerization method using methacrylic acid as a functional monomer, ethylene glycol dimethacrylate as a cross-linker, and toluene as a porogen respectively. Scanning electron microscope and specific surface area were employed to identify the characteristics of MIPs. Ultraviolet spectroscopy, Fourier transform infrared spectroscopy, Scatchard analysis and kinetic study were performed to interpret the specific recognition ability and the binding process of MIPs. The results showed that, compared with other reports, MIPs synthetized in this study showed high adsorption capacity besides specific recognition ability. The adsorption capacity of MIPs was 0.063Â mmol/g at 1Â mmol/L ractopamine concentration with the distribution coefficient 1.70. The resulting MIPs could be used as solid-phase extraction materials for separation and enrichment of trace ractopamine in biological samples. Keywords: Ractopamine, Uniformly-sized molecularly imprinted polymers, Solid-phase extraction, Multi-step swelling and polymerization, Separation and enrichment
Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.
2012-01-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M
2012-06-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements-such as the number of clusters, the number of lower-level units, and the intraclass correlation-affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved
Friede, Tim; Kieser, Meinhard
2013-01-01
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows a higher variability than the simple one-sample estimator and in turn the sample size resulting from the related re-estimation procedure. This higher variability can lead to a lower power as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
Dong, H.; Zhang, H.; Zuo, Y.; Gao, P.; Ye, G.
2018-01-01
Mercury intrusion porosimetry (MIP) measurements are widely used to determine pore throat size distribution (PSD) curves of porous materials. The pore throat size of porous materials has been used to estimate their compressive strength and air permeability. However, the effect of sample size on
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Degirmenci, B.; Haktanir, A.; Albayrak, R.; Acar, M.; Sahin, D.A.; Sahin, O.; Yucel, A.; Caliskan, G.
2007-01-01
Aim: To evaluate the effects of sonographic characteristics of thyroid nodules, the diameter of needle used for sampling, and sampling technique on obtaining sufficient cytological material (SCM). Materials and methods: We performed sonography-guided fine-needle biopsy (FNB) in 232 solid thyroid nodules. Size-, echogenicity, vascularity, and localization of all nodules were evaluated by Doppler sonography before the biopsy. Needles of size 20, 22, and 24 G were used for biopsy. The biopsy specimen was acquired using two different methods after localisation. In first method, the needle tip was advanced into the nodule in various positions using a to-and-fro motion whilst in the nodule, along with concurrent aspiration. In the second method, the needle was advanced vigorously using a to-and-fro motion within the nodule whilst being rotated on its axis (capillary-action technique). Results: The mean nodule size was 2.1 ± 1.3 cm (range 0.4-7.2 cm). SCM was acquired from 154 (66.4%) nodules by sonography-guided FNB. In 78 (33.6%) nodules, SCM could not be collected. There was no significant difference between nodules with different echogenicity and vascularity for SCM. Regarding the needle size, the lowest rate of SCM was obtained using 20 G needles (56.6%) and the highest rate of adequate material was obtained using 24 G needles (82.5%; p = 0.001). The SCM rate was 76.9% with the capillary-action technique versus 49.4% with the aspiration technique (p < 0.001). Conclusion: Selecting finer needles (24-25 G) for sonography-guided FNB of thyroid nodules and using the capillary-action technique decreased the rate of inadequate material in cytological examination
Feng, Hao-chuan; Zhang, Wei; Zhu, Yu-liang; Lei, Zhi-yi; Ji, Xiao-mei
2017-06-01
Particle size distributions (PSDs) of bottom sediments in a coastal zone are generally multimodal due to the complexity of the dynamic environment. In this paper, bottom sediments along the deep channel of the Pearl River Estuary (PRE) are used to understand the multimodal PSDs' characteristics and the corresponding depositional environment. The results of curve-fitting analysis indicate that the near-bottom sediments in the deep channel generally have a bimodal distribution with a fine component and a relatively coarse component. The particle size distribution of bimodal sediment samples can be expressed as the sum of two lognormal functions and the parameters for each component can be determined. At each station of the PRE, the fine component makes up less volume of the sediments and is relatively poorly sorted. The relatively coarse component, which is the major component of the sediments, is even more poorly sorted. The interrelations between the dynamics and particle size of the bottom sediment in the deep channel of the PRE have also been investigated by the field measurement and simulated data. The critical shear velocity and the shear velocity are calculated to study the stability of the deep channel. The results indicate that the critical shear velocity has a similar distribution over large part of the deep channel due to the similar particle size distribution of sediments. Based on a comparison between the critical shear velocities derived from sedimentary parameters and the shear velocities obtained by tidal currents, it is likely that the depositional area is mainly distributed in the northern part of the channel, while the southern part of the deep channel has to face higher erosion risk.
Saccenti, Edoardo; Timmerman, Marieke E.
2016-01-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investi-gated using multivariate statistical
Foley, Brett Patrick
2010-01-01
The 3PL model is a flexible and widely used tool in assessment. However, it suffers from limitations due to its need for large sample sizes. This study introduces and evaluates the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation through a simulation study. Data are augmented using…
Effects of sample size on estimation of rainfall extremes at high temperatures
Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik
2017-09-01
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
Mixed modeling and sample size calculations for identifying housekeeping genes.
Dai, Hongying; Charnigo, Richard; Vyhlidal, Carrie A; Jones, Bridgette L; Bhandary, Madhusudan
2013-08-15
Normalization of gene expression data using internal control genes that have biologically stable expression levels is an important process for analyzing reverse transcription polymerase chain reaction data. We propose a three-way linear mixed-effects model to select optimal housekeeping genes. The mixed-effects model can accommodate multiple continuous and/or categorical variables with sample random effects, gene fixed effects, systematic effects, and gene by systematic effect interactions. We propose using the intraclass correlation coefficient among gene expression levels as the stability measure to select housekeeping genes that have low within-sample variation. Global hypothesis testing is proposed to ensure that selected housekeeping genes are free of systematic effects or gene by systematic effect interactions. A gene combination with the highest lower bound of 95% confidence interval for intraclass correlation coefficient and no significant systematic effects is selected for normalization. Sample size calculation based on the estimation accuracy of the stability measure is offered to help practitioners design experiments to identify housekeeping genes. We compare our methods with geNorm and NormFinder by using three case studies. A free software package written in SAS (Cary, NC, U.S.A.) is available at http://d.web.umkc.edu/daih under software tab. Copyright © 2013 John Wiley & Sons, Ltd.
Effects of sample size on estimation of rainfall extremes at high temperatures
Directory of Open Access Journals (Sweden)
B. Boessenkool
2017-09-01
Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
Bayesian assurance and sample size determination in the process validation life-cycle.
Faya, Paul; Seaman, John W; Stamey, James D
2017-01-01
Validation of pharmaceutical manufacturing processes is a regulatory requirement and plays a key role in the assurance of drug quality, safety, and efficacy. The FDA guidance on process validation recommends a life-cycle approach which involves process design, qualification, and verification. The European Medicines Agency makes similar recommendations. The main purpose of process validation is to establish scientific evidence that a process is capable of consistently delivering a quality product. A major challenge faced by manufacturers is the determination of the number of batches to be used for the qualification stage. In this article, we present a Bayesian assurance and sample size determination approach where prior process knowledge and data are used to determine the number of batches. An example is presented in which potency uniformity data is evaluated using a process capability metric. By using the posterior predictive distribution, we simulate qualification data and make a decision on the number of batches required for a desired level of assurance.
A bootstrap test for comparing two variances: simulation of size and power in small samples.
Sun, Jiajing; Chernick, Michael R; LaBudde, Robert A
2011-11-01
An F statistic was proposed by Good and Chernick ( 1993 ) in an unpublished paper, to test the hypothesis of the equality of variances from two independent groups using the bootstrap; see Hall and Padmanabhan ( 1997 ), for a published reference where Good and Chernick ( 1993 ) is discussed. We look at various forms of bootstrap tests that use the F statistic to see whether any or all of them maintain the nominal size of the test over a variety of population distributions when the sample size is small. Chernick and LaBudde ( 2010 ) and Schenker ( 1985 ) showed that bootstrap confidence intervals for variances tend to provide considerably less coverage than their theoretical asymptotic coverage for skewed population distributions such as a chi-squared with 10 degrees of freedom or less or a log-normal distribution. The same difficulties may be also be expected when looking at the ratio of two variances. Since bootstrap tests are related to constructing confidence intervals for the ratio of variances, we simulated the performance of these tests when the population distributions are gamma(2,3), uniform(0,1), Student's t distribution with 10 degrees of freedom (df), normal(0,1), and log-normal(0,1) similar to those used in Chernick and LaBudde ( 2010 ). We find, surprisingly, that the results for the size of the tests are valid (reasonably close to the asymptotic value) for all the various bootstrap tests. Hence we also conducted a power comparison, and we find that bootstrap tests appear to have reasonable power for testing equivalence of variances.
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is the global random sampling and the other six are the stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced and each sampling size contained 3000 replicates. Under each sampling size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sampling size increasing. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure the estimated correlation coefficients with REs and CVs ≤10%. Comparing with all sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure the estimated correlation coefficients with REs and CVs ≤10%. This suggested that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for the optimal SWC sampling design.
Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M
2013-02-01
Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer's Disease Neuroimaging Initiative (ADNI-1) from the first phase of Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Shaukat S. Shahid
2016-06-01
Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50 on the eigenvalues and eigenvectors resulting from principal component analysis (PCA. For each sample size, 100 bootstrap samples were drawn from environmental data matrix pertaining to water quality variables (p = 22 of a small data set comprising of 55 samples (stations from where water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of first 6 eigenvectors and first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis using Ward’s method that does not require any stringent distributional assumptions.
Li, Ya; Fu, Qiang; Liu, Meng; Jiao, Yuan-Yuan; Du, Wei; Yu, Chong; Liu, Jing; Chang, Chun; Lu, Jian
2012-01-01
In order to prepare a high capacity packing material for solid-phase extraction with specific recognition ability of trace ractopamine in biological samples, uniformly-sized, molecularly imprinted polymers (MIPs) were prepared by a multi-step swelling and polymerization method using methacrylic acid as a functional monomer, ethylene glycol dimethacrylate as a cross-linker, and toluene as a porogen respectively. Scanning electron microscope and specific surface area were employed to identify the characteristics of MIPs. Ultraviolet spectroscopy, Fourier transform infrared spectroscopy, Scatchard analysis and kinetic study were performed to interpret the specific recognition ability and the binding process of MIPs. The results showed that, compared with other reports, MIPs synthetized in this study showed high adsorption capacity besides specific recognition ability. The adsorption capacity of MIPs was 0.063 mmol/g at 1 mmol/L ractopamine concentration with the distribution coefficient 1.70. The resulting MIPs could be used as solid-phase extraction materials for separation and enrichment of trace ractopamine in biological samples. PMID:29403774
Directory of Open Access Journals (Sweden)
Łącka Katarzyna
2016-03-01
Full Text Available Introduction: The aim of this study was to propose the optimal methodology for stallion semen morphology analysis while taking into consideration the staining method, the microscopic techniques, and the workload generated by a number of samples. Material and Methods: Ejaculates from eight pure-bred Arabian horses were tested microscopically for the incidence of morphological defects in the spermatozoa. Two different staining methods (eosin-nigrosin and eosin-gentian dye, two different techniques of microscopic analysis (1000× and 400× magnifications, and two sample sizes (200 and 500 spermatozoa were used. Results: Well-formed spermatozoa and those with major and minor defects according to Blom’s classification were identified. The applied staining methods gave similar results and could be used in stallion sperm morphology analysis. However, the eosin-nigrosin method was more recommendable, because it allowed to limit the number of visible artefacts without hindering the identification of protoplasm drops and enables the differentiation of living and dead spermatozoa. Conclusion: The applied microscopic techniques proved to be equally efficacious. Therefore, it is practically possible to opt for the simpler and faster 400x technique of analysing sperm morphology to examine stallion semen. We also found that the number of spermatozoa clearly affects the results of sperm morphology evaluation. Reducing the number of spermatozoa from 500 to 200 causes a decrease in the percentage of spermatozoa identified as normal and an increase in the percentage of spermatozoa determined as morphologically defective.
Weighted piecewise LDA for solving the small sample size problem in face verification.
Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis
2007-03-01
A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
DEFF Research Database (Denmark)
Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten
2009-01-01
/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study...... of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced...
Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples.
Lozano-Ramos, Inés; Bancu, Ioana; Oliveira-Tercero, Anna; Armengol, María Pilar; Menezes-Neto, Armando; Del Portillo, Hernando A; Lauzurica-Valdemoros, Ricardo; Borràs, Francesc E
2015-01-01
Renal biopsy is the gold-standard procedure to diagnose most of renal pathologies. However, this invasive method is of limited repeatability and often describes an irreversible renal damage. Urine is an easily accessible fluid and urinary extracellular vesicles (EVs) may be ideal to describe new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them contain a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC) as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed by protein content and flow cytometry to determine the presence of tetraspanin markers (CD63 and CD9). The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM) and nanoparticle tracking analysis revealing the presence of EVs.When analysed by sodium dodecyl sulphate-polyacrylamide gel electrophoresis, tetraspanin-peak fractions from urine concentrated samples contained multiple bands but the main urine proteins (such as Tamm-Horsfall protein) were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species.To summarize, our results demonstrated that concentrated urine followed by SEC is a suitable option to isolate EVs with low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches.
Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples
Directory of Open Access Journals (Sweden)
Inés Lozano-Ramos
2015-05-01
Full Text Available Renal biopsy is the gold-standard procedure to diagnose most of renal pathologies. However, this invasive method is of limited repeatability and often describes an irreversible renal damage. Urine is an easily accessible fluid and urinary extracellular vesicles (EVs may be ideal to describe new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them contain a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed by protein content and flow cytometry to determine the presence of tetraspanin markers (CD63 and CD9. The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM and nanoparticle tracking analysis revealing the presence of EVs.When analysed by sodium dodecyl sulphate–polyacrylamide gel electrophoresis, tetraspanin-peak fractions from urine concentrated samples contained multiple bands but the main urine proteins (such as Tamm–Horsfall protein were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species.To summarize, our results demonstrated that concentrated urine followed by SEC is a suitable option to isolate EVs with low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches.
Clark, Timothy; Berger, Ursula; Mansmann, Ulrich
2013-03-21
To assess the completeness of reporting of sample size determinations in unpublished research protocols and to develop guidance for research ethics committees and for statisticians advising these committees. Review of original research protocols. Unpublished research protocols for phase IIb, III, and IV randomised clinical trials of investigational medicinal products submitted to research ethics committees in the United Kingdom during 1 January to 31 December 2009. Completeness of reporting of the sample size determination, including the justification of design assumptions, and disagreement between reported and recalculated sample size. 446 study protocols were reviewed. Of these, 190 (43%) justified the treatment effect and 213 (48%) justified the population variability or survival experience. Only 55 (12%) discussed the clinical importance of the treatment effect sought. Few protocols provided a reasoned explanation as to why the design assumptions were plausible for the planned study. Sensitivity analyses investigating how the sample size changed under different design assumptions were lacking; six (1%) protocols included a re-estimation of the sample size in the study design. Overall, 188 (42%) protocols reported all of the information to accurately recalculate the sample size; the assumed withdrawal or dropout rate was not given in 177 (40%) studies. Only 134 of the 446 (30%) sample size calculations could be accurately reproduced. Study size tended to be over-estimated rather than under-estimated. Studies with non-commercial sponsors justified the design assumptions used in the calculation more often than studies with commercial sponsors but less often reported all the components needed to reproduce the sample size calculation. Sample sizes for studies with non-commercial sponsors were less often reproduced. Most research protocols did not contain sufficient information to allow the sample size to be reproduced or the plausibility of the design assumptions to
Italian retail gasoline activities: inadequate distribution network
International Nuclear Information System (INIS)
Verde, Stefano
2005-01-01
It is common belief that competition in the Italian retail gasoline activities is hindered by oil companies' collusive behaviour. However, when developing a broader analysis of the sector, low efficiency and scarce competition could results as the consequences coming from an inadequate distribution network and from the recognition of international markets and focal point [it
Barriers to Mammography among Inadequately Screened Women
Stoll, Carolyn R. T.; Roberts, Summer; Cheng, Meng-Ru; Crayton, Eloise V.; Jackson, Sherrill; Politi, Mary C.
2015-01-01
Mammography use has increased over the past 20 years, yet more than 30% of women remain inadequately screened. Structural barriers can deter individuals from screening, however, cognitive, emotional, and communication barriers may also prevent mammography use. This study sought to identify the impact of number and type of barriers on mammography…
Radiologists' responses to inadequate referrals
Energy Technology Data Exchange (ETDEWEB)
Lysdahl, Kristin Bakke [Oslo University College, Faculty of Health Sciences, Oslo (Norway); University of Oslo, Section for Medical Ethics, Faculty of Medicine, P.O. Box 1130, Blindern, Oslo (Norway); Hofmann, Bjoern Morten [University of Oslo, Section for Medical Ethics, Faculty of Medicine, P.O. Box 1130, Blindern, Oslo (Norway); Gjoevik University College, Faculty of Health Care and Nursing, Gjoevik (Norway); Espeland, Ansgar [Haukeland University Hospital, Department of Radiology, Bergen (Norway); University of Bergen, Section for Radiology, Department of Surgical Sciences, Bergen (Norway)
2010-05-15
To investigate radiologists' responses to inadequate imaging referrals. A survey was mailed to Norwegian radiologists; 69% responded. They graded the frequencies of actions related to referrals with ambiguous indications or inappropriate examination choices and the contribution of factors preventing and not preventing an examination of doubtful usefulness from being performed as requested. Ninety-five percent (344/361) reported daily or weekly actions related to inadequate referrals. Actions differed among subspecialties. The most frequent were contacting the referrer to clarify the clinical problem and checking test results/information in the medical records. Both actions were more frequent among registrars than specialists and among hospital radiologists than institute radiologists. Institute radiologists were more likely to ask the patient for additional information and to examine the patient clinically. Factors rated as contributing most to prevent doubtful examinations were high risk of serious complications/side effects, high radiation dose and low patient age. Factors facilitating doubtful examinations included respect for the referrer's judgment, patient/next-of-kin wants the examination, patient has arrived, unreachable referrer, and time pressure. In summary, radiologists facing inadequate referrals considered patient safety and sought more information. Vetting referrals on arrival, easier access to referring clinicians, and time for radiologists to handle inadequate referrals may contribute to improved use of imaging. (orig.)
Financial incentives are inadequate for most companies
Indian Academy of Sciences (India)
Financial incentives are inadequate for most companies. market far less lucrative than for other diseases, which results in chronic underinvestment; reduced investment in TB drug R&D,. Pfizer withdrawal from TB R&D; AstraZeneca abandon TB R&D & close site; Novartis pull out; 4/22 Big Pharma producing antibacterials ...
Lihong, Huang; Jianling, Bai; Hao, Yu; Feng, Chen
2017-06-20
Sample size re-estimation is essential in oncology studies. However, the use of blinded sample size reassessment for survival data has been rarely reported. Based on the density function of the exponential distribution, an expectation-maximization (EM) algorithm of the hazard ratio was derived, and several simulation studies were used to verify its applications. The method had obvious variation in the hazard ratio estimates and overestimation for the relatively small hazard ratios. Our studies showed that the stability of the EM estimation results directly correlated with the sample size, the convergence of the EM algorithm was impacted by the initial values, and a balanced design produced the best estimates. No reliable blinded sample size re-estimation inference can be made in our studies, but the results provide useful information to steer the practitioners in this field from repeating the same endeavor..
DEFF Research Database (Denmark)
Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.
2013-01-01
measure of the data heterogeneity, has been used to modify formulae for individual sample size estimation. However, subgroups of animals sharing common characteristics, may exhibit excessively less or more heterogeneity. Hence, sample size estimates based on the ICC may not achieve the desired precision...... and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...
Directory of Open Access Journals (Sweden)
Elsa Tavernier
Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT. In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review. Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power was 90%. Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
Tang, Yongqiang
2015-01-01
A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which is easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates, and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
Directory of Open Access Journals (Sweden)
Christopher Ryan Penton
2016-06-01
Full Text Available We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.
Evidence Report: Risk Factor of Inadequate Nutrition
Smith, Scott M.; Zwart, Sara R.; Heer, Martina
2015-01-01
The importance of nutrition in exploration has been documented repeatedly throughout history, where, for example, in the period between Columbus' voyage in 1492 and the invention of the steam engine, scurvy resulted in more sailor deaths than all other causes of death combined. Because nutrients are required for the structure and function of every cell and every system in the body, defining the nutrient requirements for spaceflight and ensuring provision and intake of those nutrients are primary issues for crew health and mission success. Unique aspects of nutrition during space travel include the overarching physiological adaptation to weightlessness, psychological adaptation to extreme and remote environments, and the ability of nutrition and nutrients to serve as countermeasures to ameliorate the negative effects of spaceflight on the human body. Key areas of clinical concern for long-duration spaceflight include loss of body mass (general inadequate food intake), bone and muscle loss, cardiovascular and immune system decrements, increased radiation exposure and oxidative stress, vision and ophthalmic changes, behavior and performance, nutrient supply during extravehicular activity, and general depletion of body nutrient stores because of inadequate food supply, inadequate food intake, increased metabolism, and/or irreversible loss of nutrients. These topics are reviewed herein, based on the current gap structure.
Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws
Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.
2009-04-01
Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow and with each bump of the order of 10 2 to 10 3 m in length and 10 1 m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/ W) for a large sample. Mean L, is found to be 629 m ( n = 58,983), mean W is 209 m and mean E is 2.9 ( n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r2 = 0.48) and that is approximately W = 7 L 1/2 when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E- W
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F,G,H,.... In practice, the analyst`s knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges on has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Frochet inequalities permit these probability intervals to be used in calculations that address the second source of imprecision, in many cases, in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduces by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
Sample size requirements to detect gene-environment interactions in genome-wide association studies.
Murcray, Cassandra E; Lewinger, Juan Pablo; Conti, David V; Thomas, Duncan C; Gauderman, W James
2011-04-01
Many complex diseases are likely to be a result of the interplay of genes and environmental exposures. The standard analysis in a genome-wide association study (GWAS) scans for main effects and ignores the potentially useful information in the available exposure data. Two recently proposed methods that exploit environmental exposure information involve a two-step analysis aimed at prioritizing the large number of SNPs tested to highlight those most likely to be involved in a GE interaction. For example, Murcray et al. ([2009] Am J Epidemiol 169:219–226) proposed screening on a test that models the G-E association induced by an interaction in the combined case-control sample. Alternatively, Kooperberg and LeBlanc ([2008] Genet Epidemiol 32:255–263) suggested screening on genetic marginal effects. In both methods, SNPs that pass the respective screening step at a pre-specified significance threshold are followed up with a formal test of interaction in the second step. We propose a hybrid method that combines these two screening approaches by allocating a proportion of the overall genomewide significance level to each test. We show that the Murcray et al. approach is often the most efficient method, but that the hybrid approach is a powerful and robust method for nearly any underlying model. As an example, for a GWAS of 1 million markers including a single true disease SNP with minor allele frequency of 0.15, and a binary exposure with prevalence 0.3, the Murcray, Kooperberg and hybrid methods are 1.90, 1.27, and 1.87 times as efficient, respectively, as the traditional case-control analysis to detect an interaction effect size of 2.0.
Duncanson, L.; Dubayah, R.
2015-12-01
Lidar remote sensing is widely applied for mapping forest carbon stocks, and technological advances have improved our ability to capture structural details from forests, even resolving individual trees. Despite these advancements, the accuracy of forest aboveground biomass models remains limited by the quality of field estimates of biomass. The accuracies of field estimates are inherently dependent on the accuracy of the allometric equations used to relate measurable attributes to biomass. These equations are calibrated with relatively small samples of often spatially clustered trees. This research focuses on one of many issues involving allometric equations - understanding how sensitive allometric parameters are to the sample sizes used to fit them. We capitalize on recent advances in lidar remote sensing to extract individual tree structural information from six high-resolution airborne lidar datasets in the United States. We remotely measure millions of tree heights and crown radii, and fit allometric equations to the relationship between tree height and radius at a 'population' level, in each site. We then extract samples from our tree database, and build allometries on these smaller samples of trees, with varying sample sizes. We show that for the allometric relationship between tree height and crown radius, small sample sizes produce biased allometric equations that overestimate height for a given crown radius. We extend this analysis using translations from the literature to address potential implications for biomass, showing that site-level biomass may be greatly overestimated when applying allometric equations developed with the typically small sample sizes used in popular allometric equations for biomass.
Hoefgen, Barbara; Schulze, Thomas G; Ohlraun, Stephanie; von Widdern, Olrik; Höfels, Susanne; Gross, Magdalena; Heidmann, Vivien; Kovalenko, Svetlana; Eckermann, Anita; Kölsch, Heike; Metten, Martin; Zobel, Astrid; Becker, Tim; Nöthen, Markus M; Propping, Peter; Heun, Reinhard; Maier, Wolfgang; Rietschel, Marcella
2005-02-01
Several lines of evidence indicate that abnormalities in the functioning of the central serotonergic system are involved in the pathogenesis of affective illness. A 44-base-pair insertion/deletion polymorphism in the 5' regulatory region of the serotonin transporter gene (5-HTTLPR), which influences expression of the serotonin transporter, has been the focus of intensive research since an initial report on an association between 5-HTTLPR and depression-related personality traits. Consistently replicated evidence for an involvement of this polymorphism in the etiology of mood disorders, particularly in major depressive disorder (MDD), remains scant. We assessed a potential association between 5-HTTLPR and MDD, using the largest reported sample to date (466 patients, 836 control subjects). Individuals were all of German descent. Patients were systematically recruited from consecutive inpatient admissions. Control subjects were drawn from random lists of the local Census Bureau and screened for psychiatric disorders. The short allele of 5-HTTLPR was significantly more frequent in patients than in control subjects (45.5% vs. 39.9%; p = .006; odds ratio = 1.26). These results support an involvement of 5-HTTLPR in the etiology of MDD. They also demonstrate that the detection of small genetic effects requires very large and homogenous samples.
Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick
2015-04-17
Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.
International Nuclear Information System (INIS)
Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.
2010-01-01
Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2013-04-15
Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L
2014-01-01
The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I) was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSDsampling in the same region as the RSDanalysis for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit
Grain size of loess and paleosol samples: what are we measuring?
Varga, György; Kovács, János; Szalai, Zoltán; Újvári, Gábor
2017-04-01
Particle size falling into a particularly narrow range is among the most important properties of windblown mineral dust deposits. Therefore, various aspects of aeolian sedimentation and post-depositional alterations can be reconstructed only from precise grain size data. Present study is aimed at (1) reviewing grain size data obtained from different measurements, (2) discussing the major reasons for disagreements between data obtained by frequently applied particle sizing techniques, and (3) assesses the importance of particle shape in particle sizing. Grain size data of terrestrial aeolian dust deposits (loess and paleosoil) were determined by laser scattering instruments (Fritsch Analysette 22 Microtec Plus, Horiba Partica La-950 v2 and Malvern Mastersizer 3000 with a Hydro Lv unit), while particles size and shape distributions were acquired by Malvern Morphologi G3-ID. Laser scattering results reveal that the optical parameter settings of the measurements have significant effects on the grain size distributions, especially for the fine-grained fractions (slide with a consistent orientation with their largest area facing to the camera. However, this is only one outcome of infinite possible projections of a three-dimensional object and it cannot be regarded as a representative one. The third (height) dimension of the particles remains unknown, so the volume-based weightings are fairly dubious in the case of platy particles. Support of the National Research, Development and Innovation Office (Hungary) under contract NKFI 120620 is gratefully acknowledged. It was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.
DEFF Research Database (Denmark)
Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.
2008-01-01
OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... information presented in the protocol and the publication. RESULTS: Only 11/62 trials described existing sample size calculations fully and consistently in both the protocol and the publication. The method of handling protocol deviations was described in 37 protocols and 43 publications. The method...
Bi, Ran; Liu, Peng
2016-03-31
RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey, PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J
2009-06-01
The combined action of two lepidoteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae),causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D < or = 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different economic thresholds (ETs) for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in the classification certainty with increasing sample sizes could be seen through the increasing steepness of operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
Diel differences in 0+ fish samples: effect of river size and habitat
Czech Academy of Sciences Publication Activity Database
Janáč, Michal; Jurajda, Pavel
2013-01-01
Roč. 29, č. 1 (2013), s. 90-98 ISSN 1535-1459 R&D Projects: GA MŠk LC522 Institutional research plan: CEZ:AV0Z60930519 Keywords : young-of-the-year fish * diurnal * nocturnal * habitat complexity * stream size Subject RIV: EG - Zoology Impact factor: 1.971, year: 2013
General power and sample size calculations for high-dimensional genomic data
van Iterson, M.; van de Wiel, M.; Boer, J.M.; Menezes, R.
2013-01-01
In the design of microarray or next-generation sequencing experiments it is crucial to choose the appropriate number of biological replicates. As often the number of differentially expressed genes and their effect sizes are small and too few replicates will lead to insufficient power to detect
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Family size, birth order, and intelligence in a large South American sample.
Velandia, W; Grandon, G M; Page, E B
1978-01-01
The confluence theory, which hypothesizes a relationship between intellectual development birth order, and family size, was examined in a colombian study of more than 36,000 college applicants. The results of the study did not support the confluence theory. The confluence theory states that the intellectual development of a child is related to average mental age of the members of his family at the time of his birth. The mental age of the parents is always assigned a value of 30 and siblings are given scores equivalent to their chronological age at the birth of the subject. Therefore, the average mental age of family members for a 1st born child is 30, or 60 divided by 2. If a subject is born into a family consisting of 2 parents and a 6-year old sibling, the average mental age of family members tends, therefore, to decrease with each birth order. The hypothesis derived from the confluence theory states that there is a positive relationship between average mental age of a subject's family and the subject's performance on intelligence tests. In the Colombian study, data on family size, birth order and socioeconomic status was derived from college application forms. Intelligence test scores for each subject was obtained from college entrance exams. The mental age of each applicant's family at the time of the applicant's birth was calculated. Multiple correlation analysis and path analysis were used to assess the relationship. Results were 1) the test scores of subjects from families with 2,3,4, and 5 children were higher than test scores of the 1st born subjects; 2) the rank order of intelligence by family size was 3,4,5,2,6,1 instead of the hypothesized 1,2,3,4,5,6; and 3) only 1% of the variability in test scores was explained by the variables of birth order and family size. Further analysis indicated that socioeconomic status was a far more powerful explanatory variable than family size.
Directory of Open Access Journals (Sweden)
Cynthia Stretch
Full Text Available Top differentially expressed gene lists are often inconsistent between studies and it has been suggested that small sample sizes contribute to lack of reproducibility and poor prediction accuracy in discriminative models. We considered sex differences (69♂, 65 ♀ in 134 human skeletal muscle biopsies using DNA microarray. The full dataset and subsamples (n = 10 (5 ♂, 5 ♀ to n = 120 (60 ♂, 60 ♀ thereof were used to assess the effect of sample size on the differential expression of single genes, gene rank order and prediction accuracy. Using our full dataset (n = 134, we identified 717 differentially expressed transcripts (p<0.0001 and we were able predict sex with ~90% accuracy, both within our dataset and on external datasets. Both p-values and rank order of top differentially expressed genes became more variable using smaller subsamples. For example, at n = 10 (5 ♂, 5 ♀, no gene was considered differentially expressed at p<0.0001 and prediction accuracy was ~50% (no better than chance. We found that sample size clearly affects microarray analysis results; small sample sizes result in unstable gene lists and poor prediction accuracy. We anticipate this will apply to other phenotypes, in addition to sex.
Boef, Anna G C; Dekkers, Olaf M; Vandenbroucke, Jan P; le Cessie, Saskia
2014-11-01
Instrumental variable (IV) analysis is promising for estimation of therapeutic effects from observational data as it can circumvent unmeasured confounding. However, even if IV assumptions hold, IV analyses will not necessarily provide an estimate closer to the true effect than conventional analyses as this depends on the estimates' bias and variance. We investigated how estimates from standard regression (ordinary least squares [OLS]) and IV (two-stage least squares) regression compare on mean squared error (MSE). We derived an equation for approximation of the threshold sample size, above which IV estimates have a smaller MSE than OLS estimates. Next, we performed simulations, varying sample size, instrument strength, and level of unmeasured confounding. IV assumptions were fulfilled by design. Although biased, OLS estimates were closer on average to the true effect than IV estimates at small sample sizes because of their smaller variance. The threshold sample size above which IV analysis outperforms OLS regression depends on instrument strength and strength of unmeasured confounding but will usually be large given the typical moderate instrument strength in medical research. IV methods are of most value in large studies if considerable unmeasured confounding is likely and a strong and plausible instrument is available. Copyright © 2014 Elsevier Inc. All rights reserved.
Heymann, D.; Lakatos, S.; Walton, J. R.
1973-01-01
Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
Hans T. Schreuder; Jin-Mann S. Lin; John Teply
2000-01-01
The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...
Tang, Yongqiang
2017-05-25
We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
Basic distribution free identification tests for small size samples of environmental data
International Nuclear Information System (INIS)
Federico, A.G.; Musmeci, F.
1998-01-01
Testing two or more data sets for the hypothesis that they are sampled form the same population is often required in environmental data analysis. Typically the available samples have a small number of data and often then assumption of normal distributions is not realistic. On the other hand the diffusion of the days powerful Personal Computers opens new possible opportunities based on a massive use of the CPU resources. The paper reviews the problem introducing the feasibility of two non parametric approaches based on intrinsic equi probability properties of the data samples. The first one is based on a full re sampling while the second is based on a bootstrap approach. A easy to use program is presented. A case study is given based on the Chernobyl children contamination data [it
Measuring proteins with greater speed and resolution while reducing sample size
Hsieh, Vincent H.; Wyatt, Philip J.
2017-01-01
A multi-angle light scattering (MALS) system, combined with chromatographic separation, directly measures the absolute molar mass, size and concentration of the eluate species. The measurement of these crucial properties in solution is essential in basic macromolecular characterization and all research and production stages of bio-therapeutic products. We developed a new MALS methodology that has overcome the long-standing, stubborn barrier to microliter-scale peak volumes and achieved the hi...
Flaw-size measurement in a weld samples by ultrasonic frequency analysis
International Nuclear Information System (INIS)
Adler, L.; Cook, K.V.; Whaley, H.L. Jr.; McClung, R.W.
1975-01-01
An ultrasonic frequency-analysis technique was developed and applies to characterize flaws in an 8-in. (203-mm) thick heavy-section steel weld specimen. The technique applies a multitransducer system. The spectrum of the received broad-band signal is frequency analyzed at two different receivers for each of the flaws. From the two spectra, the size and orientation of the flaw are determined by the use of an analytic model proposed earlier. (auth)
Buss, Daniel F; Borges, Erika L
2008-01-01
This study is part of the effort to test and to establish Rapid Bioassessment Protocols (RBP) using benthic macroinvertebrates as indicators of the water quality of wadeable streams in south-east Brazil. We compared the cost-effectiveness of sampling devices frequently used in RBPs, Surber and Kick-net samplers, and of three mesh sizes (125, 250 and 500 microm). A total of 126,815 benthic macroinvertebrates were collected, representing 57 families. Samples collected with Kick method had significantly higher richness and BMWP scores in relation to Surber, but no significant increase in the effort, measured by the necessary time to process samples. No significant differences were found between samplers considering the cost/effectiveness ratio. Considering mesh sizes, significantly higher abundance and time for processing samples were necessary for finer meshes, but no significant difference were found considering taxa richness or BMWP scores. As a consequence, the 500 microm mesh had better cost/effectiveness ratios. Therefore, we support the use of a kick-net with a mesh size of 500 microm for macroinvertebrate sampling in RBPs using family level in streams of similar characteristics in Brazil.
International Nuclear Information System (INIS)
John L. Bowen; Rowena Gonzalez; David S. Shafer
2001-01-01
As part of the preliminary site characterization conducted for Project 57, soils samples were collected for separation into several size-fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it would represent anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculation and corrective action-level determinations for future land-use scenarios at the site
Olberg, Britta; Perleth, Matthias; Felgentraeger, Katja; Schulz, Sandra; Busse, Reinhard
2017-01-01
The aim of this study was to assess the quality of reporting sample size calculation and underlying design assumptions in pivotal trials of high-risk medical devices (MDs) for neurological conditions. Systematic review of research protocols for publicly registered randomized controlled trials (RCTs). In the absence of a published protocol, principal investigators were contacted for additional data. To be included, trials had to investigate a high-risk MD, registered between 2005 and 2015, with indications stroke, headache disorders, and epilepsy as case samples within central nervous system diseases. Extraction of key methodological parameters for sample size calculation was performed independently and peer-reviewed. In a final sample of seventy-one eligible trials, we collected data from thirty-one trials. Eighteen protocols were obtained from the public domain or principal investigators. Data availability decreased during the extraction process, with almost all data available for stroke-related trials. Of the thirty-one trials with sample size information available, twenty-six reported a predefined calculation and underlying assumptions. Justification was given in twenty and evidence for parameter estimation in sixteen trials. Estimates were most often based on previous research, including RCTs and observational data. Observational data were predominantly represented by retrospective designs. Other references for parameter estimation indicated a lower level of evidence. Our systematic review of trials on high-risk MDs confirms previous research, which has documented deficiencies regarding data availability and a lack of reporting on sample size calculation. More effort is needed to ensure both relevant sources, that is, original research protocols, to be publicly available and reporting requirements to be standardized.
Usami, Satoshi
2014-12-01
Recent years have shown increased awareness of the importance of sample size determination in experimental research. Yet effective and convenient methods for sample size determination, especially in longitudinal experimental design, are still under development, and application of power analysis in applied research remains limited. This article presents a convenient method for sample size determination in longitudinal experimental research using a multilevel model. A fundamental idea of this method is transformation of model parameters (level 1 error variance [σ(2)], level 2 error variances [τ 00, τ 11] and its covariance [τ 01, τ 10], and a parameter representing experimental effect [δ]) into indices (reliability of measurement at the first time point [ρ 1], effect size at the last time point [Δ T ], proportion of variance of outcomes between the first and the last time points [k], and level 2 error correlation [r]) that are intuitively understandable and easily specified. To foster more convenient use of power analysis, numerical tables are constructed that refer to ANOVA results to investigate the influence on statistical power by respective indices.
Size selectivity of standardized multimesh gillnets in sampling coarse European species
Czech Academy of Sciences Publication Activity Database
Prchalová, Marie; Kubečka, Jan; Říha, Milan; Mrkvička, Tomáš; Vašek, Mojmír; Jůza, Tomáš; Kratochvíl, Michal; Peterka, Jiří; Draštík, Vladislav; Křížek, J.
2009-01-01
Roč. 96, č. 1 (2009), s. 51-57 ISSN 0165-7836. [Fish Stock Assessment Methods for Lakes and Reservoirs: Towards the true picture of fish stock. České Budějovice, 11.09.2007-15.09.2007] R&D Projects: GA AV ČR(CZ) 1QS600170504; GA ČR(CZ) GA206/07/1392 Institutional research plan: CEZ:AV0Z60170517 Keywords : gillnet * seine * size selectivity * roach * perch * rudd Subject RIV: EH - Ecology, Behaviour Impact factor: 1.531, year: 2009
Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav
2018-04-01
Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) PM fractions were collected by high volume cascade impactor in Prague city center. Particles were examined using electron microscopy and their elemental composition was determined by energy dispersive X-ray spectroscopy. Larger or smaller particles, not corresponding to the impaction cut points, were found in all fractions, as they occur in agglomerates and are impacted according to their aerodynamic diameter. Elemental composition of particles in size-segregated fractions varied significantly. Ns-soot occurred in all size fractions. Metallic nanospheres were found in accumulation fractions, but not in ultrafine fraction where ns-soot, carbonaceous particles, and inorganic salts were identified. Dynamic light scattering was used to measure particle size distribution in water and in cell culture media. PM suspension of lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. Ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum as measured during 24 h.
Sampled-data L-infinity smoothing: fixed-size ARE solution with free hold function
Meinsma, Gjerrit; Mirkin, Leonid
The problem of estimating an analog signal from its noisy sampled measurements is studied in the L-infinity (induced L2-norm) framework. The main emphasis is placed on relaxing causality requirements. Namely, it is assumed that l future measurements are available to the estimator, which corresponds
Tu, Xinjun; Du, Xiaoxia; Singh, Vijay P.; Chen, Xiaohong; Du, Yiliang; Li, Kun
2017-11-01
Constructing a joint distribution of low flows between the donor and recipient basins and analyzing their joint risk are commonly required for implementing interbasin water transfer. In this study, daily streamflow data of bi-basin low flows were sampled at window sizes from 3 to183 days by using the annual minimum method. The stationarity of low flows was tested by a change point analysis and non-stationary low flows were reconstructed by using the moving mean method. Three bivariate Archimedean copulas and five common univariate distributions were applied to fit the joint and marginal distributions of bi-basin low flows. Then, by considering the window size of sampling low flows under environmental change, the change in the joint risk of interbasin water transfer was investigated. Results showed that the non-stationarity of low flows in the recipient basin at all window sizes was significant due to the regulation of water reservoirs. The general extreme value distribution was found to fit the marginal distributions of bi-basin low flows. Three Archimedean copulas satisfactorily fitted the joint distribution of bi-basin low flows and then the Frank copula was found to be the comparatively better. The moving mean method differentiated the location parameter of the GEV distribution, but did not differentiate the scale and shape parameters, and the copula parameters. Due to environmental change, in particular the regulation of water reservoirs in the recipient basin, the decrease of the joint synchronous risk of bi-basin water shortage was slight, but those of the synchronous assurance of water transfer from the donor were remarkable. With the enlargement of window size of sampling low flows, both the joint synchronous risk of bi-basin water shortage, and the joint synchronous assurance of water transfer from the donor basin when there was a water shortage in the recipient basin exhibited a decreasing trend, but their changes were with a slight fluctuation, in
In situ detection of small-size insect pests sampled on traps using multifractal analysis
Xia, Chunlei; Lee, Jang-Myung; Li, Yan; Chung, Bu-Keun; Chon, Tae-Soo
2012-02-01
We introduce a multifractal analysis for detecting the small-size pest (e.g., whitefly) images from a sticky trap in situ. An automatic attraction system is utilized for collecting pests from greenhouse plants. We applied multifractal analysis to segment action of whitefly images based on the local singularity and global image characteristics. According to the theory of multifractal dimension, the candidate blobs of whiteflies are initially defined from the sticky-trap image. Two schemes, fixed thresholding and regional minima obtainment, were utilized for feature extraction of candidate whitefly image areas. The experiment was conducted with the field images in a greenhouse. Detection results were compared with other adaptive segmentation algorithms. Values of F measuring precision and recall score were higher for the proposed multifractal analysis (96.5%) compared with conventional methods such as Watershed (92.2%) and Otsu (73.1%). The true positive rate of multifractal analysis was 94.3% and the false positive rate minimal level at 1.3%. Detection performance was further tested via human observation. The degree of scattering between manual and automatic counting was remarkably higher with multifractal analysis (R2=0.992) compared with Watershed (R2=0.895) and Otsu (R2=0.353), ensuring overall detection of the small-size pests is most feasible with multifractal analysis in field conditions.
Aznar, Ramón; Barahona, Francisco; Geiss, Otmar; Ponti, Jessica; José Luis, Tadeo; Barrero-Moreno, Josefa
2017-12-01
Single particle-inductively coupled plasma mass spectrometry (SP-ICPMS) is a promising technique able to generate the number based-particle size distribution (PSD) of nanoparticles (NPs) in aqueous suspensions. However, SP-ICPMS analysis is not consolidated as routine-technique yet and is not typically applied to real test samples with unknown composition. This work presents a methodology to detect, quantify and characterise the number-based PSD of Ag-NPs in different environmental aqueous samples (drinking and lake waters), aqueous samples derived from migration tests and consumer products using SP-ICPMS. The procedure is built from a pragmatic view and involves the analysis of serial dilutions of the original sample until no variation in the measured size values is observed while keeping particle counts proportional to the dilution applied. After evaluation of the analytical figures of merit, the SP-ICPMS method exhibited excellent linearity (r 2 >0.999) in the range (1-25) × 10 4 particlesmL -1 for 30, 50 and 80nm nominal size Ag-NPs standards. The precision in terms of repeatability was studied according to the RSDs of the measured size and particle number concentration values and a t-test (p = 95%) at the two intermediate concentration levels was applied to determine the bias of SP-ICPMS size values compared to reference values. The method showed good repeatability and an overall acceptable bias in the studied concentration range. The experimental minimum detectable size for Ag-NPs ranged between 12 and 15nm. Additionally, results derived from direct SP-ICPMS analysis were compared to the results conducted for fractions collected by asymmetric flow-field flow fractionation and supernatant fractions after centrifugal filtration. The method has been successfully applied to determine the presence of Ag-NPs in: lake water; tap water; tap water filtered by a filter jar; seven different liquid silver-based consumer products; and migration solutions (pure water and
Directory of Open Access Journals (Sweden)
John M Lachin
Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet, repeated 2-hour Mixed Meal Tolerance Tests (MMTT were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC of the C-peptide values. The natural log(x, log(x+1 and square-root (√x transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years and adults (18+ years. The sample size needed to detect a given relative (percentage difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1 and √x transformed values in terms of the original units of measurement (pmol/ml. Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab versus masked placebo. These results provide the information needed to
Ultrasonic detection and sizing of cracks in cast stainless steel samples
International Nuclear Information System (INIS)
Allidi, F.; Edelmann, X.; Phister, O.; Hoegberg, K.; Pers-Anderson, E.B.
1986-01-01
The test consisted of 15 samples of cast stainless steel, each with a weld. Some of the specimens were provided with artificially made thermal fatique cracks. The inspection was performed with the P-scan method. The investigations showed an improvement of recognizability relative to earlier investigations. One probe, the dual type, longitudinal wave 45 degrees, low frequence 0.5-1 MHz gives the best results. (G.B.)
9 CFR 417.6 - Inadequate HACCP Systems.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Inadequate HACCP Systems. 417.6... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.6 Inadequate HACCP Systems. A HACCP system may be found to be inadequate if: (a) The HACCP plan in operation does not meet the requirements set forth in...
International Nuclear Information System (INIS)
Polach, H.; Robertson, S.; Kaihola, L.
1982-01-01
Radiocarbon dating parameters, such as instrumental techniques used, dating precision achieved, sample size, cost and availability of equipment and, in more detail, the merit of small gas proportional counting systems are considered. It is shown that small counters capable of handling 10-100mg of carbon are a viable proposition in terms of achievable precision and in terms of sample turnover, if some 10 mini-counters are operated simultaneously within the same shield. After consideration of the factors affecting the performance of a small gas proportional system it is concluded that an automatic, labour saving, cost effective and efficient carbon dating system, based on some sixteen 10 ml-size counters operating in parallel, could be built using state-of-art knowledge and components
Sex determination by tooth size in a sample of Greek population.
Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C
2014-08-01
Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised of 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations and this might explain the reason for lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that there were a considerably higher percentage of females correctly classified than males. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as a reliable method as one would have expected for determining sex of human remains from a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.
Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi
2017-12-01
Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
DEFF Research Database (Denmark)
Shetty, Nisha; Min, Tai-Gi; Gislum, René
2011-01-01
The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub...... and radish data. The misclassification rates at optimal sample size were 8%, 6% and 7% for cabbage and 3%, 3% and 2% for radish respectively for random method (averaged for 10 iterations), DUPLEX and CADEX algorithms. This was similar to the misclassification rate of 6% and 2% for cabbage and radish obtained...
Fraley, R. Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) to provide accurate estimates of effects, (b) to produce literatures with low false positive rates, and (c) to lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in sample sizes and power of the studies they publish, with some journals consistently publishing higher power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
Fraley, R Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)-the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) to provide accurate estimates of effects, (b) to produce literatures with low false positive rates, and (c) to lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in sample sizes and power of the studies they publish, with some journals consistently publishing higher power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings.
Directory of Open Access Journals (Sweden)
Finch Stephen J
2005-04-01
Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter. We use a linear Taylor Series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected as a control (respectively, case. Power is verified by computer simulation. Results Our major findings are that: (i the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii as the disease prevalence approaches 0, the cost of misclassifying a unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
Tai, Bee-Choo; Grundy, Richard; Machin, David
2011-03-15
To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest. Copyright © 2011 Elsevier Inc. All rights reserved.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes
Directory of Open Access Journals (Sweden)
Danny J. Kelly
2005-01-01
Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms or low (100 nanograms amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real time, polymerase chain reaction assays. The correlation of low RNA derived gene signals to gene signals obtained from standard RNA was poor for less to moderately abundant genes. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference sample based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai
2015-09-16
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large pp small nn” paradigm, the traditional Hotelling’s T2T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of pp and nn for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when nn is moderate or large, but it is better when nn is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
Sampling to estimate population size and detect trends in Tricolored Blackbirds
Meese, Robert; Yee, Julie L.; Holyoak, Marcel
2015-01-01
The Tricolored Blackbird (Agelaius tricolor) is a medium-sized passerine that nests in the largest colonies of any North American landbird since the extinction of the passenger pigeon (Ectopistes migratorius) over 100 years ago (Beedy and Hamilton 1999). The species has a restricted range that occurs almost exclusively within California, with only a few hundred birds scattered in small groups in Oregon, Washington, Nevada, and northwestern Baja California, Mexico (Beedy and Hamilton 1999). Tricolored Blackbirds are itinerant breeders (i.e., breed more than once per year in different locations) and use a wide variety of nesting substrates (Hamilton 1998), many of which are ephemeral. They are also insect dependent during the breeding season, and reproductive success is strongly correlated with relative insect abundance (Meese 2013). Researchers have noted for decades that Tricolored Blackbird’s insect prey are highly variable in space and time; Payne (1969), for example, described the species as a grasshopper follower because they are preferred food items, and high grasshopper abundance is often associated with high reproductive success (Payne 1969, Meese 2013). Thus, the species’ basic reproductive strategy is tied to rather infrequent periods of relatively high insect abundance in some locations followed by much longer periods of range -wide relatively low insect abundance and poor reproductive success. Of course, anthropogenic factors such as habitat loss and insecticide use may be at least partly responsible for these patterns (Hallman et al. 2014, Airola et al. 2014).
Kikuchi, Takashi; Gittins, John
2009-08-15
It is necessary for the calculation of sample size to achieve the best balance between the cost of a clinical trial and the possible benefits from a new treatment. Gittins and Pezeshk developed an innovative (behavioral Bayes) approach, which assumes that the number of users is an increasing function of the difference in performance between the new treatment and the standard treatment. The better a new treatment, the more the number of patients who want to switch to it. The optimal sample size is calculated in this framework. This BeBay approach takes account of three decision-makers, a pharmaceutical company, the health authority and medical advisers. Kikuchi, Pezeshk and Gittins generalized this approach by introducing a logistic benefit function, and by extending to the more usual unpaired case, and with unknown variance. The expected net benefit in this model is based on the efficacy of the new drug but does not take account of the incidence of adverse reactions. The present paper extends the model to include the costs of treating adverse reactions and focuses on societal cost-effectiveness as the criterion for determining sample size. The main application is likely to be to phase III clinical trials, for which the primary outcome is to compare the costs and benefits of a new drug with a standard drug in relation to national health-care. Copyright 2009 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Jamshid Jamali
2017-01-01
Full Text Available Evaluating measurement equivalence (also known as differential item functioning (DIF is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
Dealing with large sample sizes: comparison of a new one spot dot blot method to western blot.
Putra, Sulistyo Emantoko Dwi; Tsuprykov, Oleg; Von Websky, Karoline; Ritter, Teresa; Reichetzeder, Christoph; Hocher, Berthold
2014-01-01
Western blot is the gold standard method to determine individual protein expression levels. However, western blot is technically difficult to perform in large sample sizes because it is a time consuming and labor intensive process. Dot blot is often used instead when dealing with large sample sizes, but the main disadvantage of the existing dot blot techniques, is the absence of signal normalization to a housekeeping protein. In this study we established a one dot two development signals (ODTDS) dot blot method employing two different signal development systems. The first signal from the protein of interest was detected by horseradish peroxidase (HRP). The second signal, detecting the housekeeping protein, was obtained by using alkaline phosphatase (AP). Inter-assay results variations within ODTDS dot blot and western blot and intra-assay variations between both methods were low (1.04-5.71%) as assessed by coefficient of variation. ODTDS dot blot technique can be used instead of western blot when dealing with large sample sizes without a reduction in results accuracy.
Directory of Open Access Journals (Sweden)
V. Indira
2015-03-01
Full Text Available Hydraulic brake in automobile engineering is considered to be one of the important components. Condition monitoring and fault diagnosis of such a component is very essential for safety of passengers, vehicles and to minimize the unexpected maintenance time. Vibration based machine learning approach for condition monitoring of hydraulic brake system is gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to get good classification accuracy. Descriptive statistical features have been used and the more contributing features have been selected by using C4.5 decision tree algorithm. The results of power analysis have also been verified using a decision tree algorithm namely, C4.5.
Inadequate Nutritional Status of Hospitalized Cancer Patients
Directory of Open Access Journals (Sweden)
Ali Alkan
2017-03-01
Full Text Available Objective: In oncology practice, nutrition and also metabolic activity are essential to support the nutritional status and prevent malignant cachexia. It is important to evaluate the patients and plan the maneuvers at the start of the therapy. The primary objective of the study is to define the nutritional status of hospitalized patients and the factors affecting it in order to define the most susceptible patients and maneuvers for better nutritional support. Methods: Patients hospitalized in oncology clinic for therapy were evaluated for food intake and nutritional status through structured interviews. The clinical properties, medical therapies, elements of nutritional support were noted and predictors of inadequate nutritional status (INS were analyzed. Results: Four hundred twenty three patients, between 16-82 years old (median: 52 were evaluated. Nearly half of the patients (185, 43% reported a better appetite at home than in hospital and declared that hospitalization is an important cause of loss of appetite (140/185, 75.6%. Presence of nausea/vomiting (N/V, depression, age less than 65 and use of non-steroidal anti-inflammatory drugs (NSAIDs were associated with increased risk of INS in hospitalized cancer patients. On the contrary, steroid medication showed a positive impact on nutritional status of cancer patients. Conclusion: N/V, younger age, presence of depression and NSAIDs medication were associated with INS in hospitalized cancer patients. Clinicians should pay more attention to this group of patients. In addition, unnecessary hospitalizations and medications that may disturb oral intake must be avoided. Corticosteroids are important tools for managing anorexia and INS.
Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.
2012-01-01
The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.
McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S
2016-10-01
The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study resulting in an increase from the planned sample size of 2500-3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke hazard ratios (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for higher versus lower systolic blood pressure target approached, but did not
Mikkelsen, Mark; Loo, Rachelle S; Puts, Nicolaas A J; Edden, Richard A E; Harris, Ashley D
2018-02-21
The relationships between scan duration, signal-to-noise ratio (SNR) and sample size must be considered and understood to design optimal GABA-edited magnetic resonance spectroscopy (MRS) studies. Simulations investigated the effects of signal averaging on SNR, measurement error and group-level variance against a known ground truth. Relative root mean square errors (measurement error) and coefficients of variation (group-level variance) were calculated. GABA-edited data from 18 participants acquired from five voxels were used to examine the relationships between scan duration, SNR and quantitative outcomes in vivo. These relationships were then used to determine the sample sizes required to observe different effect sizes. In both simulated and in vivo data, SNR increased with the square root of the number of averages. Both measurement error and group-level variance were shown to follow an inverse-square-root function, indicating no significant impact of cumulative artifacts. Comparisons between the first two-thirds of the data and the full dataset showed no statistical difference in group-level variance. There was, however, some variability across the five voxels depending on SNR, which impacted the sample sizes needed to detect group differences in specific brain regions. Typical scan durations can be reduced if taking into account a statistically acceptable amount of variance and the magnitudes of predicted effects. While scan duration in GABA-edited MRS has typically been considered in terms of SNR, it is more appropriate to think in terms of the amount of measurement error and group-level variance that provides sufficient statistical power. Copyright © 2018 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Akram, M.; Aftab, F.
2016-01-01
In the present study, fruits (drupes) were collected from Changa Manga Forest Plus Trees (CMF-PT), Changa Manga Forest Teak Stand (CMF-TS) and Punjab University Botanical Gardens (PUBG) and categorized into very large (= 17 mm dia.), large (12-16 mm dia.), medium (9-11 mm dia.) or small (6-8 mm dia.) fruit size grades. Fresh water as well as mechanical scarification and stratification were tested for breaking seed dormancy. Viability status of seeds was estimated by cutting test, X-rays and In vitro seed germination. Out of 2595 fruits from CMF-PT, 500 fruits were of very large grade. This fruit category also had highest individual fruit weight (0.58 g) with more number of 4-seeded fruits (5.29 percent) and fair germination potential (35.32 percent). Generally, most of the fruits were 1-seeded irrespective of size grades and sampling sites. Fresh water scarification had strong effect on germination (44.30 percent) as compared to mechanical scarification and cold stratification after 40 days of sowing. Similarly, sampling sites and fruit size grades also had significant influence on germination. Highest germination (82.33 percent) was obtained on MS (Murashige and Skoog) agar-solidified medium as compared to Woody Plant Medium (WPM) (69.22 percent). Seedlings from all the media were transferred to ex vitro conditions in the greenhouse and achieved highest survival (28.6 percent) from seedlings previously raised on MS agar-solidified medium after 40 days. There was an association between the studied parameters of teak seeds and the sampling sites and fruit size. (author)
Webb, Kristen; Allard, Marc
2010-02-01
Evolutionary and forensic studies commonly choose the mitochondrial control region as the locus for which to evaluate the domestic dog. However, the number of dogs that need to be sampled in order to represent the control region variation present in the worldwide population is yet to be determined. Following the methods of Pereira et al. (2004), we have demonstrated the importance of surveying the complete control region rather than only the popular left domain. We have also evaluated sample saturation in terms of the haplotype number and the number of polymorphisms within the control region. Of the most commonly cited evolutionary research, only a single study has adequately surveyed the domestic dog population, while all forensic studies have failed to meet the minimum values. We recommend that future studies consider dataset size when designing experiments and ideally sample both domains of the control region in an appropriate number of domestic dogs.
Neeson, Thomas M; Van Rijn, Itai; Mandelik, Yael
2013-07-01
Ecologists and paleontologists often rely on higher taxon surrogates instead of complete inventories of biological diversity. Despite their intrinsic appeal, the performance of these surrogates has been markedly inconsistent across empirical studies, to the extent that there is no consensus on appropriate taxonomic resolution (i.e., whether genus- or family-level categories are more appropriate) or their overall usefulness. A framework linking the reliability of higher taxon surrogates to biogeographic setting would allow for the interpretation of previously published work and provide some needed guidance regarding the actual application of these surrogates in biodiversity assessments, conservation planning, and the interpretation of the fossil record. We developed a mathematical model to show how taxonomic diversity, community structure, and sampling effort together affect three measures of higher taxon performance: the correlation between species and higher taxon richness, the relative shapes and asymptotes of species and higher taxon accumulation curves, and the efficiency of higher taxa in a complementarity-based reserve-selection algorithm. In our model, higher taxon surrogates performed well in communities in which a few common species were most abundant, and less well in communities with many equally abundant species. Furthermore, higher taxon surrogates performed well when there was a small mean and variance in the number of species per higher taxa. We also show that empirically measured species-higher-taxon correlations can be partly spurious (i.e., a mathematical artifact), except when the species accumulation curve has reached an asymptote. This particular result is of considerable practical interest given the widespread use of rapid survey methods in biodiversity assessment and the application of higher taxon methods to taxa in which species accumulation curves rarely reach an asymptote, e.g., insects.
Directory of Open Access Journals (Sweden)
Sebastian Wilhelm
2015-12-01
Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface. “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Hereby, two open key aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30% in comparison to measured values of 20% for natural syneresis.
Vlašić Tanasković, Jelena; Coucke, Wim; Leniček Krleža, Jasna; Vuković Rodriguez, Jadranka
2017-03-01
Laboratory evaluation through external quality assessment (EQA) schemes is often performed as 'peer group' comparison under the assumption that matrix effects influence the comparisons between results of different methods, for analytes where no commutable materials with reference value assignment are available. With EQA schemes that are not large but have many available instruments and reagent options for same analyte, homogenous peer groups must be created with adequate number of results to enable satisfactory statistical evaluation. We proposed a multivariate analysis of variance (MANOVA)-based test to evaluate heterogeneity of peer groups within the Croatian EQA biochemistry scheme and identify groups where further splitting might improve laboratory evaluation. EQA biochemistry results were divided according to instruments used per analyte and the MANOVA test was used to verify statistically significant differences between subgroups. The number of samples was determined by sample size calculation ensuring a power of 90% and allowing the false flagging rate to increase not more than 5%. When statistically significant differences between subgroups were found, clear improvement of laboratory evaluation was assessed before splitting groups. After evaluating 29 peer groups, we found strong evidence for further splitting of six groups. Overall improvement of 6% reported results were observed, with the percentage being as high as 27.4% for one particular method. Defining maximal allowable differences between subgroups based on flagging rate change, followed by sample size planning and MANOVA, identifies heterogeneous peer groups where further splitting improves laboratory evaluation and enables continuous monitoring for peer group heterogeneity within EQA schemes.
Canepari, Silvia; Perrino, Cinzia; Olivieri, Fabio; Astolfi, Maria Luisa
A study of the elemental composition and size distribution of atmospheric particulate matter and of its spatial and temporal variability has been conducted at two traffic sites and one urban background site in the area of Rome, Italy. Chemical analysis included the fractionation of 22 elements (Al, As, Ba, Ca, Cd, Co, Cr, Cu, Fe, Mg, Mn, Na, Ni, Pb, S, Sb, Si, Sn, Sr, Ti, Tl, V) into a water-extractable and a residual fraction. Size distribution analysis included measurements of aerosols in twelve size classes in the range 0.03-10 μm. The simultaneous determination of PM 10 and PM 2.5 at three sites during a 2-week study allowed the necessary evaluation of space and time concentration variations. The application of a chemical fractionation procedure to size-segregated samples proved to be a valuable approach for the characterisation of PM and for discriminating different emission sources. Extractable and residual fractions of the elements showed in fact different size distributions: for almost all elements the extractable fraction was mainly distributed in the fine particle size, while the residual fraction was in general predominant in the coarse size range. For some elements (As, Cd, Sb, Sn, V) the dimensional separation between the extractable fraction, almost quantitatively present in the fine mode particles, and the residual fraction, mainly distributed in the coarse mode particles, was almost quantitative. Under these conditions, the application of the chemical fractionation procedure to PM 10 samples allows a clear distinction between contributes originating from fine and coarse particle emission sources. The results related to PM (10-2.5) and PM 2.5 daily samples confirmed that chemical fractionation analysis increases the selectivity of most elements as source tracers. Extractable and residual fractions of As, Mg, Ni, Pb, S, Sn, Tl, Sb, Cd and V showed different time patterns and different spatial and size distributions, clearly indicating that the two
Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.
2017-10-01
The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, elongated, as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedos for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation. Otherwise we derive a geometric cross-section from a measured diameter. When radar imaging is available, the diameter was measured from the apparent range depth. However, when radar imaging was not available, we used the continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections.We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data with simulated Rasch model fitting 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500 were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less then or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signal misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Directory of Open Access Journals (Sweden)
Daniel Vasiliu
Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED. Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
Directory of Open Access Journals (Sweden)
Paillisson J.-M.
2011-05-01
Full Text Available The ecological importance of the red-swamp crayfish (Procambarus clarkii in the functioning of freshwater aquatic ecosystems is becoming more evident. It is important to know the limitations of sampling methods targeting this species, because accurate determination of population characteristics is required for predicting the ecological success of P. clarkii and its potential impacts on invaded ecosystems. In the current study, we addressed the question of trap efficiency by comparing population structure provided by eight trap devices (varying in number and position of entrances, mesh size, trap size and construction materials in three habitats (a pond, a reed bed and a grassland in a French marsh in spring 2010. Based on a large collection of P. clarkii (n = 2091, 272 and 213 respectively in the pond, reed bed and grassland habitats, we found that semi-cylindrical traps made from 5.5 mm mesh galvanized steel wire (SCG were the most efficient in terms of catch probability (96.7–100% compared to 15.7–82.8% depending on trap types and habitats and catch-per-unit effort (CPUE: 15.3, 6.0 and 5.1 crayfish·trap-1·24 h-1 compared to 0.2–4.4, 2.9 and 1.7 crayfish·trap-1·24 h-1 by the other types of fishing gear in the pond, reed bed and grassland respectively. The SCG trap was also the most effective for sampling all size classes, especially small individuals (carapace length \\hbox{$\\leqslant 30$} ⩽ 30 mm. Sex ratio was balanced in all cases. SCG could be considered as appropriate trapping gear to likely give more realistic information about P. clarkii population characteristics than many other trap types. Further investigation is needed to assess the catching effort required for ultimately proposing a standardised sampling method in a large range of habitats.
Inadequate control of world's radioactive sources
International Nuclear Information System (INIS)
2002-01-01
The radioactive materials needed to build a 'dirty bomb' can be found in almost any country in the world, and more than 100 countries may have inadequate control and monitoring programs necessary to prevent or even detect the theft of these materials. The IAEA points out that while radioactive sources number in the millions, only a small percentage have enough strength to cause serious radiological harm. It is these powerful sources that need to be focused on as a priority. In a significant recent development, the IAEA, working in collaboration with the United States Department of Energy (DOE) and the Russian Federation's Ministry for Atomic Energy (MINATOM), have established a tripartite working group on 'Securing and Managing Radioactive Sources'. Through its program to help countries improve their national infrastructures for radiation safety and security, the IAEA has found that more than 100 countries may have no minimum infrastructure in place to properly control radiation sources. However, many IAEA Member States - in Africa, Asia, Latin America, and Europe - are making progress through an IAEA project to strengthen their capabilities to control and regulate radioactive sources. The IAEA is also concerned about the over 50 countries that are not IAEA Member States (there are 134), as they do not benefit from IAEA assistance and are likely to have no regulatory infrastructure. The IAEA has been active in lending its expertise to search out and secure orphaned sources in several countries. More than 70 States have joined with the IAEA to collect and share information on trafficking incidents and other unauthorized movements of radioactive sources and other radioactive materials. The IAEA and its Member States are working hard to raise levels of radiation safety and security, especially focusing on countries known to have urgent needs. The IAEA has taken the leading role in the United Nations system in establishing standards of safety, the most significant of
Directory of Open Access Journals (Sweden)
Sunil Kumar C
2014-01-01
Full Text Available With number of students growing each year there is a strong need to automate systems capable of evaluating descriptive answers. Unfortunately, there aren’t many systems capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish auto evaluation and scoring of descriptive answers. Our experiments are designed to cater to our primary goal of identifying the optimum training sample size so as to get optimum auto scoring. Besides the technical overview and the experiments design, the paper also covers challenges, benefits of the system. We also discussed interdisciplinary areas for future research on this topic.
Inoue, Akiomi; Kawakami, Norito; Tsuchiya, Masao; Sakurai, Keiko; Hashimoto, Hideki
2010-01-01
The purpose of this study was to investigate the cross-sectional association of employment contract, company size, and occupation with psychological distress using a nationally representative sample of the Japanese population. From June through July 2007, a total of 9,461 male and 7,717 female employees living in the community were randomly selected and surveyed using a self-administered questionnaire and interview including questions about occupational class variables, psychological distress (K6 scale), treatment for mental disorders, and other covariates. Among males, part-time workers had a significantly higher prevalence of psychological distress than permanent workers. Among females, temporary/contract workers had a significantly higher prevalence of psychological distress than permanent workers. Among males, those who worked at companies with 300-999 employees had a significantly higher prevalence of psychological distress than those who worked at the smallest companies (with 1-29 employees). Company size was not significantly associated with psychological distress among females. Additionally, occupation was not significantly associated with psychological distress among males or females. Similar patterns were observed when the analyses were conducted for those who had psychological distress and/or received treatment for mental disorders. Working as part-time workers, for males, and as temporary/contract workers, for females, may be associated with poor mental health in Japan. No clear gradient in mental health along company size or occupation was observed in Japan.
Venkatesan, Arjun K; Gan, Wenhui; Ashani, Harsh; Herckes, Pierre; Westerhoff, Paul
2018-04-15
Phosphorus (P) is an important and often limiting element in terrestrial and aquatic ecosystem. A lack of understanding of its distribution and structures in the environment limits the design of effective P mitigation and recovery approaches. Here we developed a robust method employing size exclusion chromatography (SEC) coupled to an ICP-MS to determine the molecular weight (MW) distribution of P in environmental samples. The most abundant fraction of P varied widely in different environmental samples: (i) orthophosphate was the dominant fraction (93-100%) in one lake, two aerosols and DOC isolate samples, (ii) species of 400-600 Da range were abundant (74-100%) in two surface waters, and (iii) species of 150-350 Da range were abundant in wastewater effluents. SEC-DOC of the aqueous samples using a similar SEC column showed overlapping peaks for the 400-600 Da species in two surface waters, and for >20 kDa species in the effluents, suggesting that these fractions are likely associated with organic matter. The MW resolution and performance of SEC-ICP-MS agreed well with the time integrated results obtained using conventional ultrafiltration method. Results show that SEC in combination with ICP-MS and DOC has the potential to be a powerful and easy-to-use method in identifying unknown fractions of P in the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Sorption of water vapour by the Na+-exchanged clay-sized fractions of some tropical soil samples
International Nuclear Information System (INIS)
Yormah, T.B.R.; Hayes, M.H.B.
1993-09-01
Water vapour sorption isotherms at 299K for the Na + -exchanged clay-sized (≤ 2μm e.s.d.) fraction of two sets of samples taken at three different depths from a tropical soil profile have been studied. One set of samples was treated (with H 2 O 2 ) for the removal of much of the organic matter (OM); the other set (of the same samples) was not so treated. The isotherms obtained were all of type II and analyses by the BET method yielded values for the Specific Surface Areas (SSA) and for the average energy of adsorption of the first layer of adsorbate (E a ). OM content and SSA for the untreated samples were found to decrease with depth. Whereas removal of organic matter made negligible difference to the SSA of the top/surface soil, the same treatment produced a significant increase in the SSA of the samples taken from the middle and from the lower depths in the profile; the resulting increase was more pronounced for the subsoil. It has been deduced from these results that OM in the surface soil was less involved with the inorganic soil colloids than that in the subsoil. The increase in surface area which resulted from the removal of OM from the subsoil was most probably due to disaggregation. Values of E a obtained show that for all the samples the adsorption of water vapour became more energetic after the oxidative removal of organic matter; the resulting ΔE a also increased with depth. This suggests that in the dry state, the ''cleaned'' surface of the inorganic soil colloids was more energetic than the ''organic-matter-coater surface''. These data provide strong support for the deduction that OM in the subsoil was in a more ''combined'' state than that in the surface soil. (author). 21 refs, 4 figs, 2 tabs
Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali; Madry, Henning
2013-11-01
Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering.
Inadequate doses of hemodialysis. Predisposing factors, causes and prevention
Directory of Open Access Journals (Sweden)
Pehuén Fernández
2017-04-01
Full Text Available Patients receiving sub-optimal dose of hemodialysis have increased morbidity and mortality. The objectives of this study were to identify predisposing factors and causes of inadequate dialysis, and to design a practical algorithm for the management of these patients. A cross-sectional study was conducted. Ninety patients in chronic hemodialysis at Hospital Privado Universitario de Córdoba were included, during September 2015. Twenty two received sub-optimal dose of hemodialysis. Those with urea distribution volume (V greater than 40 l (72 kg body weight approximately are 11 times more likely (OR = 11.6; CI 95% = 3.2 to 51.7, p < 0.0001 to receive an inadequate dose of hemodialysis, than those with a smaller V. This situation is more frequent in men (OR = 3.5; 95% CI 1.01-15.8; p = 0.0292. V greater than 40 l was the only independent predictor of sub-dialysis in the multivariate analysis (OR = 10.3; 95% CI 2.8-37; p < 0.0004. The main cause of suboptimal dialysis was receiving a lower blood flow (Qb than the prescribed (336.4 ± 45.8 ml/min vs. 402.3 ± 28.8 ml/min respectively, p < 0.0001 (n = 18. Other causes were identified: shorter duration of the session (n = 2, vascular access recirculation (n = 1, and error in the samples (n = 1. In conclusion, the only independent predisposing factor found in this study for sub-optimal dialysis is V greater than 40 l. The main cause was receiving a slower Qb than prescribed. From these findings, an algorithm for the management of these patients was developed
Energy Technology Data Exchange (ETDEWEB)
Chang, Ying-jie [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Shih, Yang-hsin, E-mail: yhs@ntu.edu.tw [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Su, Chiu-Hun [Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310, Taiwan (China); Ho, Han-Chen [Department of Anatomy, Tzu-Chi University, Hualien 970, Taiwan (China)
2017-01-15
Highlights: • Three emerging techniques to detect NPs in the aquatic environment were evaluated. • The pretreatment of centrifugation to decrease the interference was established. • Asymmetric flow field flow fractionation has a low recovery of NPs. • Hydrodynamic chromatography is recommended to be a low-cost screening tool. • Single particle ICPMS is recommended to accurately measure trace NPs in water. - Abstract: Due to the widespread application of engineered nanoparticles, their potential risk to ecosystems and human health is of growing concern. Silver nanoparticles (Ag NPs) are one of the most extensively produced NPs. Thus, this study aims to develop a method to detect Ag NPs in different aquatic systems. In complex media, three emerging techniques are compared, including hydrodynamic chromatography (HDC), asymmetric flow field flow fractionation (AF4) and single particle inductively coupled plasma-mass spectrometry (SP-ICP-MS). The pre-treatment procedure of centrifugation is evaluated. HDC can estimate the Ag NP sizes, which were consistent with the results obtained from DLS. AF4 can also determine the size of Ag NPs but with lower recoveries, which could result from the interactions between Ag NPs and the working membrane. For the SP-ICP-MS, both the particle size and concentrations can be determined with high Ag NP recoveries. The particle size resulting from SP-ICP-MS also corresponded to the transmission electron microscopy observation (p > 0.05). Therefore, HDC and SP-ICP-MS are recommended for environmental analysis of the samples after our established pre-treatment process. The findings of this study propose a preliminary technique to more accurately determine the Ag NPs in aquatic environments and to use this knowledge to evaluate the environmental impact of manufactured NPs.
Directory of Open Access Journals (Sweden)
Fotini Kokou
2016-05-01
Full Text Available One of the main concerns in gene expression studies is the calculation of statistical significance which in most cases remains low due to limited sample size. Increasing biological replicates translates into more effective gains in power which, especially in nutritional experiments, is of great importance as individual variation of growth performance parameters and feed conversion is high. The present study investigates in the gilthead sea bream Sparus aurata, one of the most important Mediterranean aquaculture species. For 24 gilthead sea bream individuals (biological replicates the effects of gradual substitution of fish meal by plant ingredients (0% (control, 25%, 50% and 75% in the diets were studied by looking at expression levels of four immune-and stress-related genes in intestine, head kidney and liver. The present results showed that only the lowest substitution percentage is tolerated and that liver is the most sensitive tissue to detect gene expression variations in relation to fish meal substituted diets. Additionally the usage of three independent biological replicates were evaluated by calculating the averages of all possible triplets in order to assess the suitability of selected genes for stress indication as well as the impact of the experimental set up, thus in the present work the impact of FM substitution. Gene expression was altered depending of the selected biological triplicate. Only for two genes in liver (hsp70 and tgf significant differential expression was assured independently of the triplicates used. These results underlined the importance of choosing the adequate sample number especially when significant, but minor differences in gene expression levels are observed. Keywords: Sample size, Gene expression, Fish meal replacement, Immune response, Gilthead sea bream
Wellek, Stefan
2017-09-10
In clinical trials using lifetime as primary outcome variable, it is more the rule than the exception that even for patients who are failing in the course of the study, survival time does not become known exactly since follow-up takes place according to a restricted schedule with fixed, possibly long intervals between successive visits. In practice, the discreteness of the data obtained under such circumstances is plainly ignored both in data analysis and in sample size planning of survival time studies. As a framework for analyzing the impact of making no difference between continuous and discrete recording of failure times, we use a scenario in which the partially observed times are assigned to the points of the grid of inspection times in the natural way. Evaluating the treatment effect in a two-arm trial fitting into this framework by means of ordinary methods based on Cox's relative risk model is shown to produce biased estimates and/or confidence bounds whose actual coverage exhibits marked discrepancies from the nominal confidence level. Not surprisingly, the amount of these distorting effects turns out to be the larger the coarser the grid of inspection times has been chosen. As a promising approach to correctly analyzing and planning studies generating discretely recorded failure times, we use large-sample likelihood theory for parametric models accommodating the key features of the scenario under consideration. The main result is an easily implementable representation of the expected information and hence of the asymptotic covariance matrix of the maximum likelihood estimators of all parameters contained in such a model. In two real examples of large-scale clinical trials, sample size calculation based on this result is contrasted with the traditional approach, which consists of applying the usual methods for exactly observed failure times. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Energy Technology Data Exchange (ETDEWEB)
Garino, Terry J.
2007-09-01
The sintering behavior of Sandia chem-prep high field varistor materials was studied using techniques including in situ shrinkage measurements, optical and scanning electron microscopy and x-ray diffraction. A thorough literature review of phase behavior, sintering and microstructure in Bi{sub 2}O{sub 3}-ZnO varistor systems is included. The effects of Bi{sub 2}O{sub 3} content (from 0.25 to 0.56 mol%) and of sodium doping level (0 to 600 ppm) on the isothermal densification kinetics was determined between 650 and 825 C. At {ge} 750 C samples with {ge}0.41 mol% Bi{sub 2}O{sub 3} have very similar densification kinetics, whereas samples with {le}0.33 mol% begin to densify only after a period of hours at low temperatures. The effect of the sodium content was greatest at {approx}700 C for standard 0.56 mol% Bi{sub 2}O{sub 3} and was greater in samples with 0.30 mol% Bi{sub 2}O{sub 3} than for those with 0.56 mol%. Sintering experiments on samples of differing size and shape found that densification decreases and mass loss increases with increasing surface area to volume ratio. However, these two effects have different causes: the enhancement in densification as samples increase in size appears to be caused by a low oxygen internal atmosphere that develops whereas the mass loss is due to the evaporation of bismuth oxide. In situ XRD experiments showed that the bismuth is initially present as an oxycarbonate that transforms to metastable {beta}-Bi{sub 2}O{sub 3} by 400 C. At {approx}650 C, coincident with the onset of densification, the cubic binary phase, Bi{sub 38}ZnO{sub 58} forms and remains stable to >800 C, indicating that a eutectic liquid does not form during normal varistor sintering ({approx}730 C). Finally, the formation and morphology of bismuth oxide phase regions that form on the varistors surfaces during slow cooling were studied.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Heo, Moonseong; Kim, Namhee; Rinke, Michael L; Wylie-Rosett, Judith
2018-02-01
Stepped-wedge (SW) designs have been steadily implemented in a variety of trials. A SW design typically assumes a three-level hierarchical data structure where participants are nested within times or periods which are in turn nested within clusters. Therefore, statistical models for analysis of SW trial data need to consider two correlations, the first and second level correlations. Existing power functions and sample size determination formulas had been derived based on statistical models for two-level data structures. Consequently, the second-level correlation has not been incorporated in conventional power analyses. In this paper, we derived a closed-form explicit power function based on a statistical model for three-level continuous outcome data. The power function is based on a pooled overall estimate of stratified cluster-specific estimates of an intervention effect. The sampling distribution of the pooled estimate is derived by applying a fixed-effect meta-analytic approach. Simulation studies verified that the derived power function is unbiased and can be applicable to varying number of participants per period per cluster. In addition, when data structures are assumed to have two levels, we compare three types of power functions by conducting additional simulation studies under a two-level statistical model. In this case, the power function based on a sampling distribution of a marginal, as opposed to pooled, estimate of the intervention effect performed the best. Extensions of power functions to binary outcomes are also suggested.
Sevelius, Jae M.
2017-01-01
Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask
Chhikara, R. S.; Odell, P. L.
1973-01-01
A multichannel scanning device may fail to observe objects because of obstructions blocking the view, or different categories of objects may make up a resolution element giving rise to a single observation. Ground truth will be required on any such categories of objects in order to estimate their expected proportions associated with various classes represented in the remote sensing data. Considering the classes to be distributed as multivariate normal with different mean vectors and common covariance, maximum likelihood estimates are given for the expected proportions of objects associated with different classes, using the Bayes procedure for classification of individuals obtained from these classes. An approximate solution for simultaneous confidence intervals on these proportions is given, and thereby a sample-size needed to achieve a desired amount of accuracy for the estimates is determined.
Li, Aifeng; Ma, Feifei; Song, Xiuli; Yu, Rencheng
2011-03-18
Solid-phase adsorption toxin tracking (SPATT) technology was developed as an effective passive sampling method for dissolved diarrhetic shellfish poisoning (DSP) toxins in seawater. HP20 and SP700 resins have been reported as preferred adsorption substrates for lipophilic algal toxins and are recommended for use in SPATT testing. However, information on the mechanism of passive adsorption by these polymeric resins is still limited. Described herein is a study on the adsorption of OA and DTX1 toxins extracted from Prorocentrum lima algae by HP20 and SP700 resins. The pore size distribution of the adsorbents was characterized by a nitrogen adsorption method to determine the relationship between adsorption and resin porosity. The Freundlich equation constant showed that the difference in adsorption capacity for OA and DTX1 toxins was not determined by specific surface area, but by the pore size distribution in particular, with micropores playing an especially important role. Additionally, it was found that differences in affinity between OA and DTX1 for aromatic resins were as a result of polarity discrepancies due to DTX1 having an additional methyl moiety. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
Woo, Hyun-Kyung; Sunkara, Vijaya; Park, Juhee; Kim, Tae-Hyeong; Han, Ja-Ryoung; Kim, Chi-Ju; Choi, Hyun-Il; Kim, Yoon-Keun; Cho, Yoon-Kyoung
2017-02-28
Extracellular vesicles (EVs) are cell-derived, nanoscale vesicles that carry nucleic acids and proteins from their cells of origin and show great potential as biomarkers for many diseases, including cancer. Efficient isolation and detection methods are prerequisites for exploiting their use in clinical settings and understanding their physiological functions. Here, we presented a rapid, label-free, and highly sensitive method for EV isolation and quantification using a lab-on-a-disc integrated with two nanofilters (Exodisc). Starting from raw biological samples, such as cell-culture supernatant (CCS) or cancer-patient urine, fully automated enrichment of EVs in the size range of 20-600 nm was achieved within 30 min using a tabletop-sized centrifugal microfluidic system. Quantitative tests using nanoparticle-tracking analysis confirmed that the Exodisc enabled >95% recovery of EVs from CCS. Additionally, analysis of mRNA retrieved from EVs revealed that the Exodisc provided >100-fold higher concentration of mRNA as compared with the gold-standard ultracentrifugation method. Furthermore, on-disc enzyme-linked immunosorbent assay using urinary EVs isolated from bladder cancer patients showed high levels of CD9 and CD81 expression, suggesting that this method may be potentially useful in clinical settings to test urinary EV-based biomarkers for cancer diagnostics.
Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.
2013-01-01
Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) from the first phase of Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970
Directory of Open Access Journals (Sweden)
Valéria Schimitz Marodim
2000-10-01
Full Text Available Este estudo visa a estabelecer o delineamento experimental e o tamanho de amostra para a cultura da alface (Lactuca sativa em hidroponia, pelo sistema NFT (Nutrient film technique. O experimento foi conduzido no Laboratório de Cultivos Sem Solo/Hidroponia, no Departamento de Fitotecnia da Universidade Federal de Santa Maria e baseou-se em dados de massa de plantas. Os resultados obtidos mostraram que, usando estrutura de cultivo de alface em hidroponia sobre bancadas de fibrocimento com seis canais, o delineamento experimental adequado é blocos ao acaso se a unidade experimental for constituída de faixas transversais aos canais das bancadas, e deve ser inteiramente casualizado se a bancada for a unidade experimental; para a variável massa de plantas, o tamanho da amostra é de 40 plantas para uma semi-amplitude do intervalo de confiança em percentagem da média (d igual a 5% e de 7 plantas para um d igual a 20%.This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa crop under nutrient film technique. The experiment was conducted in the Laboratory of Hydroponic Crops of the Horticulture Department of the Federal University of Santa Maria. The evaluated traits were plant weight. Under hydroponic conditions on concrete bench with six ducts, the most indicated experimental design for lettuce is randomised blocks for duct transversal plots or completely randomised for bench plot. The sample size for plant weight should be 40 and 7 plants, respectively, for a confidence interval of mean percentage (d equal to 5% and 20%.
Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G
2017-06-01
Long-term and synchronous monitoring of PM 10 and PM 2.5 was conducted in Chengdu in China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM 10 , and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM 2.5 . Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM 10 , and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM 2.5 . The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the highest to both PM 10 (12.7%) and PM 2.5 (9.7%) in Chengdu, followed by the crustal dust from south-east (SE) for PM 10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM 2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies. Copyright © 2016. Published by Elsevier B.V.
Carr, Greg J; Bailer, A John; Rawlings, Jane M; Belanger, Scott E
2018-01-19
The fish acute toxicity test method is foundational to aquatic toxicity testing strategies, yet the literature lacks a concise sample size assessment. While various sources address sample size, historical precedent seems to play a larger role than objective measures. Here, a novel and comprehensive quantification of the effect of sample size on estimation of the LC 50 is presented, covering a wide range of scenarios. The results put into perspective the practical differences across a range of sample sizes, from N = 5/concentration up to N = 23/concentration. This work provides a framework for setting sample size guidance. It illustrates ways to quantify the performance of LC 50 estimation, which can be used to set sample size guidance given reasonably difficult, or worst-case scenarios. There is a clear benefit to larger sample size studies: they reduce error in the determination of LC 50 s, and lead to more robust safe environmental concentration determinations, particularly in cases likely to be called worst-case (shallow slope and true LC 50 near the edges of the concentration range). Given that the use of well-justified sample sizes is crucial to reducing uncertainty in toxicity estimates, these results lead us to recommend a reconsideration of the current de minimis 7/concentration sample size for critical studies (e.g., studies needed for a chemical registration, which are being tested for the first time, or involving difficult test substances). This article is protected by copyright. All rights reserved. This article is protected by copyright. All rights reserved.
Test results of the first 50 kA NbTi full size sample for ITER
International Nuclear Information System (INIS)
Ciazynski, D.; Zani, L.; Huber, S.; Stepanov, B.; Karlemo, B.
2003-01-01
Within the framework of the research studies for the International Thermonuclear Experimental Reactor (ITER) project, the first full size NbTi conductor sample was fabricated in industry and tested in the SULTAN facility (Villigen, Switzerland). This sample (PF-FSJS), which is relevant to the Poloidal Field coils of ITER, is composed of two parallel straight bars of conductor, connected at bottom through a joint designed according to the Cea twin-box concept. The two conductor legs are identical except for the use of different strands: a nickel plated NbTi strand with a pure copper matrix in one leg, and a bare NbTi strand with copper matrix and internal CuNi barrier in the other leg. The two conductors and the joint were extensively tested regarding DC (direct current) and AC (alternative current) properties. This paper reports on the tests results and analysis, stressing the differences between the two conductor legs and discussing the impact of the test results on the ITER design criteria for conductor and joint. While joint DC resistance, conductors and joint AC losses, fulfilled the ITER requirements, neither conductor could reach its current sharing temperature at relevant ITER currents, due to instabilities. Although the drop in temperature is slight for the CuNi strand cable, it is more significant for the Ni plated strand cable. (authors)
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
Bowden, J; Mander, A
2014-01-01
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277-287) for two-stage trials with mid-trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be put on the maximum sample size and for the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re-estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.
The Link Between Inadequate Sleep and Obesity in Young Adults.
Vargas, Perla A
2016-03-01
The prevalence of obesity has increased dramatically over the past decade. Although an imbalance between caloric intake and physical activity is considered a key factor responsible for the increase, there is emerging evidence suggesting that other factors may be important contributors to weight gain, including inadequate sleep. Overall research evidence suggests that inadequate sleep is associated with obesity. Importantly, the strength and trajectory of the association seem to be influenced by multiple factors including age. Although limited, the emerging evidence suggests young adults might be at the center of a "perfect health storm," exposing them to the highest risk for obesity and inadequate sleep. Unfortunately, the methods necessary for elucidating the complex relationship between sleep and obesity are lacking. Uncovering the underlying factors and trajectories between inadequate sleep and weight gain in different populations may help to identify the windows of susceptibility and to design targeted interventions to prevent the negative impact of obesity and related diseases.
Directory of Open Access Journals (Sweden)
H Mohamadi Monavar
2017-10-01
Full Text Available Introduction Precision agriculture (PA is a technology that measures and manages within-field variability, such as physical and chemical properties of soil. The nondestructive and rapid VIS-NIR technology detected a significant correlation between reflectance spectra and the physical and chemical properties of soil. On the other hand, quantitatively predict of soil factors such as nitrogen, carbon, cation exchange capacity and the amount of clay in precision farming is very important. The emphasis of this paper is comparing different techniques of choosing calibration samples such as randomly selected method, chemical data and also based on PCA. Since increasing the number of samples is usually time-consuming and costly, then in this study, the best sampling way -in available methods- was predicted for calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods Two hundred and ten soil samples were collected from cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm above ground and passed through a 2 mm sieve and air dried at room temperature. Chemical analysis was performed in the soil science laboratory, faculty of agriculture engineering, Bu-ali Sina University, Hamadan, Iran. Two Spectrometer (AvaSpec-ULS 2048- UV-VIS and (FT-NIR100N were used to measure the spectral bands which cover the UV-Vis and NIR region (220-2200 nm. Each soil sample was uniformly tiled in a petri dish and was scanned 20 times. Then the pre-processing methods of multivariate scatter correction (MSC and base line correction (BC were applied on the raw signals using Unscrambler software. The samples were divided into two groups: one group for calibration 105 and the second group was used for validation. Each time, 15 samples were selected randomly and tested the accuracy of
Kristin Bunte; Steven R. Abt
2001-01-01
This document provides guidance for sampling surface and subsurface sediment from wadable gravel-and cobble-bed streams. After a short introduction to streams types and classifications in gravel-bed rivers, the document explains the field and laboratory measurement of particle sizes and the statistical analysis of particle-size distributions. Analysis of particle...
Klaver, M.; Smeets, R.J.; Koornneef, J.M.; Davies, G.R.; Vroon, P.Z.
2016-01-01
The use of the double spike technique to correct for instrumental mass fractionation has yielded high precision results for lead isotope measurements by thermal ionisation mass spectrometry (TIMS), but the applicability to ng size Pb samples is hampered by the small size of the
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of [Formula: see text] , the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Atterton, Thomas; De Groote, Isabelle; Eliopoulos, Constantine
2016-10-01
The construction of the biological profile from human skeletal remains is the foundation of anthropological examination. However, remains may be fragmentary and the elements usually employed, such as the pelvis and skull, are not available. The clavicle has been successfully used for sex estimation in samples from Iran and Greece. In the present study, the aim was to test the suitability of the measurements used in those previous studies on a British Medieval population. In addition, the project tested whether discrimination between sexes was due to size or clavicular strength. The sample consisted of 23 females and 25 males of pre-determined sex from two medieval collections: Poulton and Gloucester. Six measurements were taken using an osteometric board, sliding calipers and graduated tape. In addition, putty rings and bi-planar radiographs were made and robusticity measures calculated. The resulting variables were used in stepwise discriminant analyses. The linear measurements allowed correct sex classification in 89.6% of all individuals. This demonstrates the applicability of the clavicle for sex estimation in British populations. The most powerful discriminant factor was maximum clavicular length and the best combination of factors was maximum clavicular length and circumference. This result is similar to that obtained by other studies. To further investigate the extent of sexual dimorphism of the clavicle, the biomechanical properties of the polar second moment of area J and the ratio of maximum to minimum bending rigidity are included in the analysis. These were found to have little influence when entered into the discriminant function analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
Directory of Open Access Journals (Sweden)
Ming-Yen Tsai
Full Text Available OBJECTIVES: The Meridian Energy Analysis Device is currently a popular tool in the scientific research of meridian electrophysiology. In this field, it is generally believed that measuring the electrical conductivity of meridians provides information about the balance of bioenergy or Qi-blood in the body. METHODS AND RESULTS: PubMed database based on some original articles from 1956 to 2014 and the authoŕs clinical experience. In this short communication, we provide clinical examples of Meridian Energy Analysis Device application, especially in the field of traditional Chinese medicine, discuss the reliability of the measurements, and put the values obtained into context by considering items of considerable variability and by estimating sample size. CONCLUSION: The Meridian Energy Analysis Device is making a valuable contribution to the diagnosis of Qi-blood dysfunction. It can be assessed from short-term and long-term meridian bioenergy recordings. It is one of the few methods that allow outpatient traditional Chinese medicine diagnosis, monitoring the progress, therapeutic effect and evaluation of patient prognosis. The holistic approaches underlying the practice of traditional Chinese medicine and new trends in modern medicine toward the use of objective instruments require in-depth knowledge of the mechanisms of meridian energy, and the Meridian Energy Analysis Device can feasibly be used for understanding and interpreting traditional Chinese medicine theory, especially in view of its expansion in Western countries.
Directory of Open Access Journals (Sweden)
A. Martín Andrés
2015-01-01
Full Text Available The Mantel-Haenszel test is the most frequent asymptotic test used for analyzing stratified 2 × 2 tables. Its exact alternative is the test of Birch, which has recently been reconsidered by Jung. Both tests have a conditional origin: Pearson’s chi-squared test and Fisher’s exact test, respectively. But both tests have the same drawback that the result of global test (the stratified test may not be compatible with the result of individual tests (the test for each stratum. In this paper, we propose to carry out the global test using a multiple comparisons method (MC method which does not have this disadvantage. By refining the method (MCB method an alternative to the Mantel-Haenszel and Birch tests may be obtained. The new MC and MCB methods have the advantage that they may be applied from an unconditional view, a methodology which until now has not been applied to this problem. We also propose some sample size calculation methods.
Willruth, Arne Michael; Steinhard, Johannes; Enzensberger, Christian; Axt-Fliedner, Roland; Gembruch, Ulrich; Doelle, Astrid; Dimitriou, Ioanna; Fimmers, Rolf; Bahlmann, Franz
2018-02-01
To assess the time intervals of the cardiac cycle in healthy fetuses in the second and third trimester using color tissue Doppler imaging (cTDI) and to evaluate the influence of different sizes of sample gates on time interval values. Time intervals were measured from the cTDI-derived Doppler waveform using a small and large region of interest (ROI) in healthy fetuses. 40 fetuses were included. The median gestational age at examination was 26 + 1 (range: 20 + 5 - 34 + 5) weeks. The median frame rate was 116/s (100 - 161/s) and the median heart rate 143 (range: 125 - 158) beats per minute (bpm). Using small and large ROIs, the second trimester right ventricular (RV) mean isovolumetric contraction times (ICTs) were 39.8 and 41.4 ms (p = 0.17), the mean ejection times (ETs) were 170.2 and 164.6 ms (p ICTs were 36.2 and 39.4 ms (p = 0.05), the mean ETs were 167.4 and 164.5 ms (p = 0.013), the mean IRTs were 53.9 and 57.1 ms (p = 0.05), respectively. The third trimester RV mean ICTs were 50.7 and 50.4 ms (p = 0.75), the mean ETs were 172.3 and 181.4 ms (p = 0.49), the mean IRTs were 50.2 and 54.6 ms (p = 0.03); the LV mean ICTs were 45.1 and 46.2 ms (p = 0.35), the mean ETs were 175.2 vs. 172.9 ms (p = 0.29), the mean IRTs were 47.1 and 50.0 ms (p = 0.01), respectively. Isovolumetric time intervals can be analyzed precisely and relatively independent of ROI size. In the near future, automatic time interval measurement using ultrasound systems will be feasible and the analysis of fetal myocardial function can become part of the clinical routine. © Georg Thieme Verlag KG Stuttgart · New York.
Analysis of inadequate cervical smears using Shewhart control charts
Directory of Open Access Journals (Sweden)
Wall Michael K
2004-06-01
Full Text Available Abstract Background Inadequate cervical smears cannot be analysed, can cause distress to women, are a financial burden to the NHS and may lead to further unnecessary procedures being undertaken. Furthermore, the proportion of inadequate smears is known to vary widely amongst providers. This study investigates this variation using Shewhart's theory of variation and control charts, and suggests strategies for addressing this. Methods Cervical cytology data, from six laboratories, serving 100 general practices in a former UK Health Authority area were obtained for the years 2000 and 2001. Control charts of the proportion of inadequate smears were plotted for all general practices, for the six laboratories and for the practices stratified by laboratory. The relationship between proportion of inadequate smears and the proportion of negative, borderline, mild, moderate or severe dyskaryosis as well as the positive predictive value of a smear in each laboratory was also investigated. Results There was wide variation in the proportion of inadequate smears with 23% of practices showing evidence of special cause variation and four of the six laboratories showing evidence of special cause variation. There was no evidence of a clinically important association between high rates of inadequate smears and better detection of dyskaryosis (R2 = 0.082. Conclusions The proportion of inadequate smears is influenced by two distinct sources of variation – general practices and cytology laboratories, which are classified by the control chart methodology as either being consistent with common or special cause variation. This guidance from the control chart methodology appears to be useful in delivering the aim of continual improvement.
George, Goldy C.; Hoelscher, Deanna M.; Nicklas, Theresa A.; Kelder, Steven H.
2009-01-01
Objective: To examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. Design: Cross-sectional data from the School Physical Activity and Nutrition study, a probability-based sample of schoolchildren. Children completed a questionnaire that assessed…
Neumann, Christoph; Taub, Margaret A.; Younkin, Samuel G.; Beaty, Terri H.; Ruczinski, Ingo; Schwender, Holger
2014-01-01
Case-parent trio studies considering genotype data from children affected by a disease and from their parents are frequently used to detect single nucleotide polymorphisms (SNPs) associated with disease. The most popular statistical tests in this study design are transmission/disequlibrium tests (TDTs). Several types of these tests have been developed, e.g., procedures based on alleles or genotypes. Therefore, it is of great interest to examine which of these tests have the highest statistical power to detect SNPs associated with disease. Comparisons of the allelic and the genotypic TDT for individual SNPs have so far been conducted based on simulation studies, since the test statistic of the genotypic TDT was determined numerically. Recently, it, however, has been shown that this test statistic can be presented in closed form. In this article, we employ this analytic solution to derive equations for calculating the statistical power and the required sample size for different types of the genotypic TDT. The power of this test is then compared with the one of the corresponding score test assuming the same mode of inheritance as well as the allelic TDT based on a multiplicative mode of inheritance, which is equivalent to the score test assuming an additive mode of inheritance. This is, thus, the first time that the power of these tests are compared based on equations, yielding instant results and omitting the need for time-consuming simulation studies. This comparison reveals that the tests have almost the same power, with the score test being slightly more powerful. PMID:25123830
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Syed, Mushabbar A; Oshinski, John N; Kitchen, Charles; Ali, Arshad; Charnigo, Richard J; Quyyumi, Arshed A
2009-08-01
Carotid MRI measurements are increasingly being employed in research studies for atherosclerosis imaging. The majority of carotid imaging studies use 1.5 T MRI. Our objective was to investigate intra-observer and inter-observer variability in carotid measurements using high resolution 3 T MRI. We performed 3 T carotid MRI on 10 patients (age 56 +/- 8 years, 7 male) with atherosclerosis risk factors and ultrasound intima-media thickness > or =0.6 mm. A total of 20 transverse images of both right and left carotid arteries were acquired using T2 weighted black-blood sequence. The lumen and outer wall of the common carotid and internal carotid arteries were manually traced; vessel wall area, vessel wall volume, and average wall thickness measurements were then assessed for intra-observer and inter-observer variability. Pearson and intraclass correlations were used in these assessments, along with Bland-Altman plots. For inter-observer variability, Pearson correlations ranged from 0.936 to 0.996 and intraclass correlations from 0.927 to 0.991. For intra-observer variability, Pearson correlations ranged from 0.934 to 0.954 and intraclass correlations from 0.831 to 0.948. Calculations showed that inter-observer variability and other sources of error would inflate sample size requirements for a clinical trial by no more than 7.9%, indicating that 3 T MRI is nearly optimal in this respect. In patients with subclinical atherosclerosis, 3 T carotid MRI measurements are highly reproducible and have important implications for clinical trial design.
Directory of Open Access Journals (Sweden)
M.S. El Tahawy
2014-03-01
Full Text Available In this work, a new semi- absolute non-destructive assay technique has been developed to verify the mass content of 235U in the large sizes nuclear material samples of different enrichment through combination of experimental measurements and Mont Carlo calculations (version MCNP5. A good agreement was found between the calculated and declared values of the mass content of 235U of uranium oxide (UO2 samples. The results obtained from Mont Carlo calculations showed that the semi-absolute technique can be used with sufficient reliability to verify the uranium mass content in the large sizes nuclear material samples of different enrichment.
Directory of Open Access Journals (Sweden)
Smedslund Geir
2013-02-01
Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71, the outcome variables Numerical Rating Scales (NRS (pain, fatigue, disease activity, self-care ability, and emotional wellbeing and General Health Questionnaire (GHQ-20 were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05 for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five measures, the required sample size per group was reduced from 56 to 39 (32 for the GHQ-20, from 71 to 60 (55 for pain, 96 to 71 (73 for fatigue, 57 to 51 (48 for disease activity, 59 to 44 (45 for self-care, and 47 to 37 (33 for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
Beshara, Monica; Hutchinson, Amanda D; Wilson, Carlene
2013-08-01
Serving size is a modifiable determinant of energy consumption, and an important factor to address in the prevention and treatment of obesity. The present study tested an hypothesised negative association between individuals' everyday mindfulness and self-reported serving size of energy dense foods. The mediating role of mindful eating was also explored. A community sample of 171 South Australian adults completed self-report measures of everyday mindfulness and mindful eating. The dependent measure was participants' self-reported average serving size of energy dense foods consumed in the preceding week. Participants who reported higher levels of everyday mindfulness were more mindful eaters (r=0.41, pMindful eating fully mediated the negative association between everyday mindfulness and serving size. The domains of mindful eating most relevant to serving size included emotional and disinhibited eating. Results suggest that mindful eating may have a greater influence on serving size than daily mindfulness. Copyright © 2013 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Haugbøl, Steven; Pinborg, Lars H; Arfan, Haroon M
2006-01-01
PURPOSE: To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects......% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high...... receptor density is approximately 27, whereas for low receptor binding regions the required sample size is substantially higher. CONCLUSION: This study demonstrates that [18F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT2A receptor...
Directory of Open Access Journals (Sweden)
Rocío Joo
2017-04-01
Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and spatio-temporal dynamics of cohorts. The length distribution of caught is estimated based on samples of catched individuals. This work studies the optimum sample size of individuals at each fishing set in order to obtain a representative sample of the length and the proportion of juveniles in the fishing set. For that matter, we use anchovy (Engraulis ringens length data from different fishing sets recorded by observers at-sea from the On-board Observers Program from the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust size and juvenile estimations. Though the application of this work corresponds to the anchovy fishery, the procedure can be applied to any fishery, either for on board or inland biometric measurements.
Do infants with cow's milk protein allergy have inadequate levels of vitamin D?
Silva, Cristiane M; Silva, Silvia A da; Antunes, Margarida M de C; Silva, Gisélia Alves Pontes da; Sarinho, Emanuel Sávio Cavalcanti; Brandt, Katia G
To verify whether infants with cow's milk protein allergy have inadequate vitamin D levels. This cross-sectional study included 120 children aged 2 years or younger, one group with cow's milk protein allergy and a control group. The children were recruited at the pediatric gastroenterology, allergology, and pediatric outpatient clinics of a university hospital in the Northeast of Brazil. A questionnaire was administered to the caregiver and blood samples were collected for vitamin D quantification. Vitamin D levels <30ng/mL were considered inadequate. Vitamin D level was expressed as mean and standard deviation, and the frequency of the degrees of sufficiency and other variables, as proportions. Infants with cow's milk protein allergy had lower mean vitamin D levels (30.93 vs.35.29ng/mL; p=0.041) and higher deficiency frequency (20.3% vs.8.2; p=0.049) than the healthy controls. Exclusively or predominantly breastfed infants with cow's milk protein allergy had higher frequency of inadequate vitamin D levels (p=0.002). Regardless of sun exposure time, the groups had similar frequencies of inadequate vitamin D levels (p=0.972). Lower vitamin D levels were found in infants with CMPA, especially those who were exclusively or predominantly breastfed, making these infants a possible risk group for vitamin D deficiency. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
The impact of inadequate wastewater treatment on the receiving ...
African Journals Online (AJOL)
The impact of inadequate wastewater treatment on the receiving water bodies – Case study: Buffalo City and Nkokonbe Municipalities of the Eastern Cape ... into their respective receiving water bodies (Tembisa Dam, the Nahoon and Eastern Beach which are part of the Indian Ocean; the Tyume River and the Kat River).
The impact of inadequate wastewater treatment on the receiving ...
African Journals Online (AJOL)
7950 = Water SA (on-line). 687. The impact of inadequate wastewater treatment on the receiving water bodies – Case study: Buffalo City and. Nkokonbe Municipalities of the Eastern Cape Province. MNB Momba1*, AN Osode2 and M Sibewu1.
Inadequate cerebral oxygen delivery and central fatigue during strenuous exercise
DEFF Research Database (Denmark)
Nybo, Lars; Rasmussen, Peter
2007-01-01
Under resting conditions, the brain is protected against hypoxia because cerebral blood flow increases when the arterial oxygen tension becomes low. However, during strenuous exercise, hyperventilation lowers the arterial carbon dioxide tension and blunts the increase in cerebral blood flow, which...... can lead to an inadequate oxygen delivery to the brain and contribute to the development of fatigue....
Inadequate Information in Laboratory Test Requisition in a Tertiary ...
African Journals Online (AJOL)
Aim: Laboratory investigations are important aspect of patient management and inadequate information or errors arising from the process of filling out laboratory Request Forms can impact significantly on the quality of laboratory result and ultimately on patient care. Objectives: This study examined the pattern of deficiencies ...
Energy Technology Data Exchange (ETDEWEB)
Wang, Lei; Li, Zhenyu; Jiang, Jia; An, Taiyu; Qin, Hongwei; Hu, Jifan, E-mail: hujf@sdu.edu.cn
2017-01-01
In the present work, we demonstrate that ferromagnetic resonance and magneto-permittivity resonance can be observed in appropriate microwave frequencies at room temperature for multiferroic nano-BiFeO{sub 3}/paraffin composite sample with an appropriate sample-thickness (such as 2 mm). Ferromagnetic resonance originates from the room-temperature weak ferromagnetism of nano-BiFeO{sub 3}. The observed magneto-permittivity resonance in multiferroic nano-BiFeO{sub 3} is connected with the dynamic magnetoelectric coupling through Dzyaloshinskii–Moriya (DM) magnetoelectric interaction or the combination of magnetostriction and piezoelectric effects. In addition, we experimentally observed the resonance of negative imaginary permeability for nano BiFeO{sub 3}/paraffin toroidal samples with longer sample thicknesses D=3.7 and 4.9 mm. Such resonance of negative imaginary permeability belongs to sample-size resonance. - Highlights: • Nano-BiFeO{sub 3}/paraffin composite shows a ferromagnetic resonance. • Nano-BiFeO{sub 3}/paraffin composite shows a magneto-permittivity resonance. • Resonance of negative imaginary permeability in BiFeO{sub 3} is a sample-size resonance. • Nano-BiFeO{sub 3}/paraffin composite with large thickness shows a sample-size resonance.
International Nuclear Information System (INIS)
Wang, Lei; Li, Zhenyu; Jiang, Jia; An, Taiyu; Qin, Hongwei; Hu, Jifan
2017-01-01
In the present work, we demonstrate that ferromagnetic resonance and magneto-permittivity resonance can be observed in appropriate microwave frequencies at room temperature for multiferroic nano-BiFeO 3 /paraffin composite sample with an appropriate sample-thickness (such as 2 mm). Ferromagnetic resonance originates from the room-temperature weak ferromagnetism of nano-BiFeO 3 . The observed magneto-permittivity resonance in multiferroic nano-BiFeO 3 is connected with the dynamic magnetoelectric coupling through Dzyaloshinskii–Moriya (DM) magnetoelectric interaction or the combination of magnetostriction and piezoelectric effects. In addition, we experimentally observed the resonance of negative imaginary permeability for nano BiFeO 3 /paraffin toroidal samples with longer sample thicknesses D=3.7 and 4.9 mm. Such resonance of negative imaginary permeability belongs to sample-size resonance. - Highlights: • Nano-BiFeO 3 /paraffin composite shows a ferromagnetic resonance. • Nano-BiFeO 3 /paraffin composite shows a magneto-permittivity resonance. • Resonance of negative imaginary permeability in BiFeO 3 is a sample-size resonance. • Nano-BiFeO 3 /paraffin composite with large thickness shows a sample-size resonance.
Inadequate sleep and muscle strength: Implications for resistance training.
Knowles, Olivia E; Drinkwater, Eric J; Urwin, Charles S; Lamon, Séverine; Aisbett, Brad
2018-02-02
Inadequate sleep (e.g., an insufficient duration of sleep per night) can reduce physical performance and has been linked to adverse metabolic health outcomes. Resistance exercise is an effective means to maintain and improve physical capacity and metabolic health, however, the outcomes for populations who may perform resistance exercise during periods of inadequate sleep are unknown. The primary aim of this systematic review was to evaluate the effect of sleep deprivation (i.e. no sleep) and sleep restriction (i.e. a reduced sleep duration) on resistance exercise performance. A secondary aim was to explore the effects on hormonal indicators or markers of muscle protein metabolism. A systematic search of five electronic databases was conducted with terms related to three combined concepts: inadequate sleep; resistance exercise; performance and physiological outcomes. Study quality and biases were assessed using the Effective Public Health Practice Project quality assessment tool. Seventeen studies met the inclusion criteria and were rated as 'moderate' or 'weak' for global quality. Sleep deprivation had little effect on muscle strength during resistance exercise. In contrast, consecutive nights of sleep restriction could reduce the force output of multi-joint, but not single-joint movements. Results were conflicting regarding hormonal responses to resistance training. Inadequate sleep impairs maximal muscle strength in compound movements when performed without specific interventions designed to increase motivation. Strategies to assist groups facing inadequate sleep to effectively perform resistance training may include supplementing their motivation by training in groups or ingesting caffeine; or training prior to prolonged periods of wakefulness. Copyright © 2018. Published by Elsevier Ltd.
DEFF Research Database (Denmark)
Aukland, S M; Westerhausen, R; Plessen, K J
2011-01-01
BACKGROUND AND PURPOSE: Several studies suggest that VLBW is associated with a reduced CC size later in life. We aimed to clarify this in a prospective, controlled study of 19-year-olds, hypothesizing that those with LBWs had smaller subregions of CC than the age-matched controls, even after...... correcting for brain volume. MATERIALS AND METHODS: One hundred thirteen survivors of LBW (BW brain. The cross-sectional area of the CC (total callosal area, and the callosal subregions of the genu, truncus......, and posterior third) was measured. Callosal areas were adjusted for head size. RESULTS: The posterior third subregion of the CC was significantly smaller in individuals born with a LBW compared with controls, even after adjusting for size of the forebrain. Individuals who were born with a LBW had a smaller CC...
Baalousha, M; Lead, J R
2012-06-05
This study aims to rationalize the variability in the measured size of nanomaterials (NMs) by some of the most commonly applied techniques in the field of nano(eco)toxicology and environmental sciences, including atomic force microscopy (AFM), dynamic light scattering (DLS), and flow field-flow fractionation (FlFFF). A validated sample preparation procedure for size evaluation by AFM is presented, along with a quantitative explanation of the variability of measured sizes by FlFFF, AFM, and DLS. The ratio of the z-average hydrodynamic diameter (d(DLS)) by DLS and the particle height by AFM (d(AFM)) approaches 1.0 for monodisperse samples and increases with sample polydispersity. A polydispersity index of 0.1 is suggested as a suitable limit above which DLS data can no longer be interpreted accurately. Conversion of the volume particle size distribution (PSD) by FlFFF-UV to the number PSD reduces the differences observed between the sizes measured by FlFFF (d(FlFFF)) and AFM. The remaining differences in the measured sizes can be attributed to particle structure (sphericity and permeability). The ratio d(FlFFF)/d(AFM) approaches 1 for small ion-coated NMs, which can be described as hard spheres, whereas d(FlFFF)/d(AFM) deviates from 1 for polymer-coated NMs, indicating that these particles are permeable, nonspherical, or both. These findings improve our understanding of the rather scattered data on NM size measurements reported in the environmental and nano(eco)toxicology literature and provide a tool for comparison of the measured sizes by different techniques.
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
is 2- to 15-fold more efficient than the common systematic, uniformly random sampling. The simulations also indicate that the lack of a simple predictor of the coefficient of error (CE) due to field-to-field variation is a more severe problem for uniform sampling strategies than anticipated. Because...... of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to be automatically estimated (not just predicted), unbiased - for all estimators and at no extra cost to the user....
DEFF Research Database (Denmark)
Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin
2017-01-01
Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM......) spectroscopy in full spectral (60 emission and 34 excitation wavelengths) and chromatographic resolution (... distributions for individual fluorescence components obtained from independent data sets. Spectra extracted from allochthonous DOM were highly similar. Allochthonous and autochthonous DOM shared some spectra, but included unique components. In agreement with the supramolecular assembly hypothesis, molecular...
During the regeneration of cross-pollinating accessions, genetic contamination from foreign pollen and reduction of the effective population size can be a hindrance to maintaining the genetic diversity in the temperate grass collection at the Western Regional Plant Introduction Station (WRPIS). The...
Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael
2016-01-01
Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…
Directory of Open Access Journals (Sweden)
Robert D. Otto
2003-04-01
Full Text Available Wildlife radio-telemetry and tracking projects often determine a priori required sample sizes by statistical means or default to the maximum number that can be maintained within a limited budget. After initiation of such projects, little attention is focussed on effective sample size requirements, resulting in lack of statistical power. The Department of National Defence operates a base in Labrador, Canada for low level jet fighter training activities, and maintain a sample of satellite collars on the George River caribou (Rangifer tarandus caribou herd of the region for spatial avoidance mitiga¬tion purposes. We analysed existing location data, in conjunction with knowledge of life history, to develop estimates of satellite collar sample sizes required to ensure adequate mitigation of GRCH. We chose three levels of probability in each of six annual caribou seasons. Estimated number of collars required ranged from 15 to 52, 23 to 68, and 36 to 184 for 50%, 75%, and 90% probability levels, respectively, depending on season. Estimates can be used to make more informed decisions about mitigation of GRCH, and, generally, our approach provides a means to adaptively assess radio collar sam¬ple sizes for ongoing studies.
Particle size distributions (PSD) have long been used to more accurately estimate the PM10 fraction of total particulate matter (PM) stack samples taken from agricultural sources. These PSD analyses were typically conducted using a Coulter Counter with 50 micrometer aperture tube. With recent increa...
DEFF Research Database (Denmark)
Thorlund, Kristian; Anema, Aranka; Mills, Edward
2010-01-01
To illustrate the utility of statistical monitoring boundaries in meta-analysis, and provide a framework in which meta-analysis can be interpreted according to the adequacy of sample size. To propose a simple method for determining how many patients need to be randomized in a future trial before ...
Directory of Open Access Journals (Sweden)
Xutian Chai
2017-03-01
Full Text Available Common vetch (Vicia sativa subsp. sativa L. is a self-pollinating annual forage legume with worldwide importance. Here, we investigate the optimal number of individuals that may represent the genetic diversity of a single population, using Start Codon Targeted (SCoT markers. Two cultivated varieties and two wild accessions were evaluated using five SCoT primers, also testing different sampling sizes: 1, 2, 3, 5, 8, 10, 20, 30, 40, 50, and 60 individuals. The results showed that the number of alleles and the Polymorphism Information Content (PIC were different among the four accessions. Cluster analysis by Unweighted Pair Group Method with Arithmetic Mean (UPGMA and STRUCTURE placed the 240 individuals into four distinct clusters. The Expected Heterozygosity (HE and PIC increased along with an increase in sampling size from 1 to 10 plants but did not change significantly when the sample sizes exceeded 10 individuals. At least 90% of the genetic variation in the four germplasms was represented when the sample size was 10. Finally, we concluded that 10 individuals could effectively represent the genetic diversity of one vetch population based on the SCoT markers. This study provides theoretical support for genetic diversity, cultivar identification, evolution, and marker-assisted selection breeding in common vetch.
Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B
2017-08-15
In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster' of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design, however implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design; and provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal the precision gained by crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
International Nuclear Information System (INIS)
Sulaiti, H.A.; Rega, P.H.; Bradley, D.; Dahan, N.A.; Mugren, K.A.; Dosari, M.A.
2014-01-01
Correlation between grain size and activity concentrations of soils and concentrations of various radionuclides in surface and subsurface soils has been measured for samples taken in the State of Qatar by gamma-spectroscopy using a high purity germanium detector. From the obtained gamma-ray spectra, the activity concentrations of the 238U (226Ra) and /sup 232/ Th (/sup 228/ Ac) natural decay series, the long-lived naturally occurring radionuclide 40 K and the fission product radionuclide 137CS have been determined. Gamma dose rate, radium equivalent, radiation hazard index and annual effective dose rates have also been estimated from these data. In order to observe the effect of grain size on the radioactivity of soil, three grain sizes were used i.e., smaller than 0.5 mm; smaller than 1 mm and greater than 0.5 mm; and smaller than 2 mm and greater than 1 mm. The weighted activity concentrations of the 238U series nuclides in 0.5-2 mm grain size of sample numbers was found to vary from 2.5:f:0.2 to 28.5+-0.5 Bq/kg, whereas, the weighted activity concentration of 4 degree K varied from 21+-4 to 188+-10 Bq/kg. The weighted activity concentrations of 238U series and 4 degree K have been found to be higher in the finest grain size. However, for the 232Th series, the activity concentrations in the 1-2 mm grain size of one sample were found to be higher than in the 0.5-1 mm grain size. In the study of surface and subsurface soil samples, the activity concentration levels of 238 U series have been found to range from 15.9+-0.3 to 24.1+-0.9 Bq/kg, in the surface soil samples (0-5 cm) and 14.5+-0.3 to 23.6+-0.5 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of 232Th series have been found to lie in the range 5.7+-0.2 to 13.7+-0.5 Bq/kg, in the surface soil samples (0-5 cm)and 4.1+-0.2 to 15.6+-0.3 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of 4 degree K were in the range 150+-8 to 290+-17 Bq/kg, in the surface
Directory of Open Access Journals (Sweden)
Manan Gupta
Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates
Gupta, Manan; Joshi, Amitabh; Vidya, T N C
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the
Lorenzoni, Raymond P; Choi, Jaeun; Choueiter, Nadine F; Munjal, Iona M; Katyal, Chhavi; Stern, Kenan W D
2018-03-09
Kawasaki disease is the primary cause of acquired pediatric heart disease in developed nations. Timely diagnosis of Kawasaki disease incorporates transthoracic echocardiography for visualization of the coronary arteries. Sedation improves this visualization, but not without risks and resource utilization. To identify potential sedation criteria for suspected Kawasaki disease, we analyzed factors associated with diagnostically inadequate initial transthoracic echocardiography performed without sedation. This retrospective review of patients Kawasaki disease from 2009 to 2015 occurred at a medium-sized urban children's hospital. The primary outcome was diagnostically inadequate transthoracic echocardiography without sedation due to poor visualization of the coronary arteries, determined by review of clinical records. The associations of the primary outcome with demographics, Kawasaki disease type, laboratory data, fever, and antipyretic or intravenous immunoglobulin treatment prior to transthoracic echocardiography were analyzed. In total, 112 patients (44% female, median age 2.1 years, median BSA 0.54 m 2 ) underwent initial transthoracic echocardiography for suspected Kawasaki disease, and 99 were not sedated. Transthoracic echocardiography was diagnostically inadequate in 19 out of these 99 patients (19.2%) and was associated with age ≤ 2.0 years, weight ≤ 10.0 kg, and antipyretic use ≤ 6 hours before transthoracic echocardiography (all P Kawasaki disease. These factors should be considered when deciding which patients to sedate for initial Kawasaki disease transthoracic echocardiography. © 2018 Wiley Periodicals, Inc.
Factors associated with inadequate work ability among women in the clothing industry.
Augusto, Viviane Gontijo; Sampaio, Rosana Ferreira; Ferreira, Fabiane Ribeiro; Kirkwood, Renata Noce; César, Cibele Comini
2015-01-01
Work ability depends on a balance between individual resources and work demands. This study evaluated factors that are associated with inadequate work ability among workers in the clothing industry. We conducted a cross-sectional observational study of 306 workers in 40 small and medium-sized enterprises. We assessed work ability, individual resources, physical and psychosocial demands, and aspects of life outside work using a binary logistic regression model with hierarchical data entry. The mean work ability was 42.5 (SD=3.5); when adjusted for age, only 11% of the workers showed inadequate work ability. The final model revealed that smoking, high isometric physical load, and poor physical environmental conditions were the most significant predictors of inadequate work ability. Good working conditions and worker education must be implemented to eliminate factors that can be changed and that have a negative impact on work ability. These initiatives include anti-smoking measures, improved postures at work, and better physical environmental conditions.
Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah
2017-09-01
The model of Box-Jenkins - GARCH has been shown to be a promising tool for forecasting higher volatile time series. In this study, the framework of determining the optimal sample size using Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting higher volatile data. The proposed framework is employed to daily world gold price series from year 1971 to 2013. The data is divided into 12 different sample sizes (from 30 to 10200). Each sample is tested using different combination of the hybrid Box-Jenkins - GARCH model. Our study shows that the optimal sample size to forecast gold price using the framework of the hybrid model is 1250 data of 5-year sample. Hence, the empirical results of model selection criteria and 1-step-ahead forecasting evaluations suggest that the latest 12.25% (5-year data) of 10200 data is sufficient enough to be employed in the model of Box-Jenkins - GARCH with similar forecasting performance as by using 41-year data.
Inadequate Empirical Antibiotic Therapy in Hospital Acquired Pneumonia.
Dahal, S; Rijal, B P; Yogi, K N; Sherchand, J B; Parajuli, K; Parajuli, N; Pokhrel, B M
2015-01-01
Inadequate empirical antibiotic therapy for HAP is a common phenomena and one of the indicators of the poor stewardship. This study intended to analyze the efficacy of empirical antibiotics in the light of microbiological data in HAP cases. Suspected cases of HAP were followed for clinico-bacterial evidence, antimicrobial resistance and pre and post culture antibiotic use. The study was taken from February,2014 to July 2014 in department of Microbiology and department of Respiratory medicine prospectively. Data was analyzed by Microsoft Office Excel 2007. Out of 758 cases investigated, 77(10 %) cases were HAP, 65(84%) of them were culture positive and 48(74 %) were late in onset. In early onset cases, isolates were Acinetobacter 10(42%), Escherichia coli 5(21%), S.aureus 4(17%), Klebsiella 1(4%) and Pseudomonas 1(4%). From the late onset cases Acinetobacter 15(28%), Klebsiella 17(32%) and Pseudomonas 13(24%) were isolated. All Acinetobacter, 78% Klebsiella and 36% Pseudomonas isolates were multi drug resistant. Empirical therapies were inadequate in 12(70%) of early onset cases and 44(92%) of late onset type. Cephalosporins were used in 7(41%) of early onset infections but found to be adequate only in 2(12%) cases. Polymyxins were avoided empirically but after cultures were used in 9(19%) cases. Empirical antibiotics were vastly inadequate, more frequently so in late onset infections. Use of cephalosporins empirically in early onset infections and avoiding empirical use of polymyxin antibiotics in late onset infections contributed largely to the findings. Inadequate empirical regimen is a real time feedback for a practitioner to update his knowledge on the local microbiological trends.
Directory of Open Access Journals (Sweden)
Congcong Li
2014-01-01
Full Text Available Although a large number of new image classification algorithms have been developed, they are rarely tested with the same classification task. In this research, with the same Landsat Thematic Mapper (TM data set and the same classification scheme over Guangzhou City, China, we tested two unsupervised and 13 supervised classification algorithms, including a number of machine learning algorithms that became popular in remote sensing during the past 20 years. Our analysis focused primarily on the spectral information provided by the TM data. We assessed all algorithms in a per-pixel classification decision experiment and all supervised algorithms in a segment-based experiment. We found that when sufficiently representative training samples were used, most algorithms performed reasonably well. Lack of training samples led to greater classification accuracy discrepancies than classification algorithms themselves. Some algorithms were more tolerable to insufficient (less representative training samples than others. Many algorithms improved the overall accuracy marginally with per-segment decision making.
Energy Technology Data Exchange (ETDEWEB)
Damiani, Rick [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-02-08
This manual summarizes the theory and preliminary verifications of the JacketSE module, which is an offshore jacket sizing tool that is part of the Wind-Plant Integrated System Design & Engineering Model toolbox. JacketSE is based on a finite-element formulation and on user-prescribed inputs and design standards' criteria (constraints). The physics are highly simplified, with a primary focus on satisfying ultimate limit states and modal performance requirements. Preliminary validation work included comparing industry data and verification against ANSYS, a commercial finite-element analysis package. The results are encouraging, and future improvements to the code are recommended in this manual.
A project to characterize cotton gin emissions in terms of stack sampling was conducted during the 2008 through 2011 ginning seasons. The impetus behind the project was the urgent need to collect additional cotton gin emissions data to address current regulatory issues. EPA AP-42 emission factors ar...
Reburn, C J; Wynne-Edwards, K E
2000-04-01
Validation of a method for obtaining blood samples that does not change cortisol or prolactin concentrations yet allows serial blood samples to be collected from animals under anesthesia, without prior handling, from freely interacting social groups of small mammals. Results from five experiments are reported. Male dwarf hamsters (Phodopus spp.) were housed in modified home cages under continuous flow of compressed air that could be switched to isoflurane in O2 vehicle without approaching the cages. Dwarf hamsters respond to manual restraint with behavioral distress and increase in the concentration of the dominant glucocorticoid, cortisol, and decrease in prolactin concentration. Both effects are evident within one minute. In contrast, when this new method was used, neither cortisol nor prolactin changed in response to repeated sample collection (up to 8 successive samples at 2 hour intervals), prolonged isoflurane exposure, or substantial blood volume reduction (30%). Prolactin concentration was suppressed and cortisol concentration was increased in response to stimuli from other hamsters tested without anesthesia. Suppression of prolactin concentration was graded in response to the degree of stress and equaled the pharmacologic reduction caused by bromocryptine mesylate (50 microg of CB154 x 3 days). The technique is superior to alternatives for studies of behavioral endocrinology of freely interacting small mammals.
Czech Academy of Sciences Publication Activity Database
Říha, Milan; Jůza, Tomáš; Prchalová, Marie; Mrkvička, Tomáš; Čech, Martin; Draštík, Vladislav; Muška, Milan; Kratochvíl, Michal; Peterka, Jiří; Tušer, Michal; Vašek, Mojmír; Kubečka, Jan
2012-01-01
Roč. 127, September (2012), s. 56-60 ISSN 0165-7836 R&D Projects: GA MZe(CZ) QH81046 Institutional support: RVO:60077344 Keywords : quantitative sampling * gear selectivity * trawl * reservoirs Subject RIV: GL - Fishing Impact factor: 1.695, year: 2012
Wills, Johnny
2008-01-01
The planned widening of U.S. Highway 17 along the east boundary of Great Dismal Swamp National Wildlife Refuge (GDSNWR) and a lack of knowledge about the refugeâ s bear population created the need to identify potential sites for wildlife crossings and estimate the size of the refugeâ s bear population. I collected black bear hair in order to collect DNA samples to estimate population size, density, and sex ratio, and determine road crossing locations for black bears (Ursus americanus) in G...
Directory of Open Access Journals (Sweden)
Cuicui Zhang
2014-12-01
Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phrase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1 how to define diverse base classifiers from the small data; (2 how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required
Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I
1998-05-01
A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date from one single site, and with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstraping method to asses the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. Anatomical regions investigated are scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shaft; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample only the humeral midshaft perimeter shows an unusual high variation (only when it is expressed by the maximum ratio, not by the coefficient of variation). In spite of that the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range. The maximum ratio is in the central part of the distribution of modern human samples. Thus, the hypothesis of a greater sexual dimorphism in Middle Pleistocene populations than in modern populations is not supported by either cranial or postcranial evidence from Sima de los Huesos.
Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit
2016-02-01
The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43, minimum, 3, maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.
Belli, Sirio; Newman, Andrew B.; Ellis, Richard S.
2015-02-01
We analyze the stellar populations of a sample of 62 massive (log M */M ⊙ > 10.7) galaxies in the redshift range 1 backtrack their individual evolving trajectories on the UVJ color-color plane finding evidence for two distinct quenching routes. By using sizes measured in the previous paper of this series, we confirm that the largest galaxies are indeed among the youngest at a given redshift. This is consistent with some contribution to the apparent growth from recent arrivals, an effect often called progenitor bias. However, we calculate that recently quenched objects can only be responsible for about half the increase in average size of quiescent galaxies over a 1.5 Gyr period, corresponding to the redshift interval 1.25 < z < 2. The remainder of the observed size evolution arises from a genuine growth of long-standing quiescent galaxies.
Ahmad, Sharifah Mumtazah Syed; Ling, Loo Yim; Anwar, Rina Md; Faudzi, Masyura Ahmad; Shakil, Asma
2013-05-01
This article presents an analysis of handwritten signature dynamics belonging to two authentication groups, namely genuine and forged signature samples. Genuine signatures are initially classified based on their relative size, graphical complexity, and legibility as perceived by human examiners. A pool of dynamic features is then extracted for each signature sample in the two groups. A two-way analysis of variance (ANOVA) is carried out to investigate the effects and the relationship between the perceived classifications and the authentication groups. Homogeneity of variance was ensured through Bartlett's test prior to ANOVA testing. The results demonstrated that among all the investigated dynamic features, pen pressure is the most distinctive which is significantly different for the two authentication groups as well as for the different perceived classifications. In addition, all the relationships investigated, namely authenticity group versus size, graphical complexity, and legibility, were found to be positive for pen pressure. © 2013 American Academy of Forensic Sciences.
2015-07-01
that they may react with oxygen when opened in air and may be a safety issue. Nanomaterials that have liquid (water, oil, etc.) either in their...be further agitated in the ultrasonic bath to separate particles. It should be noted that excessive ultrasonic agitation can damage samples...Society 43:56–61. ____. 2008. Assessment of particle sizing methods applied to agglomerated nanoscale tin oxide ( SnO2 ). Journal of the Australian Ceramic
International Nuclear Information System (INIS)
Zorina, M.V.; Mironov, V.L.; Mironov, S.V.
2005-01-01
A model is developed, which enables computation of the the angular dependences of X-ray reflection, with taking into account the finiteness of sample sizes and the diffractometer alignment errors. It is shown that the angular dependences of refraction for glass and quartz wafers calculated with account of possible errors of the diffractometer optical system alignment are in good agreement with the experimental curves in the entire range of angles [ru
International Nuclear Information System (INIS)
Reiser, I; Lu, Z
2014-01-01
Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs, (N=20,40,60,80). Observer performance was quantified as proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in the clinical practice, a statistically efficient observer model, that can predict performance from few samples, is needed. Our results identified two observer models that may be suited for this task
Heo, Moonseong; Kim, Yongman; Xue, Xiaonan; Kim, Mimi Y
2010-02-10
It is often anticipated in a longitudinal cluster randomized clinical trial (cluster-RCT) that the course of outcome over time will diverge between intervention arms. In these situations, testing the significance of a local intervention effect at the end of the trial may be more clinically relevant than evaluating overall mean differences between treatment groups. In this paper, we present a closed-form power function for detecting this local intervention effect based on maximum likelihood estimates from a mixed-effects linear regression model for three-level continuous data. Sample size requirements for the number of units at each data level are derived from the power function. The power function and the corresponding sample size requirements are verified by a simulation study. Importantly, it is shown that sample size requirements computed with the proposed power function are smaller than that required when testing group mean difference using data only at the end of trial and ignoring the course of outcome over the entire study period. (c) 2009 John Wiley & Sons, Ltd.
Rakow, Tobias; El Deeb, Sami; Hahne, Thomas; El-Hady, Deia Abd; AlBishri, Hassan M; Wätzig, Hermann
2014-09-01
In this study, size-exclusion chromatography and high-resolution atomic absorption spectrometry methods have been developed and evaluated to test the stability of proteins during sample pretreatment. This especially includes different storage conditions but also adsorption before or even during the chromatographic process. For the development of the size exclusion method, a Biosep S3000 5 μm column was used for investigating a series of representative model proteins, namely bovine serum albumin, ovalbumin, monoclonal immunoglobulin G antibody, and myoglobin. Ambient temperature storage was found to be harmful to all model proteins, whereas short-term storage up to 14 days could be done in an ordinary refrigerator. Freezing the protein solutions was always complicated and had to be evaluated for each protein in the corresponding solvent. To keep the proteins in their native state a gentle freezing temperature should be chosen, hence liquid nitrogen should be avoided. Furthermore, a high-resolution continuum source atomic absorption spectrometry method was developed to observe the adsorption of proteins on container material and chromatographic columns. Adsorption to any container led to a sample loss and lowered the recovery rates. During the pretreatment and high-performance size-exclusion chromatography, adsorption caused sample losses of up to 33%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Williams, David Keith; Bursac, Zoran
2014-01-01
Commonly when designing studies, researchers propose to measure several independent variables in a regression model, a subset of which are identified as the main variables of interest while the rest are retained in a model as covariates or confounders. Power for linear regression in this setting can be calculated using SAS PROC POWER. There exists a void in estimating power for the logistic regression models in the same setting. Currently, an approach that calculates power for only one variable of interest in the presence of other covariates for logistic regression is in common use and works well for this special case. In this paper we propose three related algorithms along with corresponding SAS macros that extend power estimation for one or more primary variables of interest in the presence of some confounders. The three proposed empirical algorithms employ likelihood ratio test to provide a user with either a power estimate for a given sample size, a quick sample size estimate for a given power, and an approximate power curve for a range of sample sizes. A user can specify odds ratios for a combination of binary, uniform and standard normal independent variables of interest, and or remaining covariates/confounders in the model, along with a correlation between variables. These user friendly algorithms and macro tools are a promising solution that can fill the void for estimation of power for logistic regression when multiple independent variables are of interest, in the presence of additional covariates in the model.
Energy Technology Data Exchange (ETDEWEB)
Plionis, Alexander A [Los Alamos National Laboratory; Peterson, Dominic S [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory; Lamont, Stephen P [Los Alamos National Laboratory
2009-01-01
Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine if a process particle size distribution is: lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
Frances, Colleen Elizabeth
Fires are responsible for the loss of thousands of lives and billions of dollars in property damage each year in the United States. Flame retardants can assist in the prevention of fires through mechanisms which either prevent or greatly inhibit flame spread and development. In this study samples of both brominated and non-brominated polystyrene were tested in the Milligram-scale Flaming Calorimeter and images captured with two DSL-R cameras were analyzed to determine flame temperatures through use of a non-intrusive method. Based on the flame temperature measurement results, a better understanding of the gas phase mechanisms of flame retardants may result, as temperature is an important diagnostic in the study of fire and combustion. Measurements taken at 70% of the total flame height resulted in average maximum temperatures of about 1656 K for polystyrene and about 1614 K for brominated polystyrene, suggesting that the polymer flame retardant may reduce flame temperatures.
Raack, J.; Dennis, R.; Balme, M. R.; Taj-Eddine, K.; Ori, G. G.
2017-12-01
Dust devils are small vertical convective vortices which occur on Earth and Mars [1] but their internal structure is almost unknown. Here we report on in situ samples of two active dust devils in the Sahara Desert in southern Morocco [2]. For the sampling we used a 4 m high aluminium pipe with sampling areas made of removable adhesive tape. We took samples between 0.1-4 m with a sampling interval of 0.5 m and between 0.5-2 m with an interval of 0.25 m, respectively. The maximum diameter of all particles of the different sampling heights were then measured using an optical microscope to gain vertical grain size distributions and relative particle loads. Our measurements imply that both dust devils have a general comparable internal structure despite their different strengths and dimensions which indicates that the dust devils probably represents the surficial grain size distribution they move over. The particle sizes within the dust devils decrease nearly exponential with height which is comparable to results by [3]. Furthermore, our results show that about 80-90 % of the total particle load were lifted only within the first meter, which is a direct evidence for the existence of a sand skirt. If we assume that grains with a diameter dust coverage is larger [5], although the atmosphere can only suspend smaller grain sizes ( dust devils each day which were up to several hundred meters tall and had diameters of several tens of meters. This implies a much higher input of fine grained material into the atmosphere (which will have an influence on the climate, weather, and human health [7]) compared to the relative small dust devils sampled during our field campaign. [1] Thomas and Gierasch (1985) Science 230 [2] Raack et al. (2017) Astrobiology [3] Oke et al. (2007) J. Arid Environ. 71 [4] Balme and Greeley (2006) Rev. Geophys. 44 [5] Christensen (1986) JGR 91 [6] Newman et al. (2002) JGR 107 [7] Gillette and Sinclair (1990) Atmos. Environ. 24
Kolak, Jon; Hackley, Paul C.; Ruppert, Leslie F.; Warwick, Peter D.; Burruss, Robert
2015-01-01
To investigate the potential for mobilizing organic compounds from coal beds during geologic carbon dioxide (CO2) storage (sequestration), a series of solvent extractions using dichloromethane (DCM) and using supercritical CO2 (40 °C and 10 MPa) were conducted on a set of coal samples collected from Louisiana and Ohio. The coal samples studied range in rank from lignite A to high volatile A bituminous, and were characterized using proximate, ultimate, organic petrography, and sorption isotherm analyses. Sorption isotherm analyses of gaseous CO2 and methane show a general increase in gas storage capacity with coal rank, consistent with findings from previous studies. In the solvent extractions, both dry, ground coal samples and moist, intact core plug samples were used to evaluate effects of variations in particle size and moisture content. Samples were spiked with perdeuterated surrogate compounds prior to extraction, and extracts were analyzed via gas chromatography–mass spectrometry. The DCM extracts generally contained the highest concentrations of organic compounds, indicating the existence of additional hydrocarbons within the coal matrix that were not mobilized during supercritical CO2 extractions. Concentrations of aliphatic and aromatic compounds measured in supercritical CO2 extracts of core plug samples generally are lower than concentrations in corresponding extracts of dry, ground coal samples, due to differences in particle size and moisture content. Changes in the amount of extracted compounds and in surrogate recovery measured during consecutive supercritical CO2extractions of core plug samples appear to reflect the transition from a water-wet to a CO2-wet system. Changes in coal core plug mass during supercritical CO2 extraction range from 3.4% to 14%, indicating that a substantial portion of coal moisture is retained in the low-rank coal samples. Moisture retention within core plug samples, especially in low-rank coals, appears to inhibit
Deferasirox pharmacokinetics in patients with adequate versus inadequate response
Chirnomas, Deborah; Smith, Amber Lynn; Braunstein, Jennifer; Finkelstein, Yaron; Pereira, Luis; Bergmann, Anke K.; Grant, Frederick D.; Paley, Carole; Shannon, Michael
2009-01-01
Tens of thousands of transfusion-dependent (eg, thalassemia) patients worldwide suffer from chronic iron overload and its potentially fatal complications. The oral iron chelator deferasirox has become commercially available in many countries since 2006. Although this alternative to parenteral deferoxamine has been a major advance for patients with transfusional hemosiderosis, a proportion of patients have suboptimal response to the maximum approved doses (30 mg/kg per day), and do not achieve negative iron balance. We performed a prospective study of oral deferasirox pharmacokinetics (PK), comparing 10 transfused patients with inadequate deferasirox response (rising ferritin trend or rising liver iron on deferasirox doses > 30 mg/kg per day) with control transfusion-dependent patients (n = 5) with adequate response. Subjects were admitted for 4 assessments: deferoxamine infusion and urinary iron measurement to assess readily chelatable iron; quantitative hepatobiliary scintigraphy to assess hepatic uptake and excretion of chelate; a 24-hour deferasirox PK study following a single 35-mg/kg dose of oral deferasirox; and pharmacogenomic analysis. Patients with inadequate response to deferasirox had significantly lower systemic drug exposure compared with control patients (P deferasirox must be determined. This trial has been registered at http://www.clinicaltrials.gov under identifier NCT00749515. PMID:19724055
Directory of Open Access Journals (Sweden)
Kristian Thorlund
2010-04-01
Full Text Available Kristian Thorlund1,2, Aranka Anema3, Edward Mills41Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; 2The Copenhagen Trial Unit, Centre for Clinical Intervention Research, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark; 3British Columbia Centre for Excellence in HIV/AIDS, University of British Columbia, Vancouver, British Columbia, Canada; 4Faculty of Health Sciences, University of Ottawa, Ottawa, Ontario, CanadaObjective: To illustrate the utility of statistical monitoring boundaries in meta-analysis, and provide a framework in which meta-analysis can be interpreted according to the adequacy of sample size. To propose a simple method for determining how many patients need to be randomized in a future trial before a meta-analysis can be deemed conclusive.Study design and setting: Prospective meta-analysis of randomized clinical trials (RCTs that evaluated the effectiveness of isoniazid chemoprophylaxis versus placebo for preventing the incidence of tuberculosis disease among human immunodeficiency virus (HIV-positive individuals testing purified protein derivative negative. Assessment of meta-analysis precision using trial sequential analysis (TSA with LanDeMets monitoring boundaries. Sample size determination for a future trials to make the meta-analysis conclusive according to the thresholds set by the monitoring boundaries.Results: The meta-analysis included nine trials comprising 2,911 trial participants and yielded a relative risk of 0.74 (95% CI, 0.53–1.04, P = 0.082, I2 = 0%. To deem the meta-analysis conclusive according to the thresholds set by the monitoring boundaries, a future RCT would need to randomize 3,800 participants.Conclusion: Statistical monitoring boundaries provide a framework for interpreting meta-analysis according to the adequacy of sample size and project the required sample size for a future RCT to make a meta-analysis conclusive
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep Learning, refers to large set of neural network based algorithms, have emerged as promising machine- learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs, is the use of a locally connected multi layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs, perform admirably in object identification and localization tasks, typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies, that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each `neural network layer' in the stack is trained to identify a sub region of the original image, that contains the organ of interest. By layering several such stacks together a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, inspite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate
Pajević, Tina; Glišić, Branislav
2017-05-01
Anthropological studies have reported that tooth size decreases in the context of diet changes. Some investigations have found a reverse trend in tooth size from the prehistoric to the modern times. The aims of this study were to analyze tooth size in skeletal samples from Mesolithic-Neolithic Age, Bronze Age, and Roman to Medieval times to determine sex differences and establish a temporal trend in tooth size in the aforementioned periods. Well-preserved permanent teeth were included in the investigation. The mesiodistal (MD) diameter of all teeth and buccolingual (BL) diameter of the molars were measured. Effects of sex and site were tested by one-way ANOVA, and the combined effect of these factors was analyzed by UNIANOVA. Sexual dimorphism was present in the BL diameters of all molars and MD diameters of the upper first and the lower third molar. The lower canine was the most dimorphic tooth in the anterior region. The MD diameter of most teeth showed no significant difference between the groups, (sample from: Mesolithic-Neolithic Age-group 1; Bronze Age-group 2; Roman times-group 3; Medieval times-group 4), whereas the BL diameters of the upper second and the lower first molar were the largest in the first group. Multiple comparisons revealed a decrease in the BL diameter of the upper second and the lower first molar from the first to the later groups. Lower canine MD diameter exhibited an increase in the fourth group compared to the second group. On the basis of the MD diameter, a temporal trend could not be observed for most of the teeth. The lower canine exhibited an increase in the MD diameter from the prehistoric to the Medieval times. Changes of BL diameter were more homogeneous, suggesting that the temporal trend of molar size decreased from the Mesolithic-Neolithic to Medieval times in Serbia. Copyright © 2017. Published by Elsevier Ltd.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow to stop early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Wu, Hao-Yi; Hahn, Oliver; Wechsler, Risa H.; Mao, Yao-Yuan; Behroozi, Peter S.
2013-02-01
We present the first results from the RHAPSODY cluster re-simulation project: a sample of 96 "zoom-in" simulations of dark matter halos of 1014.8 ± 0.05 h -1 M ⊙, selected from a 1 h -3 Gpc3 volume. This simulation suite is the first to resolve this many halos with ~5 × 106 particles per halo in the cluster mass regime, allowing us to statistically characterize the distribution of and correlation between halo properties at fixed mass. We focus on the properties of the main halos and how they are affected by formation history, which we track back to z = 12, over five decades in mass. We give particular attention to the impact of the formation history on the density profiles of the halos. We find that the deviations from the Navarro-Frenk-White (NFW) model and the Einasto model depend on formation time. Late-forming halos tend to have considerable deviations from both models, partly due to the presence of massive subhalos, while early-forming halos deviate less but still significantly from the NFW model and are better described by the Einasto model. We find that the halo shapes depend only moderately on formation time. Departure from spherical symmetry impacts the density profiles through the anisotropic distribution of massive subhalos. Further evidence of the impact of subhalos is provided by analyzing the phase-space structure. A detailed analysis of the properties of the subhalo population in RHAPSODY is presented in a companion paper.
Energy Technology Data Exchange (ETDEWEB)
Wu, Hao-Yi; Hahn, Oliver; Wechsler, Risa H.; Mao, Yao-Yuan; Behroozi, Peter S., E-mail: hywu@umich.edu [Kavli Institute for Particle Astrophysics and Cosmology, Physics Department, Stanford University, Stanford, CA 94305 (United States)
2013-02-15
We present the first results from the RHAPSODY cluster re-simulation project: a sample of 96 'zoom-in' simulations of dark matter halos of 10{sup 14.8{+-}0.05} h {sup -1} M {sub Sun }, selected from a 1 h {sup -3} Gpc{sup 3} volume. This simulation suite is the first to resolve this many halos with {approx}5 Multiplication-Sign 10{sup 6} particles per halo in the cluster mass regime, allowing us to statistically characterize the distribution of and correlation between halo properties at fixed mass. We focus on the properties of the main halos and how they are affected by formation history, which we track back to z = 12, over five decades in mass. We give particular attention to the impact of the formation history on the density profiles of the halos. We find that the deviations from the Navarro-Frenk-White (NFW) model and the Einasto model depend on formation time. Late-forming halos tend to have considerable deviations from both models, partly due to the presence of massive subhalos, while early-forming halos deviate less but still significantly from the NFW model and are better described by the Einasto model. We find that the halo shapes depend only moderately on formation time. Departure from spherical symmetry impacts the density profiles through the anisotropic distribution of massive subhalos. Further evidence of the impact of subhalos is provided by analyzing the phase-space structure. A detailed analysis of the properties of the subhalo population in RHAPSODY is presented in a companion paper.
Gillmeister, Michael P; Tomiya, Noboru; Jacobia, Scott J; Lee, Yuan C; Gorfien, Stephen F; Betenbaugh, Michael J
2009-12-01
Existing HPLC methods can provide detailed structure and isomeric information, but are often slow and require large initial sample sizes. In this study, a previously established two-dimensional HPLC technique was adapted to a two-step identification method for smaller sample sizes. After cleavage from proteins, purification, and fluorescent labeling, glycans were analyzed on a 2-mm reverse phase HPLC column on a conventional HPLC and spotted onto a MALDI-TOF MS plate using an automated plate spotter to determine molecular weights. A direct correlation was found for 25 neutral oligosaccharides between the 2-mm Shim-Pack VP-ODS HPLC column (Shimadzu) and the 6-mm CLC-ODS column (Shimadzu) of the standard two- and three-dimensional methods. The increased throughput adaptations allowed a 100-fold reduction in required amounts of starting protein. The entire process can be carried out in 2-3 days for a large number of samples as compared to 1-2 weeks per sample for previous two-dimensional HPLC methods. The modified method was verified by identifying N-glycan structures, including specifying two different galactosylated positional isomers, of an IgG antibody from human sera samples. Analysis of tissue plasminogen activator (t-PA) from CHO cell cultures under varying culture conditions illustrated how the method can identify changes in oligosaccharide structure in the presence of different media environments. Raising glutamine concentrations or adding ammonia directly to the culture led to decreased galactosylation, while substituting GlutaMAX-I, a dipeptide of L-alanine and L-glutamine, resulted in structures with more galactosylation. This modified system will enable glycoprofiling of smaller glycoprotein samples in a shorter time period and allow a more rapid evaluation of the effects of culture conditions on expressed protein glycosylation.
International Nuclear Information System (INIS)
Eze, C.U.; Ilounoh, C.E.; Irurhe, N.K.; Akpochafor, M.O.
2016-01-01
Background: Congenital heart disease (CHD) is one of the most common congenital anomalies while prenatal ultrasound screening is a necessity even in low risk populations. Aim: To measure fetal ascending aortic diameter (AAD) between 18 and 38 weeks of gestation in order to provide a normal reference data for the population studied. Method: In the prospective cross sectional study, a sample of 300 healthy pregnant women was selected to undergo trans-abdominal echocardiography. Data Collection: Logiq 3 ultrasound machine fitted with a 3.5 MHz–5 MHz variable curvilinear transducer was used for data collection. Results: Mean AAD was 4.59 ± 1.56 mm. There was a linear relationship between AAD and gestational age (GA) while there was a strong positive correlation between AAD and GA (r = 0.9915). Mean AAD for male and female fetuses was 4.60 ± 0.57 mm and 4.58 ± 0.55 mm respectively and the difference in mean AAD was not significant (p = 0.8420) between both sexes. Mean AAD in the study was significantly different from the mean AAD in a European and Asian studies (p = 0.001). Conclusion: Trans-abdominal echocardiography carried out between the 18th and 38th week of gestation appears useful in screening for CHD especially if screening is performed by an experienced sonographer using a high resolution ultrasound machine. - Highlights: • The mean ascending aortic diameter (AAD) in the population is 4.59 ± 1.56 mm. • The mean AAD for male and female fetuses is 4.60 ± 0.57 mm and 4.58 ± 0.55 mm respectively. • No significant difference in mean AAD in the population with respect to sex (p = 0.8420). • Mean AAD in the population is significantly different from European and Asian data (p = 0.001). • Trans-abdominal echocardiography performed for CHD screening appears useful.
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded atotal population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
Directory of Open Access Journals (Sweden)
AR Silva
2011-03-01
Full Text Available O objetivo deste estudo foi determinar o tamanho apropriado de amostra por meio da técnica de simulação de subamostras para a caracterização de variáveis morfológicas de frutos de oito acessos (variedades de quatro espécies de pimenteira (Capsicum spp., que foram cultivadas em área experimental da UFPB. Foram analisados tamanhos reduzidos de amostras, variando de 3 a 29 frutos, com 100 amostras para cada tamanho simulado em um processo de amostragem com reposição de dados. Realizou-se análise de variância para os números mínimos de frutos por amostra que representasse a amostra de referência (30 frutos em cada variável estudada, constituindo um delineamento experimental inteiramente casualizado com duas repetições, onde cada dado representou o primeiro número de frutos na amostra simulada que não apresentou nenhum valor fora do intervalo de confiança da amostra de referência e que assim manteve-se até a última subamostra da simulação. A técnica de simulação utilizada permitiu obter, com a mesma precisão da amostra de 30 frutos, reduções do tamanho amostral em torno de 50%, dependendo da variável morfológica, não havendo diferenças entre os acessos.The appropriate sample size for the evaluation of morphological fruit traits of pepper was evaluated through a technique of simulation of subsamples. The treatments consisted of eight accessions of four pepper species (Capsicum spp., cultivated in an experimental area of the Universidade Federal da Paraíba. Small samples, ranging from 3 to 29 fruits were evaluated. For each sample size, 100 subsamples were simulated with data replacement. The data were submitted to analysis of variance, in a complete randomized design, for the minimum number of fruits per sample. Each collected data consisted of the first number of fruits in the simulated sample without values out of the confidence interval. This procedure was done up to the last subsample simulation. The
Internet addiction: reappraisal of an increasingly inadequate concept.
Starcevic, Vladan; Aboujaoude, Elias
2017-02-01
This article re-examines the popular concept of Internet addiction, discusses the key problems associated with it, and proposes possible alternatives. The concept of Internet addiction is inadequate for several reasons. Addiction may be a correct designation only for the minority of individuals who meet the general criteria for addiction, and it needs to be better demarcated from various patterns of excessive or abnormal use. Addiction to the Internet as a medium does not exist, although the Internet as a medium may play an important role in making some behaviors addictive. The Internet can no longer be separated from other potentially overused media, such as text messaging and gaming platforms. Internet addiction is conceptually too heterogeneous because it pertains to a variety of very different behaviors. Internet addiction should be replaced by terms that refer to the specific behaviors (eg, gaming, gambling, or sexual activity), regardless of whether these are performed online or offline.
Evidence Report: Risk of Inadequate Human-Computer Interaction
Holden, Kritina; Ezer, Neta; Vos, Gordon
2013-01-01
Human-computer interaction (HCI) encompasses all the methods by which humans and computer-based systems communicate, share information, and accomplish tasks. When HCI is poorly designed, crews have difficulty entering, navigating, accessing, and understanding information. HCI has rarely been studied in an operational spaceflight context, and detailed performance data that would support evaluation of HCI have not been collected; thus, we draw much of our evidence from post-spaceflight crew comments, and from other safety-critical domains like ground-based power plants, and aviation. Additionally, there is a concern that any potential or real issues to date may have been masked by the fact that crews have near constant access to ground controllers, who monitor for errors, correct mistakes, and provide additional information needed to complete tasks. We do not know what types of HCI issues might arise without this "safety net". Exploration missions will test this concern, as crews may be operating autonomously due to communication delays and blackouts. Crew survival will be heavily dependent on available electronic information for just-in-time training, procedure execution, and vehicle or system maintenance; hence, the criticality of the Risk of Inadequate HCI. Future work must focus on identifying the most important contributing risk factors, evaluating their contribution to the overall risk, and developing appropriate mitigations. The Risk of Inadequate HCI includes eight core contributing factors based on the Human Factors Analysis and Classification System (HFACS): (1) Requirements, policies, and design processes, (2) Information resources and support, (3) Allocation of attention, (4) Cognitive overload, (5) Environmentally induced perceptual changes, (6) Misperception and misinterpretation of displayed information, (7) Spatial disorientation, and (8) Displays and controls.
Energy Technology Data Exchange (ETDEWEB)
Belli, Sirio; Ellis, Richard S. [Department of Astronomy, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Newman, Andrew B. [The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101 (United States)
2015-02-01
We analyze the stellar populations of a sample of 62 massive (log M {sub *}/M {sub ☉} > 10.7) galaxies in the redshift range 1 < z < 1.6, with the main goal of investigating the role of recent quenching in the size growth of quiescent galaxies. We demonstrate that our sample is not biased toward bright, compact, or young galaxies, and thus is representative of the overall quiescent population. Our high signal-to-noise ratio Keck/LRIS spectra probe the rest-frame Balmer break region that contains important absorption line diagnostics of recent star formation activity. We obtain improved measures of the various stellar population parameters, including the star formation timescale τ, age, and dust extinction, by fitting templates jointly to both our spectroscopic and broadband photometric data. We identify which quiescent galaxies were recently quenched and backtrack their individual evolving trajectories on the UVJ color-color plane finding evidence for two distinct quenching routes. By using sizes measured in the previous paper of this series, we confirm that the largest galaxies are indeed among the youngest at a given redshift. This is consistent with some contribution to the apparent growth from recent arrivals, an effect often called progenitor bias. However, we calculate that recently quenched objects can only be responsible for about half the increase in average size of quiescent galaxies over a 1.5 Gyr period, corresponding to the redshift interval 1.25 < z < 2. The remainder of the observed size evolution arises from a genuine growth of long-standing quiescent galaxies.
Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki
2017-02-01
In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
Czuba, Jonathan A.; Straub, Timothy D.; Curran, Christopher A.; Landers, Mark N.; Domanski, Marian M.
2015-01-01
Laser-diffraction technology, recently adapted for in-stream measurement of fluvial suspended-sediment concentrations (SSCs) and particle-size distributions (PSDs), was tested with a streamlined (SL), isokinetic version of the Laser In-Situ Scattering and Transmissometry (LISST) for measuring volumetric SSCs and PSDs ranging from 1.8-415 µm in 32 log-spaced size classes. Measured SSCs and PSDs from the LISST-SL were compared to a suite of 22 datasets (262 samples in all) of concurrent suspended-sediment and streamflow measurements using a physical sampler and acoustic Doppler current profiler collected during 2010-12 at 16 U.S. Geological Survey streamflow-gaging stations in Illinois and Washington (basin areas: 38 – 69,264 km2). An unrealistically low computed effective density (mass SSC / volumetric SSC) of 1.24 g/ml (95% confidence interval: 1.05-1.45 g/ml) provided the best-fit value (R2 = 0.95; RMSE = 143 mg/L) for converting volumetric SSC to mass SSC for over 2 orders of magnitude of SSC (12-2,170 mg/L; covering a substantial range of SSC that can be measured by the LISST-SL) despite being substantially lower than the sediment particle density of 2.67 g/ml (range: 2.56-2.87 g/ml, 23 samples). The PSDs measured by the LISST-SL were in good agreement with those derived from physical samples over the LISST-SL's measureable size range. Technical and operational limitations of the LISST-SL are provided to facilitate the collection of more accurate data in the future. Additionally, the spatial and temporal variability of SSC and PSD measured by the LISST-SL is briefly described to motivate its potential for advancing our understanding of suspended-sediment transport by rivers.
Inadequate description of educational interventions in ongoing randomized controlled trials
Directory of Open Access Journals (Sweden)
Pino Cécile
2012-05-01
Full Text Available Abstract Background The registration of clinical trials has been promoted to prevent publication bias and increase research transparency. Despite general agreement about the minimum amount of information needed for trial registration, we lack clear guidance on descriptions of non-pharmacologic interventions in trial registries. We aimed to evaluate the quality of registry descriptions of non-pharmacologic interventions assessed in ongoing randomized controlled trials (RCTs of patient education. Methods On 6 May 2009, we searched for all ongoing RCTs registered in the 10 trial registries accessible through the World Health Organization International Clinical Trials Registry Platform. We included trials evaluating an educational intervention (that is, designed to teach or train patients about their own health and dedicated to participants, their family members or home caregivers. We used a standardized data extraction form to collect data related to the description of the experimental intervention, the centers, and the caregivers. Results We selected 268 of 642 potentially eligible studies and appraised a random sample of 150 records. All selected trials were registered in 4 registers, mainly ClinicalTrials.gov (61%. The median [interquartile range] target sample size was 205 [100 to 400] patients. The comparator was mainly usual care (47% or active treatment (47%. A minority of records (17%, 95% CI 11 to 23% reported an overall adequate description of the intervention (that is, description that reported the content, mode of delivery, number, frequency, duration of sessions and overall duration of the intervention. Further, for most reports (59%, important information about the content of the intervention was missing. The description of the mode of delivery of the intervention was reported for 52% of studies, the number of sessions for 74%, the frequency of sessions for 58%, the duration of each session for 45% and the overall duration for 63
Micronutrient Intake Is Inadequate for a Sample of Pregnant African-American Women.
Groth, Susan W; Stewart, Patricia A; Ossip, Deborah J; Block, Robert C; Wixom, Nellie; Fernandez, I Diana
2017-04-01
Micronutrient intake is critical for fetal development and positive pregnancy outcomes. Little is known about the adequacy of micronutrient intake in pregnant African-American women. To describe nutrient sufficiency and top food groups contributing to dietary intake of select micronutrients in low-income pregnant African-American women and determine whether micronutrient intake varies with early pregnancy body mass index (BMI) and/or gestational weight gain. Secondary analysis of data collected in a cohort study of pregnant African-American women. A total of 93 women aged 18 to 36 years, pregnant, with early pregnancy BMIs ≥18.5 and women with dietary intakes below Estimated Average Requirement (EAR) or Adequate Intake (AI) for vitamin D, folate, iron, calcium, and choline throughout pregnancy. Top food groups from which women derived these micronutrients was also determined. Descriptive statistics included means, standard deviations, and percentages. Percent of women reaching EAR or AI was calculated. The χ 2 test was used to assess micronutrient intake differences based on early pregnancy BMI and gestational weight gain. A large percentage of pregnant women did not achieve the EAR or AI from dietary sources alone; EAR for folate (66%), vitamin D (100%), iron (89%), and AI for choline (100%). Mean micronutrient intake varied throughout pregnancy. Top food sources included reduced-fat milk, eggs, and mixed egg dishes, pasta dishes, and ready-to-eat cereal. The majority of study participants had dietary micronutrient intake levels below EAR/AI throughout pregnancy. Findings suggest that practitioners should evaluate dietary adequacy in women to avoid deficits in micronutrient intake during pregnancy. Top food sources of these micronutrients can be considered when assisting women in improving dietary intake. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Initial treatment of severe malaria in children is inadequate – a study ...
African Journals Online (AJOL)
-medicated at home. Initial consultations are at primary local health facilities where less effective drugs are prescribed at inadequate dosages. Recommended ACTs were also often prescribed at inadequate dosages. Education in the use of ...
Sánchez-Pimienta, Tania G; López-Olmedo, Nancy; Rodríguez-Ramírez, Sonia; García-Guerra, Armando; Rivera, Juan A; Carriquiry, Alicia L; Villalpando, Salvador
2016-09-01
A National Health and Nutrition Survey (ENSANUT) conducted in Mexico in 1999 identified a high prevalence of inadequate mineral intakes in the population by using 24-h recall questionnaires. However, the 1999 survey did not adjust for within-person variance. The 2012 ENSANUT implemented a more up-to-date 24-h recall methodology to estimate usual intake distributions and prevalence of inadequate intakes. We examined the distribution of usual intakes and prevalences of inadequate intakes of calcium, iron, magnesium, and zinc in the Mexican population in groups defined according to sex, rural or urban area, geographic region of residence, and socioeconomic status (SES). We used dietary intake data obtained through the 24-h recall automated multiple-pass method for 10,886 subjects as part of ENSANUT 2012. A second measurement on a nonconsecutive day was obtained for 9% of the sample. Distributions of usual intakes of the 4 minerals were obtained by using the Iowa State University method, and the prevalence of inadequacy was estimated by using the Institute of Medicine's Estimated Average Requirement cutoff. Calcium inadequacy was 25.6% in children aged 1-4 y and 54.5-88.1% in subjects >5 y old. More than 45% of subjects >5 y old had an inadequate intake of iron. Less than 5% of children aged 12 y had inadequate intakes of magnesium, whereas zinc inadequacy ranged from <10% in children aged <12 y to 21.6% in men aged ≥20 y. Few differences were found between rural and urban areas, regions, and tertiles of SES. Intakes of calcium, iron, magnesium, and zinc are inadequate in the Mexican population, especially among adolescents and adults. These results suggest a public health concern that must be addressed. © 2016 American Society for Nutrition.
Schaepe, Nathaniel J.; Coleman, Anthony M.; Zelt, Ronald B.
2018-04-06
The U.S. Geological Survey (USGS), in cooperation with the U.S. Army Corps of Engineers, monitored a sediment release by Nebraska Public Power District from Spencer Dam located on the Niobrara River near Spencer, Nebraska, during the fall of 2014. The accumulated sediment behind Spencer Dam ordinarily is released semiannually; however, the spring 2014 release was postponed until the fall. Because of the postponement, the scheduled fall sediment release would consist of a larger volume of sediment. The larger than normal sediment release expected in fall 2014 provided an opportunity for the USGS and U.S. Army Corps of Engineers to improve the understanding of sediment transport during reservoir sediment releases. A primary objective was to collect continuous suspended-sediment data during the first days of the sediment release to document rapid changes in sediment concentrations. For this purpose, the USGS installed a laser-diffraction particle-size analyzer at a site near the outflow of the dam to collect continuous suspended-sediment data. The laser-diffraction particle-size analyzer measured volumetric particle concentration and particle-size distribution from October 1 to 2 (pre-sediment release) and October 5 to 9 (during sediment release). Additionally, the USGS manually collected discrete suspended-sediment and bed-sediment samples before, during, and after the sediment release. Samples were collected at two sites upstream from Spencer Dam and at three bridges downstream from Spencer Dam. The resulting datasets and basic metadata associated with the datasets were published as a data release; this report provides additional documentation about the data collection methods and the quality of the data.
Newcastle disease virus outbreaks: vaccine mismatch or inadequate application?
Dortmans, Jos C F M; Peeters, Ben P H; Koch, Guus
2012-11-09
Newcastle disease (ND) is one of the most important diseases of poultry, and may cause devastating losses in the poultry industry worldwide. Its causative agent is Newcastle disease virus (NDV), also known as avian paramyxovirus type 1. Many countries maintain a stringent vaccination policy against ND, but there are indications that ND outbreaks can still occur despite intensive vaccination. It has been argued that this may be due to antigenic divergence between the vaccine strains and circulating field strains. Here we present the complete genome sequence of a highly virulent genotype VII virus (NL/93) obtained from vaccinated poultry during an outbreak of ND in the Netherlands in 1992-1993. Using this strain, we investigated whether the identified genetic evolution of NDV is accompanied by antigenic evolution. In this study we show that a live vaccine that is antigenically adapted to match the genotype VII NL/93 outbreak strain does not provide increased protection compared to a classic genotype II live vaccine. When challenged with the NL/93 strain, chickens vaccinated with a classic vaccine were completely protected against clinical disease and mortality and virus shedding was significantly reduced, even with a supposedly suboptimal vaccine dose. These results suggest that it is not antigenic variation but rather poor flock immunity due to inadequate vaccination practices that may be responsible for outbreaks and spreading of virulent NDV field strains. Copyright © 2012 Elsevier B.V. All rights reserved.
Metabolic regulation during sport events: factual interpretations and inadequate allegations
Directory of Open Access Journals (Sweden)
Jacques Remy Poortmans
2013-09-01
Full Text Available Different fuels are available to generate ATP for muscle activities during sport events. Glycogen from striated muscles and liver stores may be converted to lactic acid or almost completely oxidized to carbon dioxide (CO2, triacylglycerol within the muscle itself and fatty acids from adipose tissue could be converted to CO2 in acting muscles, some free amino acids can be released within the muscle itself and from intestinal stores to sustain the amount of ATP generation indispensable for muscle contraction. All single biochemical reactions, but one, need one or several enzymes to activate the conversion of a substrate into a product. The energy transformation in biochemical reactions is led by application of so-called free energy. Reversible and non-reversible reactions within a metabolic pathway are dependent on specific enzymes near or far from equilibrium. Allosteric enzymes are regulatory enzymes that provide the direction in the pathway. A regulatory enzyme is either activated or inhibited by small regulators (ligands. A reversible substrate cycle between A and B is catalyzed by two enzymes with different fluxes. The need of ATP production for muscle contraction is under the leadership of regulatory enzymes and available substrate stores. The improvement of adapted metabolic reactions under sport training depends on the appropriate increase of regulatory enzymes within the glycolytic and oxidative pathways. The amount of some specific enzymes is increased by training in order to improve the maximum activity of the metabolic pathway. Unfortunately, several publications do not precisely implicate the appropriate enzyme(s to explain or reject the adaptation induced by the training schedule. A few examples will illustrate the factual interpretation and the inadequate allegation.
The association of xerostomia and inadequate intake in older adults.
Rhodus, N L; Brown, J
1990-12-01
Recent studies indicate that nearly one in five older adults has xerostomia (dry mouth). Salivary gland dysfunction and/or inadequate saliva increases the difficulty of these older adults in obtaining proper nutrition. Problems in lubricating, masticating, tolerating, tasting, and swallowing food contribute notably to the complex physiological and psychological manifestations of aging. To our knowledge, the literature has not demonstrated an association between xerostomia and malnutrition in the elderly. We randomly selected 67 older adults from institutionalized and free-living geriatric populations. Nutritional intake analysis was performed on both groups of study subjects, who were found to have xerostomia by use of sialometry, and on control subjects matched for age, sex, and physical status. Intake of total energy, protein, dietary fiber, total fat, saturated fat, cholesterol, sodium, potassium, vitamin A, vitamin C, thiamin, riboflavin, vitamin B-6, calcium, iron, and zinc was compared with the 1989 Recommended Dietary Allowances. Subjects' intakes were also compared with that of a control group. Medical systemic information and number and types of medications were compared among the groups. Statistical analysis of the data indicated significant (p less than .001) inadequacies in the nutritional intake patterns of institutionalized and free-living older adults with xerostomia. Subjects with xerostomia (more than 75% of the free-living and institutionalized seniors) had significant deficiencies of fiber, potassium, vitamin B-6, iron, calcium, and zinc. Taste and food perception were significantly reduced in the elders with xerostomia. Our study indicates the potential contribution of xerostomia to the high prevalence of geriatric malnutrition in the United States.
Inadequate pain relief among patients with primary knee osteoarthritis.
Laires, Pedro A; Laíns, Jorge; Miranda, Luís C; Cernadas, Rui; Rajagopalan, Srini; Taylor, Stephanie D; Silva, José C
Despite the widespread treatments for osteoarthritis (OA), data on treatment patterns, adequacy of pain relief, and quality of life are limited. The prospective multinational Survey of Osteoarthritis Real World Therapies (SORT) was designed to investigate these aspects. To analyze the characteristics and the patient reported outcomes of the Portuguese dataset of SORT at the start of observation. Patients ≥50 years with primary knee OA who were receiving oral or topical analgesics were eligible. Patients were enrolled from seven healthcare centers in Portugal between January and December 2011. Pain and function were evaluated using the Brief Pain Inventory (BPI) and WOMAC. Quality of life was assessed using the 12-Item Short Form Health Survey (SF-12). Inadequate pain relief (IPR) was defined as a score >4/10 on item 5 of the BPI. Overall, 197 patients were analyzed. The median age was 67.0 years and 78.2% were female. Mean duration of knee OA was 6.2 years. IPR was reported by 51.3% of patients. Female gender (adjusted odds ratio - OR 2.15 [95%CI 1.1, 4.5]), diabetes (OR 3.1 [95%CI 1.3, 7.7]) and depression (OR 2.24 [95%CI 1.2, 4.3]) were associated with higher risk of IPR. Patients with IPR reported worst outcomes in all dimensions of WOMAC (p<0.001) and in all eight domains and summary components of SF-12 (p<0.001). Our findings indicate that improvements are needed in the management of pain in knee OA in order to achieve better outcomes in terms of pain relief, function and quality of life. Copyright © 2016 Elsevier Editora Ltda. All rights reserved.
Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J
2013-01-01
The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.
Mohseni, Naimeh; Bahram, Morteza
2018-03-01
Herein, a rapid, sensitive and selective approach for the colorimetric detection of dopamine (DA) was developed utilizing unmodified gold nanoparticles (AuNPs). This assay relied upon the size-dependent aggregation behavior of DA and three other structurally similar catecholamines (CAs), offering highly specific and accurate detection of DA. By means of this study, we attempted to overcome the tedious procedures of surface premodifications and achieve selectivity through tuning the particle size of AuNPs. DA could induce the aggregation of the AuNPs via hydrogen-bonding interactions, resulting in a color change from pink to blue which can be monitored by spectrophotometry or even the naked-eye. The proposed colorimetric probe works over the 0.1 to 4 μM DA concentration range, with a lower detection limit (LOD) of 22 nM, which is much lower than the therapeutic lowest abnormal concentrations of DA in urine (0.57 μM) and blood (16 μM) samples. Furthermore, the selectivity and potential applicability of the developed method in spiked actual biological (human plasma and urine) specimens were investigated, suggesting that the present assay could satisfy the requirements for clinical diagnostics and biosensors.
White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E
2018-05-01
Many medical (and ecological) processes involve the change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprised two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change and no change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification-error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size ( n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation-errors.
Alizadeh, Taher; Shamkhali, Amir Naser
2016-01-15
A new chromatographic procedure, based upon chiral ligand-exchange principal, was developed for the resolution of salbutamol enantiomers. The separation was carried out on a C18 column. (l)-Alanine and Cu(2+) were applied as chiral resolving agent and complexing ion, respectively. The kind of copper salt had definitive effect on the enantioseparation. Density functional theory (DFT) was used to substantiate the effect of various anions, accompanying Cu(2+), on the formation of ternary complexes, assumed to be created during separation process. The DFT results showed that the anion kind had huge effect on the stability difference between two corresponding diastereomeric complexes and their chemical structures. It was shown that the extent of participation of the chiral selector in the ternary diastereomeric complexes formation was managed by the anion kind, affecting thus the enantioseparation efficiency of the developed method. Water/methanol (70:30) mixture containing (l)-alanine-Cu(2+) (4:1) was found to be the best mobile phase for salbutamol enantioseparation. In order to analyze sulbutamol enantiomers in plasma samples, racemic salbutamol was first extracted from the samples via nano-sized salbutamol-imprinted polymer and then enantioseparated by the developed method. Copyright © 2015 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Andre, F.; Cariou, R.; Antignac, J.P.; Le Bizec, B. [Ecole Nationale Veterinaire de Nantes (FR). Laboratoire d' Etudes des Residus et Contaminants dans les Aliments (LABERCA); Debrauwer, L.; Zalko, D. [Institut National de Recherches Agronomiques (INRA), 31-Toulouse (France). UMR 1089 Xenobiotiques
2004-09-15
The impact of brominated flame retardants on the environment and their potential risk for animal and human health is a present time concern for the scientific community. Numerous studies related to the detection of tetrabromobisphenol A (TBBP-A) and polybrominated diphenylethers (PBDEs) have been developed over the last few years; they were mainly based on GC-ECD, GC-NCI-MS or GC-EI-HRMS, and recently GC-EI-MS/MS. The sample treatment is usually derived from the analytical methods used for dioxins, but recently some authors proposed the utilisation of solid phase extraction (SPE) cartridges. In this study, a new analytical strategy is presented for the multi-residue analysis of TBBP-A and PBDEs from a unique reduced size sample. The main objective of this analytical development is to be applied for background exposure assessment of French population groups to brominated flame retardants, for which, to our knowledge, no data exist. A second objective is to provide an efficient analytical tool to study the transfer of these contaminants through the environment to living organisms, including degradation reactions and metabolic biotransformations.
Avoidable waste of research related to inadequate methods in clinical trials.
Yordanov, Youri; Dechartres, Agnes; Porcher, Raphaël; Boutron, Isabelle; Altman, Douglas G; Ravaud, Philippe
2015-03-24
To assess the waste of research related to inadequate methods in trials included in Cochrane reviews and to examine to what extent this waste could be avoided. A secondary objective was to perform a simulation study to re-estimate this avoidable waste if all trials were adequately reported. Methodological review and simulation study. Trials included in the meta-analysis of the primary outcome of Cochrane reviews published between April 2012 and March 2013. We collected the risk of bias assessment made by the review authors for each trial. For a random sample of 200 trials with at least one domain at high risk of bias, we re-assessed risk of bias and identified all related methodological problems. For each problem, possible adjustments were proposed that were then validated by an expert panel also evaluating their feasibility (easy or not) and cost. Avoidable waste was defined as trials with at least one domain at high risk of bias for which easy adjustments with no or minor cost could change all domains to low risk. In the simulation study, after extrapolating our re-assessment of risk of bias to all trials, we considered each domain rated as unclear risk of bias as missing data and used multiple imputations to determine whether they were at high or low risk. Of 1286 trials from 205 meta-analyses, 556 (43%) had at least one domain at high risk of bias. Among the sample of 200 of these trials, 142 were confirmed as high risk; in these, we identified 25 types of methodological problem. Adjustments were possible in 136 trials (96%). Easy adjustments with no or minor cost could be applied in 71 trials (50%), resulting in 17 trials (12%) changing to low risk for all domains. So the avoidable waste represented 12% (95% CI 7% to 18%) of trials with at least one domain at high risk. After correcting for incomplete reporting, avoidable waste due to inadequate methods was estimated at 42% (95% CI 36% to 49%). An important burden of wasted research is related to inadequate
Directory of Open Access Journals (Sweden)
Sakellariou Argiris
2012-10-01
Full Text Available Abstract Background A feature selection method in microarray gene expression data should be independent of platform, disease and dataset size. Our hypothesis is that among the statistically significant ranked genes in a gene list, there should be clusters of genes that share similar biological functions related to the investigated disease. Thus, instead of keeping N top ranked genes, it would be more appropriate to define and keep a number of gene cluster exemplars. Results We propose a hybrid FS method (mAP-KL, which combines multiple hypothesis testing and affinity propagation (AP-clustering algorithm along with the Krzanowski & Lai cluster quality index, to select a small yet informative subset of genes. We applied mAP-KL on real microarray data, as well as on simulated data, and compared its performance against 13 other feature selection approaches. Across a variety of diseases and number of samples, mAP-KL presents competitive classification results, particularly in neuromuscular diseases, where its overall AUC score was 0.91. Furthermore, mAP-KL generates concise yet biologically relevant and informative N-gene expression signatures, which can serve as a valuable tool for diagnostic and prognostic purposes, as well as a source of potential disease biomarkers in a broad range of diseases. Conclusions mAP-KL is a data-driven and classifier-independent hybrid feature selection method, which applies to any disease classification problem based on microarray data, regardless of the available samples. Combining multiple hypothesis testing and AP leads to subsets of genes, which classify unknown samples from both, small and large patient cohorts with high accuracy.
LaGasse, Linda L.; Wouldes, Trecia A.; Arria, Amelia M.; Wilcox, Tara; Derauf, Chris; Newman, Elana; Shah, Rizwan; Smith, Lynne M.; Neal, Charles R.; Huestis, Marilyn A.; DellaGrotta, Sheri; Lester, Barry M.
2013-01-01
This study compared patterns of prenatal care among mothers who used methamphetamine (MA) during pregnancy and non-using mothers in the US and New Zealand (NZ), and evaluated associations among maternal drug use, child protective services (CPS) referral, and inadequate prenatal care in both countries. The sample consisted of 182 mothers in the MA-Exposed and 196 in the Comparison groups in the US, and 107 mothers in the MA-Exposed and 112 in the Comparison groups in NZ. Positive toxicology results and/or maternal report of MA use during pregnancy were used to identify MA use. Information about sociodemographics, prenatal care and prenatal substance use was collected by maternal interview. MA-use during pregnancy is associated with lower socio-economic status, single marital status, and CPS referral in both NZ and the US. Compared to their non-using counterparts, MA-using mothers in the US had significantly higher rates of inadequate prenatal care. No association was found between inadequate care and MA-use in NZ. In the US, inadequate prenatal care was associated with CPS referral, but not in NZ. Referral to CPS for drug use only composed 40 % of all referrals in the US, but only 15 % of referrals in NZ. In our study population, prenatal MA-use and CPS referral eclipse maternal sociodemographics in explanatory power for inadequate prenatal care. The predominant effect of CPS referral in the US is especially interesting, and should encourage further research on whether the US policy of mandatory reporting discourages drug-using mothers from seeking antenatal care. PMID:22588827
Lieke, Kirsten I.; Kandler, Konrad; Emmel, Carmen; Ebert, Martin; Weinzierl, Bernadett; Schütz, Lothar; Petzold, Andreas; Weinbruch, Stephan
2010-05-01
The Saharan Mineral Dust Experiment (SAMUM) is dedicated to the understanding of the radiative effects of mineral dust. A field campaign was performed during the winter season in the region of Cape Verde Islands, where desert dust from the African continent, especially from the Sahel and Sahara regions mixes with aerosol from biomass burning (bush fires). Flights were conducted over the Atlantic ocean heading south, east and north, and above the Cape Verde islands to gain information about the spacial distribution and mixing state of this heterogeneous aerosol. Samples were collected with a micro intertial impaction system for each flightlevel on constant altitude. The size-resolved chemical composition was determined by single particle analysis with electron microsocopy and a coupled energy-dispersive X-ray detection. In a second step, selected particles were analysed using transmission electron microscopy and electron diffraction. The results reveal a vertical layer structure of biomass burning aerosol, dust layers and mixed layers. The chemical and mineralogial composition of aerosol of each layer was investigated. The dust layers contain high number abundances of silicate particles and silicate containing mixtures, whereby usually more than 90% of those mixtures contain sulfur. Soot and soot agglomerates are the dominating particle group in the biomass burning aerosol layers. K/S and K/Cl ratios give evidence that the biomass burning aerosol is aged. Soot particles were imaged by transmission electron microscopy in high resolution in order to investigate their morphology and structure. Particulate potassium sulfate or chloride could not be observed in mixture with soot, but it is found instead as separate particles to a small extent. Potassium contents are elevated for all biomass burning samples. Sulfate indices are high compared to other element indices for almost all flight samples, but a sulfate coating was not observed at high altitudes. Sulfate coatings
Two bite mark cases with inadequate scale references.
Bernstein, M L
1985-07-01
Most literature addressing comparisons between epidermal bite marks and the perpetrator's bite pattern mandates fastidious coordination between the size of the compared reproductions. While ideal, this is not possible in every case and inability to control this variable in selected cases may not necessarily invalidate the comparison. The first case involves a known perpetrator. All photographic measurements were recorded with acceptable techniques to discover a serious discrepancy in arch size. The second case was degraded by the absence of a ruler in a tangentially made photograph of a bite mark. In both cases, the weight of the conclusions were lessened by these problems but the impartial handling of the evidence and explanation of discrepancies offered credibility to the analyses. Both cases illustrate that a technical infraction in processing and recording bite marks, though serious, need not automatically preempt the analysis.
Gritti, Fabrice; Guiochon, Georges
2009-06-05
A general reduced HETP (height equivalent to a theoretical plate) equation is proposed that accounts for the mass transfer of a wide range of molecular weight compounds in monolithic columns. The detailed derivatization of each one of the individual and independent mass transfer contributions (longitudinal diffusion, eddy dispersion, film mass transfer resistance, and trans-skeleton mass transfer resistance) is discussed. The reduced HETPs of a series of small molecules (phenol, toluene, acenaphthene, and amylbenzene) and of a larger molecule, insulin, were measured on three research grade monolithic columns (M150, M225, M350) having different average pore size (approximately 150, 225, and 350 A, respectively) but the same dimension (100 mm x 4.6 mm). The first and second central moments of 2 muL samples were measured and corrected for the extra-column contributions. The h data were fitted to the new HETP equation in order to identify which contribution controls the band broadening in monolithic columns. The contribution of the B-term was found to be negligible compared to that of the A-term, even at very low reduced velocities (nu5), the C-term of the monolithic columns is controlled by film mass transfer resistance between the eluent circulating in the large throughpores and the eluent stagnant inside the thin porous skeleton. The experimental Sherwood number measured on the monolith columns increases from 0.05 to 0.22 while the adsorption energy increases by nearly 6 kJ/mol. Stronger adsorption leads to an increase in the value of the estimated film mass transfer coefficient when a first order film mass transfer rate is assumed (j proportional, variantk(f)DeltaC). The average pore size and the trans-skeleton mass transfer have no (<0.5%, small molecules) or little (<10%, insulin) effect on the overall C-term.
Perret, V.; Renaud, F.; Epinat, B.; Amram, P.; Bournaud, F.; Contini, T.; Teyssier, R.; Lambert, J.-C.
2014-02-01
Context. In Λ-CDM models, galaxies are thought to grow both through continuous cold gas accretion coming from the cosmic web and episodic merger events. The relative importance of these different mechanisms at different cosmic epochs is nevertheless not yet understood well. Aims: We aim to address questions related to galaxy mass assembly through major and minor wet merging processes in the redshift range 1 adaptive mesh-refinement code RAMSES, we build the Merging and Isolated high redshift Adaptive mesh refinement Galaxies (MIRAGE) sample. It is composed of 20 mergers and 3 isolated idealized disks simulations, which sample disk orientations and merger masses. Our simulations can reach a physical resolution of 7 parsecs, and include star formation, metal line cooling, metallicity advection, and a recent physically-motivated implementation of stellar feedback that encompasses OB-type stars radiative pressure, photo-ionization heating, and supernovae. Results: The star formation history of isolated disks shows a stochastic star formation rate, which proceeds from the complex behavior of the giant clumps. Our minor and major gas-rich merger simulations do not trigger starbursts, suggesting a saturation of the star formation due to the detailed accounting of stellar feedback processes in a turbulent and clumpy interstellar medium fed by substantial accretion from the circumgalactic medium. Our simulations are close to the normal regime of the disk-like star formation on a Schmidt-Kennicutt diagram. The mass-size relation and its rate of evolution in the redshift range 1 < z < 2 matches observations, suggesting that the inside-out growth mechanisms of the stellar disk do not necessarily require cold accretion. Appendix A is available in electronic form at http://www.aanda.org
Teixeira, Maria Cristina Triguero Veloz; Marino, Regina Luisa de Freitas; Carreiro, Luiz Renato Rodrigues
2015-01-01
Children and adolescents with ADHD present behaviors such as impulsiveness, inattention, and difficulties with personal organization that represent an overload for parents. Moreover, it also increases their level of stress and leads them to resort to inadequate educational strategies. The present study verifies associations between inadequate parenting practices and behavioral profiles of children and adolescents with ADHD. The sample was composed of 22 children with ADHD (age range 6-16 years) and their mothers. Spearman correlation analyses were made with the scores of Parenting Style Inventory (PSI) and Child Behavior Checklist for ages 6-18 (CBCL/6-18). Results indicate statistically significant associations between behavioral problems and the use of punishment practices and negligence. When assessing a child with ADHD, it is important to verify the predominant types of parenting practices that can influence both immediate interventions and the prognosis of the disorder.
Nomura, Shogo; Hirakawa, Akihiro; Hamada, Chikuma
2017-09-08
The selection of progression-free survival (PFS) or overall survival (OS) as the most suitable primary endpoint (PE) in oncology Phase 3 trials is currently under intense debate. Because of substantial limitations in the single use of PFS (or OS) as the PE, trial designs that include PFS and OS as co-primary endpoints are attracting increasing interest. In this article, we report on the formulation of determining the sample size for a trial that sequentially tests PFS and OS by treating them as co-PEs. Using a three-component model of OS, the proposed method overcomes the drawbacks of an existing method that requires unreasonable assumption of the exponential distribution for OS, although the hazard function is nonconstant because effective subsequent therapy has prolonged postprogression survival in recent oncology trials. Alternative estimation method of hazard ratio for OS under a three-component mode is also discussed by checking the appropriateness of assuming proportionality of hazards for OS. In order to examine the performance of our proposed method, we performed three numerical studies using both simulated and actual data of cancer Phase 3 trials. We find that the proposed method preserves a prespecified target value of power with a feasible increment of trial scale.
Directory of Open Access Journals (Sweden)
Carina Maciel Silva-Boghossian
2014-05-01
Full Text Available The present study investigated the effect of non-surgical periodontal treatment (SRP on the composition of the subgingival microbiota of chronic periodontitis (CP in individuals with type 2 diabetes (DM2 with inadequate metabolic control and in systemically healthy (SH individuals. Forty individuals (20 DM2 and 20 SH with CP underwent full-mouth periodontal examination. Subgingival plaque was sampled from 4 deep sites of each individual and tested for mean prevalence and counts of 45 bacterial taxa by the checkerboard method. Clinical and microbiological assessments were performed before and 3 months after SRP. At baseline, those in the DM2 group presented a significantly higher percentage of sites with visible plaque and bleeding on probing compared with those in the SH group (p < 0.01. Those in the DM2 group presented significantly higher levels of C. rectus and P. gingivalis, and lower prevalence of P. micra and S. anginosus, compared with those in the SH group (p ≤ 0.001. At the 3-month visit, both groups showed a significant improvement in all clinical parameters (p < 0.01. Those in the DM2 group showed significantly higher prevalence and/or levels of A. gerencseriae, A. naeslundii I, A. oris, A. odontolyticus, C. sputigena, F. periodonticum, and G. morbillorum compared with those in the SH group (p ≤ 0.001. However, those in the DM2 group showed a significant reduction in the levels of P. intermedia, P. gingivalis, T. forsythia, and T. denticola (p ≤ 0.001 over time. Those in the SRP group showed improved periodontal status and reduced levels of putative periodontal pathogens at 3 months’ evaluation compared with those in the DM2 group with inadequate metabolic control.
Inadequate humidification of respiratory gases during mechanical ventilation of the newborn.
Tarnow-Mordi, W O; Sutton, P; Wilkinson, A R
1986-01-01
Proximal airway humidity was measured during mechanical ventilation in 14 infants using an electronic hygrometer. Values below recommended minimum humidity of adult inspired gas were recorded on 251 of 396 occasions. Inadequate humidification, largely due to inadequate proximal airway temperature, is commoner than recognised in infants receiving mechanical ventilation. PMID:3740912
Inadequate humidification of respiratory gases during mechanical ventilation of the newborn.
Tarnow-Mordi, W O; Sutton, P; Wilkinson, A R
1986-01-01
Proximal airway humidity was measured during mechanical ventilation in 14 infants using an electronic hygrometer. Values below recommended minimum humidity of adult inspired gas were recorded on 251 of 396 occasions. Inadequate humidification, largely due to inadequate proximal airway temperature, is commoner than recognised in infants receiving mechanical ventilation.
Gupta, Manan; Joshi, Amitabh; Vidya, T. N. C.
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More impo...
Directory of Open Access Journals (Sweden)
ALFREDO RIBEIRO DE FREITAS
1999-11-01
Full Text Available O objetivo deste trabalho foi estimar o tamanho amostral mínimo (n para comparar tratamentos em experimentos de consumo e de digestibilidade com bovinos, envolvendo múltiplos caracteres. Foram utilizados dados de digestibilidade de 72 novilhas com média de 18 meses de idade e 250 kg de peso. O experimento foi realizado na Embrapa-Centro de Pesquisa de Pecuária do Sudeste, São Carlos, SP, de 1988 a 1989, em delineamento inteiramente casualizado, com nove tratamentos organizados em esquema fatorial 3 x 3 (três grupos genéticos: Canchim, ½ Canchim + ½ Nelore e Nelore, e três níveis de proteína bruta: 6, 10 e 13%, com oito repetições cada, sendo a unidade experimental a novilha. Foram analisados consumo de ração (g/kg0,75 por quilograma de peso metabólico, energia digestível, nitrogênio retido (NR, NR (mg/kg0,75 e digestibilidades da matéria seca, proteína bruta, fibra em detergente neutro e fibra em detergente ácido. O valor mínimo de n, que permite detectar diferenças significativas (delta entre vetores de médias de tratamentos, foi obtido por meio de um programa SAS (Statistical Analysis System, considerando modelo de distribuição normal t-variada, média zero e matriz de covariância sigma, estatística T² de Hotelling, distribuição F com parâmetro de não centralidade (d²D, erros do tipo I (alfa, poder do teste (1 beta e delta. O valor de n variou de 6 a 47, sendo mais influenciado por alteração nos valores de delta, do que nos valores de alfa e poder do teste.The objective of this work was to estimate minimum sample size (n for comparison of treatments in experiments of consumption and digestibility of cattle in a multitrait analysis. Data from digestibility of 72 heifers aging approximately 18 months and with 250 kg of live weight were used. A completely randomized experiment was conducted at Embrapa-Centro de Pesquisa de Pecuária do Sudeste, São Carlos, SP, Brazil, from 1988 to 1989, considering the
Inadequate access to surgeons: reason for disparate cancer care?
Bradley, Cathy J; Dahman, Bassam; Given, Charles W
2009-07-01
To compare the likelihood of seeing a surgeon between elderly dually eligible non-small-cell lung cancer (NSCLC) and colon cancer patients and their Medicare counterparts. Surgery rates between dually eligible and Medicare patients who were evaluated by a surgeon were also assessed. We used statewide Medicaid and Medicare data merged with the Michigan Tumor Registry to extract a sample of patients with a first primary NSCLC (n = 1100) or colon cancer (n = 2086). The study period was from January 1, 1997 to December 31, 2000. We assessed the likelihood of a surgical evaluation using logistic models that included patient characteristics, tumor stage, and census tracts. Among patients evaluated by a surgeon, we used logistic regression to predict if a resection was performed. Dually eligible patients were nearly half as likely to be evaluated by a surgeon as Medicare patients (odds ratio [OR] = 0.49; 95% confidence interval = 0.32, 0.77 and odds ratio = 0.59; 95% confidence interval = 0.41, 0.86 for NSCLC and colon cancer patients, respectively). Among patients who were evaluated by a surgeon, the likelihood of resection was not statistically significantly different between dually eligible and Medicare patients. This study suggests that dually eligible patients, in spite of having Medicaid insurance, are less likely to be evaluated by a surgeon relative to their Medicare counterparts. Policies and interventions aimed toward increasing access to specialists and complete diagnostic work-ups (eg, colonoscopy, bronchoscopy) are needed.
The significance of inadequate transcranial Doppler studies in children with sickle cell disease.
Directory of Open Access Journals (Sweden)
Simon Greenwood
Full Text Available Sickle cell disease (SCD is a common cause of cerebrovascular disease in childhood. Primary stroke prevention is effective using transcranial Doppler (TCD scans to measure intracranial blood velocities, and regular blood transfusions or hydroxycarbamide when these are abnormal. Inadequate TCD scans occur when it is not possible to measure velocities in all the main arteries. We have investigated the prevalence and significance of this in a retrospective audit of 3915 TCD scans in 1191 children, performed between 2008 and 2015. 79% scans were normal, 6.4% conditional, 2.8% abnormal and 12% inadequate. 21.6% of 1191 patients had an inadequate scan at least once. The median age of first inadequate scan was 3.3 years (0.7-19.4, with a U-shaped frequency distribution with age: 28% aged 2-3 years, 3.5% age 10 years, 25% age 16 years. In young children reduced compliance was the main reason for inadequate TCDs, whereas in older children it was due to a poor temporal ultrasound window. The prevalence of inadequate TCD was 8% in the main Vascular Laboratory at King's College Hospital and significantly higher at 16% in the outreach clinics (P<0.0001, probably due to the use of a portable ultrasound machine. Inadequate TCD scans were not associated with underlying cerebrovascular disease.
Yadlapati, Rena; Johnston, Elyse R; Gregory, Dyanna L; Ciolino, Jody D; Cooper, Andrew; Keswani, Rajesh N
2015-11-01
Adequate bowel preparation is essential to safe and effective inpatient colonoscopy. Predictors of poor inpatient colonoscopy preparation and the economic impacts of inadequate inpatient preparations are not defined. The aims of this study were to (1) determine risk factors for inadequate inpatient bowel preparations, and (2) examine the association between inadequate inpatient bowel preparation and hospital length of stay (LOS) and costs. We performed a retrospective cohort study of adult patients undergoing inpatient colonoscopy preparation over 12 months (1/1/2013-12/31/2013). Of 524 identified patients, 22.3% had an inadequate preparation. A multiple logistic regression model identified the following potential predictors of inadequate bowel preparation: lower income (OR 1.11; 95% CI 1.04, 1.22), opiate or tricyclic antidepressant (TCA) use (OR 1.55; 0.98, 2.46), and afternoon colonoscopy (OR 1.66; 1.07, 2.59); as well as American Society of Anesthesiologists (ASA) class ≥3 (OR 1.15; 1.05, 1.25) and symptoms of nausea/vomiting (OR 1.14; 1.04, 1.25) when a fair preparation was considered inadequate. Inadequate bowel preparation was associated with significantly increased hospital LOS (model relative mean estimate 1.25; 95% CI 1.03, 1.51) and hospital costs (estimate 1.31; 1.03, 1.67) when compared to adequate preparations. The rate of inadequate inpatient bowel preparations is high and associated with a significant increase in hospital LOS and costs. We identified five potential predictors of inadequate inpatient preparation: lower socioeconomic class, opiate/TCA use, afternoon colonoscopies, ASA class ≥3, and pre-preparation nausea/vomiting; these data should guide future initiatives to improve the quality of inpatient bowel preparations.
Cahan, Sorel
2010-01-01
Fiedler and Kareev (2006) showed that small samples can, in principle, outperform large samples in terms of the quality of contingency-based binary choice. The 1st part of this comment critically examines these authors' claim that this small sample advantage (SSA) contradicts Bernoulli's law of large numbers and concludes that this claim is…
Loh, Ern; Armstrong, April W; Fung, Maxwell A
2014-11-01
Evaluation of a potential immunobullous disorder typically requires two pieces of tissue obtained by skin biopsy: one placed in formalin for conventional microscopy and a second placed in a different transport medium suitable for direct immunofluorescence (DIF) testing. Clinical practice in this area is not standardized, with dermatologists either obtaining two biopsies or dividing (pre-bisecting) a single biopsy. Some DIF specimens are technically inadequate for interpretation of subepidermal imunobullous disorders because the basement membrane zone is not intact, but it is unknown whether pre-bisecting the tissue increases the risk of compromising the specimen. To investigate whether technically inadequate DIF specimens are associated with pre-bisection. DIF specimens were consecutively sampled from a single referral center and identified as whole (non-bisected) biopsy specimens or pre-bisected biopsy specimens. The proportion of inadequate specimens was calculated for both groups. A total of 3450 specimens were included. The percentage of inadequate specimens was 5.072% (153/3016) for whole (non-bisected) specimens and 5.299% for pre-bisected specimens. This difference was not significant (chi square, p = 0.84). The study was sufficiently powered to detect a relative risk of 1.685. Pre-bisection of a single skin biopsy does not significantly increase the risk of a technically inadequate specimen for direct immunofluorescence testing. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
DEFF Research Database (Denmark)
Kasperbauer, Tyler Joshua
2014-01-01
According to situationism in psychology, behavior is primarily influenced by external situational factors rather than internal traits or motivations such as virtues. Environmental ethicists wish to promote pro-environmental behaviors capable of providing adequate protection for the environment......, but situationist critiques suggest that character traits, and environmental virtues, are not as behaviorally robust as is typically supposed. Their views present a dilemma. Because ethicists cannot rely on virtues to produce pro-environmental behaviors, the only real way of salvaging environmental virtue theory...... positive results. However, because endorsing behaviorally ineffective virtues, for whatever reason, entails that environmental ethicists are abandoning the goal of helping and protecting the environment, environmental ethicists should consider looking elsewhere than virtues and focus instead on the role...
Particulate matter (PM) associated metals contribute to the adverse cardiopulmonary effects following exposure to air pollution. Here, we investigated how variation in the composition and size of ambient PM collected from two distinct regions in Mexico City relates to toxicity d...
A cotton ginning industry-supported project was initiated in 2008 and completed in 2013 to collect additional data for U.S. Environmental Protection Agency’s (EPA) Compilation of Air Pollution Emission Factors (AP-42) for PM10 and PM2.5. Stack emissions were collected using particle size distributio...
Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars
2016-10-01
The present work aims to show that high throughput imaging systems can be useful to estimate mesozooplankton community size and taxonomic descriptors that can be the base for consistent large scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) in order to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems. Time and cost-effective, automatic, techniques are of high interest in this context. An imaging-based protocol has been applied to a high frequency time series (every second day between April 2003 to April 2004 on average) of zooplankton obtained in a coastal site of the NW Mediterranean Sea, Villefranche Bay. One hundred eighty four mesozooplankton net collected samples were analysed with a Zooscan and an associated semi-automatic classification technique. The constitution of a learning set designed to maximize copepod identification with more than 10,000 objects enabled the automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and 7 other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). Also, total copepod size spectra underwent significant changes during the sampling period. These changes could be partially related to changes in the copepod assemblage taxonomic composition and size distributions. This study shows that the use of high throughput imaging systems is of great interest to extract relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics. Innovative
DEFF Research Database (Denmark)
Burild, Anders; Frandsen, Henrik Lauritz; Poulsen, Morten
2014-01-01
Most methods for the quantification of physiological levels of vitamin D3 and 25‐hydroxyvitamin D3 are developed for food analysis where the sample size is not usually a critical parameter. In contrast, in life science studies sample sizes are often limited. A very sensitive liquid chromatography...... with tandem mass spectrometry method was developed to quantify vitamin D3 and 25‐hydroxyvitamin D3 simultaneously in porcine tissues. A sample of 0.2–1 g was saponified followed by liquid–liquid extraction and normal‐phase solid‐phase extraction. The analytes were derivatized with 4‐phenyl‐1,2,4‐triazoline‐3...
Directory of Open Access Journals (Sweden)
Jaime Alberto Sánchez-Cuén
2013-03-01
Full Text Available Introduction: PPIs have been an enormous therapeutic advance in acid-related diseases. However, it has been detected an abuse in its consumption. The aim of this study was to determine the frequency of inadequate prescription of chronic use of PPIs in outpatients in a speciality hospital. Material and methods: we performed a cross-sectional descriptive study review. The study population were patients, chronic users of proton pump inhibitors (PPIs, attending outpatient consult in a hospital of government workers. We defined as chronic user of PPIs that patient that takes medication daily for over a year and inappropriate prescription, that one that has not been approved by the clinical guidelines. A simple random sampling was utilized. The following parameters were investigated: diagnosis and prescription of PPIs, time of use, at which level of care PPIs were prescribed (primary care or specialist, self-medication, with or without endoscopy. For the statistical analysis, we used Student's t-test and Chi-square, 95% confidence intervals and significance 0.05 %. Results: we reviewed 153 patients, 40 (26.1 % men and 113 (73.9 % women, mean age 58 ± 11.4 years. The prescription of chronic treatment with PPIs was adequate in 64.7 % of patients and inadequate in 35.3 %. The most common appropriate prescription (31.3 % of chronic use of PPIs was due to gastroesophageal reflux disease. The most common inadequate prescription was absence of diagnosis (22.2 %, polypharmacy without nonsteroidal antiinflammatory drugs (16.6 % and chronic gastritis (16.6 %. History of endoscopy were not statistically significant. Conclusions: the frequency of inappropriate prescriptions of chronic use of PPIs was high, around 35.3 %, similar to those reported in hospitals in developed countries.
Estis-Deaton, Asia; Sheiner, Eyal; Wainstock, Tamar; Landau, Daniella; Walfisch, Asnat
2017-12-01
To evaluate the impact of inadequate prenatal care on long-term morbidity among the offspring of an ethnic minority population. A retrospective population-based cohort analysis was performed among all Bedouin women with singleton pregnancies who delivered in a tertiary medical center in Israel between January 1, 1991, and January 1, 2014. Morbidity was defined as pediatric hospitalization across six distinct disease categories before 18 years of age. The cumulative morbidity rates were compared for offspring born following pregnancies with either inadequate (prenatal care facility) or adequate prenatal care. Overall, 127 396 neonates were included; 19 173 (15.0%) were born following inadequate prenatal care. Pediatric hospitalizations for all morbidities other than cardiovascular ones were less frequent among the inadequate prenatal care group than the adequate prenatal care group (Pprenatal care group, with the exception of cardiovascular disease. Inadequate prenatal care correlated with reduced pediatric hospitalization rates among offspring, possibly owing to a lack of child healthcare service utilization within the Bedouin population. © 2017 International Federation of Gynecology and Obstetrics.
Simpson, Kathleen Rice; Lyndon, Audrey; Ruhl, Catherine
2016-01-01
To evaluate responses of registered nurse members of the Association of Women's Health, Obstetric and Neonatal Nurses (AWHONN) to a survey that sought their recommendations for staffing guidelines and their perceptions of the consequences of inadequate nurse staffing. The goal was to use these member data to inform the work of the AWHONN nurse staffing research team. Secondary analysis of responses to the 2010 AWHONN nurse staffing survey. Online. AWHONN members (N = 884). Review of data from an online survey of AWHONN members through the use of thematic analysis for descriptions of the consequences of inadequate nurse staffing during the childbirth process. Three main themes emerged as consequences of inadequate staffing or being short-staffed: Missed Care, Potential for Failure to Rescue, and Job-Related Stress and Dissatisfaction. These themes are consistent with those previously identified in the literature related to inadequate nurse staffing. Based on the responses from participants in the 2010 AWHONN nurse staffing survey, consequences of inadequate staffing can be quite serious and may put patients at risk for preventable harm. Copyright © 2016 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.
Sluyter, John D; Scragg, Robert K R; Plank, Lindsay D; Waqa, Gade D; Fotu, Kalesita F; Swinburn, Boyd A
2013-10-12
The magnitude of the relationship between lifestyle risk factors for obesity and adiposity is not clear. The aim of this study was to clarify this in order to determine the level of importance of lifestyle factors in obesity aetiology. A cross-sectional analysis was carried out on data on youth who were not trying to change weight (n = 5714), aged 12 to 22 years and from 8 ethnic groups living in New Zealand, Australia, Fiji and Tonga. Demographic and lifestyle data were measured by questionnaires. Fatness was measured by body mass index (BMI), BMI z-score and bioimpedance analysis, which was used to estimate percent body fat and total fat mass (TFM). Associations between lifestyle and body composition variables were examined using linear regression and forest plots. TV watching was positively related to fatness in a dose-dependent manner. Strong, dose-dependent associations were observed between fatness and soft drink consumption (positive relationship), breakfast consumption (inverse relationship) and after-school physical activity (inverse relationship). Breakfast consumption-fatness associations varied in size across ethnic groups. Lifestyle risk factors for obesity were associated with percentage differences in body composition variables that were greatest for TFM and smallest for BMI. Lifestyle factors were most strongly related to TFM, which suggests that studies that use BMI alone to quantify fatness underestimate the full effect of lifestyle on adiposity. This study clarifies the size of lifestyle-fatness relationships observed in previous studies.
1980-08-01
o080Atalea Cal lolna 9" C’ lelephone 121 3) 79, .I 9" lee f6 5A2i A S.,&,osa’fr o MC CALIBRAT ION REPORT Date: 1/8/80 Instrument: ASSP-l 00-1 Size Size...c% c l a a 0 -1 a*’ ac e a***cca *a c a am a*ca c acca =aac ca~ac caaa U C C C C0 C C C C C C C C Q C C C C C C C> C C C C C C C CD C C C C CD 0 a C...C C atI 0 l C 0 .C C 0 0 QC 0 0 0 c CD C c 0 C C 0 Q 0 CIC C C0 C 0 C C> C go 1 C 0j C C C QC 0 QQO0COQC=C0c 00CC =1C Q0c cC aCCa CSQ --- -V -, -7
2013-01-01
Background The magnitude of the relationship between lifestyle risk factors for obesity and adiposity is not clear. The aim of this study was to clarify this in order to determine the level of importance of lifestyle factors in obesity aetiology. Methods A cross-sectional analysis was carried out on data on youth who were not trying to change weight (n = 5714), aged 12 to 22 years and from 8 ethnic groups living in New Zealand, Australia, Fiji and Tonga. Demographic and lifestyle data were measured by questionnaires. Fatness was measured by body mass index (BMI), BMI z-score and bioimpedance analysis, which was used to estimate percent body fat and total fat mass (TFM). Associations between lifestyle and body composition variables were examined using linear regression and forest plots. Results TV watching was positively related to fatness in a dose-dependent manner. Strong, dose-dependent associations were observed between fatness and soft drink consumption (positive relationship), breakfast consumption (inverse relationship) and after-school physical activity (inverse relationship). Breakfast consumption-fatness associations varied in size across ethnic groups. Lifestyle risk factors for obesity were associated with percentage differences in body composition variables that were greatest for TFM and smallest for BMI. Conclusions Lifestyle factors were most strongly related to TFM, which suggests that studies that use BMI alone to quantify fatness underestimate the full effect of lifestyle on adiposity. This study clarifies the size of lifestyle-fatness relationships observed in previous studies. PMID:24119635
Directory of Open Access Journals (Sweden)
David Normando
2011-12-01
method error in studies published in Brazil and in the United States of America. METHODS: Two major journals, according to CAPES (Brazilian Federal Agency for Support and Evaluation of Graduate Education, were analyzed through a hand search: Revista Dental Press de Ortodontia e Ortopedia Facial and the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO. Only papers published between 2005 and 2008 were examined. RESULTS: Most of surveys published in both journals employed some method of error analysis, when this methodology can be applied. On the other hand, only a very small number of articles published in these journals have any description of how sample size was calculated. This proportion was 21.1% for the journal published in the United States (AJO-DO, and was significantly lower (p= 0.008 for the journal of orthodontics published in Brazil (3.9%. CONCLUSION: Researchers and the editorial board of both journals should drive greater concern for the examination of errors inherent in the absence of such analyses in scientific research, particularly the errors related to the use of an inadequate sample size.
Directory of Open Access Journals (Sweden)
Farrar Jeremy
2011-02-01
Full Text Available Abstract Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of
1988-01-01
REAGENTS TO BE ADDED TO COMBUSTION PRODUCTSS..SAMPLED INTO 1 LITER GLASS FLASK Analyte Reagent HCN 10 ml, 0.01N, NaOH NH3 10 ml, 0.01N, H2S04 NOx + Sox 1G...positive or negative errors in the meusurement of trace elements by (a) contributing contaminants through leaching or surface desorption and (b) by
Skull-base Osteomyelitis: a Dreaded Complication after Trivial Fall and Inadequate Management
Directory of Open Access Journals (Sweden)
Kundan Mittal
2015-10-01
Full Text Available Introduction: Skull-based osteomyelitis is bony infection which generally originates from inadequately treated chronic infection, adjoining tissue infection or after trauma.Case: 11 month female child had a trivial fall while standing near a bucket. The child developed fracture of right clavicle and left orbital swelling which was inadequately treated. This resulted in in spread of infection to adjoining tissues, skull bones, sinuses and brain.Conclusion: Cranial base osteomyelitis is rare but dreaded condition which requires early diagnosis and prompt treatment to avoid mortality and morbidity in form of neurological deficits and permanent disability
Directory of Open Access Journals (Sweden)
Peter B. Gray
2012-07-01
Full Text Available We investigated body image in St. Kitts, a Caribbean island where tourism, international media, and relatively high levels of body fat are common. Participants were men and women recruited from St. Kitts (n = 39 and, for comparison, U.S. samples from universities (n = 618 and the Internet (n = 438. Participants were shown computer generated images varying in apparent body fat level and muscularity or breast size and they indicated their body type preferences and attitudes. Overall, there were only modest differences in body type preferences between St. Kitts and the Internet sample, with the St. Kitts participants being somewhat more likely to value heavier women. Notably, however, men and women from St. Kitts were more likely to idealize smaller breasts than participants in the U.S. samples. Attitudes regarding muscularity were generally similar across samples. This study provides one of the few investigations of body preferences in the Caribbean.
Energy Technology Data Exchange (ETDEWEB)
Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A., E-mail: amandagmgalante@gmail.com [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil). Departamento de Fisica e Quimica; Spada, E.R. [Universidade de Sao Paulo (USP), Ilha Solteira, SP (Brazil). Instituto de Fisica
2016-07-01
Titanium dioxide (TiO{sub 2}) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work was carried out the study of the residence time influence at a given temperature in the TiO{sub 2} powder physical properties. After the powder synthesis, the samples were divided and heat treated at 650 °C with a ramp up to 3 °C/min and a residence time ranging from 0 to 20 hours and subsequently characterized by x-ray diffraction. Analyzing the obtained diffraction patterns, it was observed that, from 5-hour residence time, began the two-distinct phase coexistence: anatase and rutile. It also calculated the average crystallite size of each sample. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
Behrooz, Reza Dahmardeh; Esmaili-Sari, Abbas; Bahramifar, Nader; Kaskaoutis, D. G.; Saeb, Keivan; Rajaei, Fatemeh
2017-04-01
This study analyzes the chemical composition (water-soluble ions and trace elements) of the total suspended particles (TSP) and particulate matter less than 10 and 2.5 μm (PM10 and PM2.5) in the Sistan basin, southeast Iran during the dusty and windy period June - October 2014. Extreme TSP, PM10 and PM2.5 concentrations, means of 1624.8, 433.4 and 320.8 μgm-3, respectively, were recorded in the Zabol sampling site, while the examined water-soluble ions and trace metals constitute small fractions (∼4.1%-17.7%) of the particulate masses. Intense winds on the dust-storm days result in weathering of soil crust and deflation of evaporate minerals from the dried Hamoun lake beds in the Sistan basin. The soil samples are rich in Ca2+, SO42-, Na+ and Cl- revealing the existence of non-sea salts, as well as in Al, Fe and Mg, while the similarity in the chemical composition between soil and airborne samples indicates that the dust events over Sistan are local in origin. In contrast, low concentrations of secondary ions (i.e., nitrate) and heavy metals (i.e., Pb, Cr, Ni, Cu) indicate less anthropogenic and industrial emissions. Enrichment Factor analysis for TSP, PM10 and PM2.5 reveals that the anthropogenic sources contribute a substantial amount in the heavy metals rather than soil crust, while Al, Fe, Sn, Mg are mostly of crustal origin. The results provide essential knowledge in atmospheric chemistry over Sistan and in establishing mitigation strategies for air pollution control.
... with hummus. To control your portion sizes when eating out, try these tips: Order the small size. Instead of a medium or large, ask for the smallest size. By eating a small hamburger instead of a large, you ...
Echavarría-Heras, Héctor; Leal-Ramírez, Cecilia; Villa-Diharce, Enrique; Cazarez-Castro, Nohe
2018-03-06
The effects of current anthropogenic influences on eelgrass (Zostera marina) meadows are noticeable. Eelgrass ecological services grant important benefits for mankind. Preservation of eelgrass meadows include several transplantation methods. Evaluation of establishing success relies on the estimation of standing stock and productivity. Average leaf biomass in shoots is a fundamental component of standing stock. Existing methods of leaf biomass measurement are destructive and time consuming. These assessments could alter shoot density in developing transplants. Allometric methods offer convenient indirect assessments of individual leaf biomass. Aggregation of single leaf projections produce surrogates for average leaf biomass in shoots. Involved parameters are time invariant, then derived proxies yield simplified nondestructive approximations. On spite of time invariance local factors induce relative variability of parameter estimates. This influences accuracy of surrogates. And factors like analysis method, sample size and data quality also impact precision. Besides, scaling projections are sensitive to parameter fluctuation. Thus the suitability of the addressed allometric approximations requires clarification. The considered proxies produced accurate indirect assessments of observed values. Only parameter estimates fitted from raw data using nonlinear regression, produced robust approximations. Data quality influenced sensitivity and sample size for an optimal precision. Allometric surrogates of average leaf biomass in eelgrass shoots offer convenient nondestructive assessments. But analysis method and sample size can influence accuracy in a direct manner. Standardized routines for data quality are crucial on granting cost-effectiveness of the method.
Higgins, P; Murray, M L; Williams, E M
1994-03-01
This descriptive, retrospective study examined levels of self-esteem, social support, and satisfaction with prenatal care in 193 low-risk postpartal women who obtained adequate and inadequate care. The participants were drawn from a regional medical center and university teaching hospital in New Mexico. A demographic questionnaire, the Coopersmith self-esteem inventory, the personal resource questionnaire part 2, and the prenatal care satisfaction inventory were used for data collection. Significant differences were found in the level of education, income, insurance, and ethnicity between women