Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^ estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
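The post hoc sample sizes reported above follow from a one-sample t-test power calculation evaluated at the estimated effect size and at the limits of its 95% CI. A minimal sketch of that calculation is given below, assuming Python with statsmodels; the effect-size values are illustrative, not the study's estimates.

# Sketch (Python): post hoc sample size from a one-sample t-test with 80% power
# at alpha = 0.05, evaluated at an estimated effect size (ES^) and at the limits
# of its 95% CI. The effect-size values below are illustrative.
import math
from statsmodels.stats.power import TTestPower

def n_for_effect(es, alpha=0.05, power=0.80):
    n = TTestPower().solve_power(effect_size=es, alpha=alpha, power=power,
                                 alternative='two-sided')
    return math.ceil(n)

es_hat, es_lower, es_upper = 0.65, 0.40, 0.90   # illustrative ES^ and 95% CI limits
# nL corresponds to ES^U (a larger effect needs fewer patients), nU to ES^L
print([n_for_effect(es) for es in (es_upper, es_hat, es_lower)])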
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Donkers, Hanneke; Graff, Maud; Vernooij-Dassen, Myrra; Nijhuis-van der Sanden, Maria; Teerenstra, Steven
2017-01-01
In randomized controlled trials, two endpoints may be necessary to capture the multidimensional concept of the intervention and the objectives of the study adequately. We show how to calculate sample size when defining success of a trial by combinations of superiority and/or non-inferiority aims for the endpoints. The randomized controlled trial design of the Social Fitness study uses two primary endpoints, which can be combined into five different scenarios for defining success of the trial. We show how to calculate power and sample size for each scenario and compare these for different settings of power of each endpoint and correlation between them. Compared to a single primary endpoint, using two primary endpoints often gives more power when success is defined as: improvement in one of the two endpoints and no deterioration in the other. This also gives better power than when success is defined as: improvement in one prespecified endpoint and no deterioration in the remaining endpoint. When two primary endpoints are equally important, but a positive effect in both simultaneously is not per se required, the objective of having one superior and the other (at least) non-inferior could make sense and reduce sample size. Copyright © 2016 Elsevier Inc. All rights reserved.
Reducing costs by reducing size
International Nuclear Information System (INIS)
Hayns, M.R.; Shepherd, J.
1991-01-01
The present paper discusses briefly the many factors, including capital cost, which have to be taken into account in determining whether a series of power stations based on a small nuclear plant can be competitive with a series based on traditional large unit sizes giving the guaranteed level of supply. The 320 MWe UK/US Safe Integral Reactor is described as a good example of how the factors discussed can be beneficially incorporated into a design using proven technology. Finally it goes on to illustrate how the overall costs of a generating system can indeed be reduced by use of the 320 MWe Safe Integral Reactor rather than conventional units of around 1200 MWe. (author). 9 figs
DEFF Research Database (Denmark)
Aukland, S M; Westerhausen, R; Plessen, K J
2011-01-01
BACKGROUND AND PURPOSE: Several studies suggest that VLBW is associated with a reduced CC size later in life. We aimed to clarify this in a prospective, controlled study of 19-year-olds, hypothesizing that those with LBWs had smaller subregions of CC than the age-matched controls, even after...... correcting for brain volume. MATERIALS AND METHODS: One hundred thirteen survivors of LBW (BW brain. The cross-sectional area of the CC (total callosal area, and the callosal subregions of the genu, truncus......, and posterior third) was measured. Callosal areas were adjusted for head size. RESULTS: The posterior third subregion of the CC was significantly smaller in individuals born with a LBW compared with controls, even after adjusting for size of the forebrain. Individuals who were born with a LBW had a smaller CC...
Directory of Open Access Journals (Sweden)
Smedslund Geir
2013-02-01
Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) (pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
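One common way to see why repeated measurements shrink the required sample size is to treat the analysed outcome as the mean of m correlated measurements; under a compound-symmetry (exchangeable) correlation the variance of that mean is reduced by the factor (1 + (m − 1)ρ)/m. The sketch below is illustrative only: the correlation ρ and effect values are assumptions, and this is not the exact calculation used in the study above.

# Sketch (Python): two-arm trial sample size when the analysed outcome is the
# mean of m repeated measurements with compound-symmetry correlation rho.
# All numeric inputs are illustrative.
import numpy as np
from scipy import stats

def n_per_group(delta, sd, m=1, rho=0.6, alpha=0.05, power=0.80):
    """Two-sample comparison of means (normal approximation)."""
    var_factor = (1 + (m - 1) * rho) / m          # variance of a mean of m correlated repeats
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * z ** 2 * sd ** 2 * var_factor / delta ** 2))

for m in (1, 2, 3, 5):
    print(m, n_per_group(delta=1.0, sd=2.0, m=m))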
Sample size determination and power
Ryan, Thomas P, Jr
2013-01-01
THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.
The large sample size fallacy.
Lantz, Björn
2013-06-01
Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
Concepts in sample size determination
Directory of Open Access Journals (Sweden)
Umadevi K Rao
2012-01-01
Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
How Sample Size Affects a Sampling Distribution
Mulekar, Madhuri S.; Siegel, Murray H.
2009-01-01
If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
Sample size in qualitative interview studies
DEFF Research Database (Denmark)
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane
2016-01-01
Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....
Eichmann, Cordula; Parson, Walther
2008-09-01
The traditional protocol for forensic mitochondrial DNA (mtDNA) analyses involves the amplification and sequencing of the two hypervariable segments HVS-I and HVS-II of the mtDNA control region. The primers usually span fragment sizes of 300-400 bp for each region, which may result in weak or failed amplification in highly degraded samples. Here we introduce an improved and more stable approach using shortened amplicons in the fragment range between 144 and 237 bp. Ten such amplicons were required to produce overlapping fragments that cover the entire human mtDNA control region. These were co-amplified in two multiplex polymerase chain reactions and sequenced with the individual amplification primers. The primers were carefully selected to minimize binding on homoplasic and haplogroup-specific sites that would otherwise result in loss of amplification due to mis-priming. The multiplexes have successfully been applied to ancient and forensic samples such as bones and teeth that showed a high degree of degradation.
Choosing a suitable sample size in descriptive sampling
International Nuclear Information System (INIS)
Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon
2010-01-01
Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
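The core idea of descriptive sampling mentioned above, deterministic equiprobable sample values combined with a random permutation per input variable, can be sketched as follows for a toy reliability problem; the limit state and distributions are invented for illustration, not taken from the paper.

# Sketch (Python): crude Monte Carlo vs. descriptive sampling for a toy
# reliability problem g(x1, x2) = x1 - x2, with failure when g < 0.
# Distributions, sample size and the limit state are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500
dists = [stats.norm(10, 1.5), stats.norm(7, 1.0)]   # x1, x2

# Crude Monte Carlo sampling: independent random draws
x_cmc = np.column_stack([d.rvs(n, random_state=rng) for d in dists])

# Descriptive sampling: deterministic equiprobable quantiles, randomly permuted per variable
u = (np.arange(n) + 0.5) / n
x_ds = np.column_stack([d.ppf(rng.permutation(u)) for d in dists])

for name, x in (("CMC", x_cmc), ("DS", x_ds)):
    pf = np.mean(x[:, 0] - x[:, 1] < 0.0)
    print(name, "estimated failure probability:", pf)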
Decision Support on Small size Passive Samples
Directory of Open Access Journals (Sweden)
Vladimir Popukaylo
2018-05-01
Full Text Available A technique was developed for constructing adequate mathematical models from small passive samples under conditions in which classical probabilistic-statistical methods do not permit valid conclusions.
Experimental determination of size distributions: analyzing proper sample sizes
International Nuclear Information System (INIS)
Buffo, A; Alopaeus, V
2016-01-01
The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)
Estimating Sample Size for Usability Testing
Directory of Open Access Journals (Sweden)
Alex Cazañas
2017-02-01
Full Text Available One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
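The 3-to-5-user claim discussed above is usually derived from the cumulative problem-discovery model P(found) = 1 − (1 − p)^n, where p is the probability that a single user exposes a given problem. A small sketch (the p values are illustrative) shows how sensitive the required number of users is to p, which is why the 5-user rule can underestimate:

# Sketch (Python): cumulative problem-discovery model commonly cited behind
# the "magic number 5" rule; p (per-user probability of exposing a problem)
# is illustrative.
import math

def users_needed(p, target=0.80):
    """Smallest n with 1 - (1 - p)**n >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

for p in (0.31, 0.15, 0.05):
    print(f"p={p}: need {users_needed(p)} users for 80% problem discovery")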
Sample size calculation in metabolic phenotyping studies.
Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J
2015-09-01
The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by Kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for Kernel and log-normal estimates, and in GNU Octave for log-normal estimates (Kernel density estimates are not robust enough in GNU octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Improved sample size determination for attributes and variables sampling
International Nuclear Information System (INIS)
Stirpe, D.; Picard, R.R.
1985-01-01
Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, is highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs
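For context, a textbook attributes-sampling calculation (not the paper's simulation-based approach) finds the smallest sample size that detects at least one of a given number of falsified items in a population with a required probability, using the hypergeometric distribution; the numbers below are illustrative.

# Sketch (Python): smallest attributes sample size n such that a random sample
# contains at least one of the falsified items with probability >= detection_prob.
# This is a textbook calculation, not the paper's simulation; inputs are illustrative.
from scipy.stats import hypergeom

def attributes_sample_size(population, falsified, detection_prob=0.95):
    for n in range(1, population + 1):
        # scipy's hypergeom takes (total items, number of "special" items, draws)
        p_miss = hypergeom(population, falsified, n).pmf(0)
        if 1.0 - p_miss >= detection_prob:
            return n
    return population

print(attributes_sample_size(population=300, falsified=10, detection_prob=0.95))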
Predicting sample size required for classification performance
Directory of Open Access Journals (Sweden)
Figueroa Rosa L
2012-02-01
Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 to 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
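A minimal sketch of the kind of weighted inverse-power-law fit described above is given below, using scipy's curve_fit; the learning-curve points, the weighting scheme and the extrapolation sizes are illustrative assumptions, not the authors' exact choices.

# Sketch (Python): fit an inverse power law learning curve with weighted
# nonlinear least squares and extrapolate classifier performance to larger
# annotated-sample sizes. Data points and weights are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def inv_power_law(x, a, b, c):
    # asymptotic performance a, approached at a rate governed by b and c
    return a - b * np.power(x, -c)

sizes = np.array([50, 100, 200, 400, 800])        # annotated training-set sizes
acc = np.array([0.62, 0.70, 0.76, 0.80, 0.83])    # observed accuracies (illustrative)
weights = np.sqrt(sizes / sizes.max())            # weight later, more stable points higher

popt, pcov = curve_fit(inv_power_law, sizes, acc,
                       p0=[0.9, 1.0, 0.5],
                       sigma=1.0 / weights, absolute_sigma=False)

for n in (1600, 3200):
    print(n, round(float(inv_power_law(n, *popt)), 3))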
Sample size for morphological traits of pigeonpea
Directory of Open Access Journals (Sweden)
Giovani Facco
2015-12-01
Full Text Available The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m², and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (e.g., number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed and confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for the semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (e.g., number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
Sample size allocation in multiregional equivalence studies.
Liao, Jason J Z; Yu, Ziji; Li, Yulan
2018-06-17
With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs could be accepted by regulatory authorities across regions and countries as the primary sources of evidence to support global marketing drug approval simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes the effective therapies available to patients all over the world simultaneously. However, there are many challenges both operationally and scientifically in conducting a drug development globally. One of many important questions to answer for the design of a multiregional study is how to partition sample size into each individual region. In this paper, two systematic approaches are proposed for the sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
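The multiplier-method estimate described above is N = M / P, and its uncertainty can be approximated with a delta-method standard error driven by the survey variance of P, inflated by a respondent-driven-sampling design effect. The sketch below illustrates this; all inputs, including the design effect, are illustrative assumptions rather than values from the Harare study.

# Sketch (Python): multiplier-method point estimate N = M / P with a
# delta-method standard error; P is estimated from a survey of size n with
# an assumed design effect. All numbers are illustrative.
import math

def multiplier_estimate(M, p_hat, n, design_effect=2.0):
    N_hat = M / p_hat
    var_p = design_effect * p_hat * (1 - p_hat) / n   # survey-based variance of P
    se_N = M * math.sqrt(var_p) / p_hat ** 2          # delta method: dN/dP = -M / P**2
    return N_hat, se_N

N_hat, se = multiplier_estimate(M=5000, p_hat=0.4, n=1200, design_effect=2.0)
print(round(N_hat), "+/-", round(1.96 * se))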
SIR (Safe Integral Reactor) - reducing size can reduce cost
International Nuclear Information System (INIS)
Hayns, M.R.
1991-01-01
Traditional engineering economics have favoured the advantages of larger size as a means of reducing specific capital costs and hence unit generating costs. For large and small plants utilising the same concept, e.g. a small four-loop PWR vs a large four-loop PWR with the same number of components, economies of scale are well established. If, however, a smaller plant is sized to take advantage of features which are only feasible at smaller outputs, is of simpler design, with the advantage taken of the simplified design to produce the most cost-effective layout, and incorporates fewer, more easily replaceable components with minimal assembly on site, it is possible to produce a plant which is competitive with larger plant of more traditional design. When 'system' effects, such as better matching of installed capacity to the growth in demand and the fact that a smaller total capacity will be needed to meet a given demand with a specified level of confidence, are taken into account, it can be shown that a utility's overall cash-flow position can be improved with lower associated absolute financial risks. The UK/US Safe Integral Reactor (SIR) is an integral pressurized water reactor in the 300-400 MW(e) range which utilises conventional water reactor technology in a way not feasible at the very large sizes of recent years. The SIR concept is briefly explained and its technical and economic advantages in terms of simplicity, construction, maintenance, availability, decommissioning, safety and siting described. The results of system analyses which demonstrate the overall financial advantages to a utility are presented. (author)
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables)/standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
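For a categorical outcome, the factors listed above combine in the familiar precision-based formula n = z²·p·(1 − p)/d², optionally corrected for a finite study population. A brief sketch with illustrative inputs:

# Sketch (Python): sample size to estimate a proportion with a given precision
# and confidence level, with an optional finite-population correction.
# The proportion, margin and population size are illustrative.
import math
from scipy import stats

def n_for_proportion(p, margin, conf=0.95, population=None):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    n = z ** 2 * p * (1 - p) / margin ** 2
    if population is not None:                       # finite-population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(n_for_proportion(p=0.30, margin=0.05))                   # large population
print(n_for_proportion(p=0.30, margin=0.05, population=2000))  # finite population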
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
Optimal sample size for probability of detection curves
International Nuclear Information System (INIS)
Annis, Charles; Gandossi, Luca; Martin, Oliver
2013-01-01
Highlights: • We investigate sample size requirement to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers, so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not
Preeminence and prerequisites of sample size calculations in clinical trials
Richa Singhal; Rakesh Rana
2015-01-01
The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary out...
Decision-making and sampling size effect
Ismariah Ahmad; Rohana Abd Rahman; Roda Jean-Marc; Lim Hin Fui; Mohd Parid Mamat
2010-01-01
Sound decision-making requires quality information. Poor information does not help in decision making. Among the sources of low-quality information, an important one is inadequate and inappropriate sampling. In this paper we illustrate the case of information collected on timber prices.
Analysis of reduced widths and size
International Nuclear Information System (INIS)
Sharma, H.C.; Ram Raj; Nath, N.
1977-01-01
Recent data on S-wave neutron reduced widths for a large number of nuclei have been analysed nucleus-wise and the calculations for the degree of freedom of the associated χ²-distribution have been made using the Porter and Thomas procedure. It is noted that a number of nuclei can be fitted by a χ²-distribution with degree of freedom one, while there are a few which are identified to follow a χ²-distribution with degree of freedom two and even more than two. The present analysis thus contradicts the usual presumption according to which the degree of freedom is taken to be always unity. An analytical attempt has also been made to ascertain the suitability of the data on reduced widths to be used for the analysis. These considerations are likely to modify the neutron cross-section evaluations. (author)
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size. Keywords.
Sample size determination in clinical trials with multiple endpoints
Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R
2015-01-01
This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...
Optimal Sample Size for Probability of Detection Curves
International Nuclear Information System (INIS)
Annis, Charles; Gandossi, Luca; Martin, Oliver
2012-01-01
The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
Preeminence and prerequisites of sample size calculations in clinical trials
Directory of Open Access Journals (Sweden)
Richa Singhal
2015-01-01
Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
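As a concrete illustration of how the calculation differs by outcome type, the sketch below gives standard normal-approximation per-arm sample sizes for a two-arm randomized trial with a continuous outcome and with a proportion outcome; the effect sizes are illustrative and the formulas are the usual textbook ones rather than anything specific to this article.

# Sketch (Python): per-arm sample sizes for a two-arm randomized trial,
# normal approximation, 80% power, two-sided alpha = 0.05. Effect sizes are illustrative.
import math
from scipy import stats

def n_continuous(delta, sd, alpha=0.05, power=0.80):
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

def n_proportion(p1, p2, alpha=0.05, power=0.80):
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return math.ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_continuous(delta=5.0, sd=12.0))   # e.g. 5-point mean difference, SD 12
print(n_proportion(p1=0.30, p2=0.45))     # e.g. 30% vs 45% response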
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research, and an appropriate estimate based on known information and statistical knowledge is of great significance. This article introduces methods of sample size and testing power estimation for difference tests on quantitative and qualitative data under a one-factor, two-level design, including the estimation formulas, their realization both directly and via the POWER procedure of SAS software, and worked examples, which will help researchers implement the repetition principle during the research design phase.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended for use in practice because of its lower computational load compared with the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
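To illustrate the general simulation-based logic (though not the multilevel longitudinal model used in the paper), the sketch below estimates the power of the Sobel test for a simple single-level mediation a·b at several sample sizes; all parameter values are assumptions chosen for illustration.

# Sketch (Python): Monte Carlo power of the Sobel test for a simple mediation
# model x -> m -> y (no direct effect assumed). Parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def _slope_se(x, y, covariate=None):
    """OLS slope of y on x (optionally adjusting for a covariate) and its SE."""
    cols = [np.ones_like(x), x] if covariate is None else [np.ones_like(x), x, covariate]
    X = np.column_stack(cols)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    sigma2 = np.sum((y - X @ beta) ** 2) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

def sobel_power(n, a=0.3, b=0.3, alpha=0.05, n_sim=2000):
    crit = stats.norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)     # mediator model
        y = b * m + rng.normal(size=n)     # outcome model
        a_hat, sa = _slope_se(x, m)
        b_hat, sb = _slope_se(m, y, covariate=x)
        z = (a_hat * b_hat) / np.sqrt(a_hat ** 2 * sb ** 2 + b_hat ** 2 * sa ** 2)
        hits += abs(z) > crit
    return hits / n_sim

for n in (50, 100, 200):
    print(n, sobel_power(n))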
Effect of sample size on bias correction performance
Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.
2014-05-01
The output of climate models often shows a bias when compared to observed data, so that a preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze if the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
40 CFR 80.127 - Sample size guidelines.
2010-07-01
Title 40, Protection of Environment; Environmental Protection Agency; Regulation of Fuels and Fuel Additives; Attest Engagements; § 80.127 Sample size guidelines. In performing the...
Sample Size in Qualitative Interview Studies: Guided by Information Power.
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit
2015-11-27
Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Sample sizes and model comparison metrics for species distribution models
B.B. Hanberry; H.S. He; D.C. Dey
2012-01-01
Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
Estimation of sample size and testing power (part 5).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-02-01
Estimation of sample size and testing power is an important component of research design. This article introduces methods for estimating sample size and testing power of difference tests for quantitative and qualitative data under single-group, paired, and crossover designs. Specifically, it presents the estimation formulas for these three designs, their realization both directly and via the POWER procedure of SAS software, and worked examples, which will help researchers implement the repetition principle.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Neuromuscular dose-response studies: determining sample size.
Kopman, A F; Lien, C A; Naguib, M
2011-02-01
Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
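The figures in this abstract can be checked, at least approximately, with an a priori one-sample two-tailed t-test power calculation: with a COV of 25%, an allowable error of ±15% of the mean corresponds to a standardized effect of 0.15/0.25 = 0.6. A sketch assuming Python with statsmodels:

# Sketch (Python): one-sample two-tailed t-test sample sizes for allowable
# errors expressed as a fraction of the mean, with COV = 25% (values from the abstract).
import math
from statsmodels.stats.power import TTestPower

cov = 0.25
for allowable_error in (0.15, 0.12, 0.10):
    effect_size = allowable_error / cov    # standardized mean shift to detect
    n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                                 power=0.80, alternative='two-sided')
    print(f"±{allowable_error:.0%}: n = {math.ceil(n)}")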
Sample size optimization in nuclear material control. 1
International Nuclear Information System (INIS)
Gladitz, J.
1982-01-01
Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
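One widely used large-sample version of such a formula (the paper derives variations that differ in how the variance under the null is estimated) is based on the variance of the log rate ratio, 1/(t·λ1) + 1/(t·λ2) + 2κ per pair of subjects. A sketch with illustrative rates, dispersion and follow-up time:

# Sketch (Python): common large-sample per-arm sample size for comparing two
# negative binomial event rates with equal allocation. Inputs are illustrative;
# the paper's variations differ in how the null variance is estimated.
import math
from scipy import stats

def n_per_arm_nb(rate1, rate2, dispersion, follow_up, alpha=0.05, power=0.80):
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    # variance contribution to the log rate-ratio per pair of subjects
    var = 1.0 / (follow_up * rate1) + 1.0 / (follow_up * rate2) + 2.0 * dispersion
    return math.ceil(z ** 2 * var / math.log(rate1 / rate2) ** 2)

print(n_per_arm_nb(rate1=1.0, rate2=0.7, dispersion=0.8, follow_up=1.0))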
[Practical aspects regarding sample size in clinical research].
Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S
1996-01-01
Knowing the right sample size lets us judge whether the results published in medical papers come from a suitable design and whether the conclusions are supported by the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula to use, we must define the type of study: a prevalence study, a study of means, or a comparative study. In this paper we explain some basic topics of statistics and we describe four simple examples of sample size estimation.
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
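A simple way to see the adjustment in code is the familiar design-effect route, in which unequal cluster sizes inflate the design effect through their coefficient of variation; this is a sketch of the general idea rather than the noncentrality-based relative efficiency developed in the article, and all parameter values are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def n_individual(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for an individually randomized comparison of two means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

def clusters_per_group(delta, sigma, mean_m, cv_m, icc, alpha=0.05, power=0.80):
    """Clusters per group using the design effect 1 + ((cv**2 + 1)*m - 1)*icc."""
    deff = 1 + ((cv_m ** 2 + 1) * mean_m - 1) * icc
    return ceil(n_individual(delta, sigma, alpha, power) * deff / mean_m)

# Detect a 0.3 SD difference with 20 subjects per cluster and ICC 0.05
print(clusters_per_group(delta=0.3, sigma=1.0, mean_m=20, cv_m=0.0, icc=0.05))  # equal sizes
print(clusters_per_group(delta=0.3, sigma=1.0, mean_m=20, cv_m=0.6, icc=0.05))  # variable sizes
```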
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...
Sample size computation for association studies using case–parents ...
Indian Academy of Sciences (India)
sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...
Sample size determination for equivalence assessment with multiple endpoints.
Sun, Anna; Dong, Xiaoyu; Tsong, Yi
2014-01-01
Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods, and illustrate them with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
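A sketch of the single-endpoint building block, using the usual normal approximation for TOST on two means in a parallel design; the margin, variability, and true difference below are hypothetical, and the paper's exact power function based on the joint distribution of mean and variance is more refined. With several uncorrelated endpoints that must all pass, the joint power is simply the product of the individual powers.

```python
from math import ceil
from scipy.stats import norm

def n_tost_parallel(margin, sigma, true_diff=0.0, alpha=0.05, power=0.80):
    """Approximate per-group n for equivalence (TOST) of two means, parallel design."""
    if true_diff == 0.0:
        zb = norm.ppf(1 - (1 - power) / 2)  # both one-sided tests must reject
    else:
        zb = norm.ppf(power)
    za = norm.ppf(1 - alpha)
    return ceil(2 * (sigma * (za + zb) / (margin - abs(true_diff))) ** 2)

# Hypothetical log-scale endpoint: margin 0.223 (80-125% limits), SD 0.3, true difference 0.05
print(n_tost_parallel(margin=0.223, sigma=0.3, true_diff=0.05))
```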
A flexible method for multi-level sample size determination
International Nuclear Information System (INIS)
Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.
1997-01-01
This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards questions). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness.
Directory of Open Access Journals (Sweden)
R. Eric Heidel
2016-01-01
Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
Revisiting sample size: are big trials the answer?
Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J
2012-07-18
The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional to randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
Sample size in psychological research over the past 30 years.
Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B
2011-04-01
The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.
Impact of shoe size in a sample of elderly individuals
Directory of Open Access Journals (Sweden)
Daniel López-López
Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and health overall. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, the measurements of the size of the feet and footwear were determined, and the scores were compared between the group that wears the correct size of shoes and another group of individuals who do not wear the correct size of shoes, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).
Selection dramatically reduces effective population size in HIV-1 infection
Directory of Open Access Journals (Sweden)
Mittler John E
2008-05-01
Full Text Available Abstract Background In HIV-1 evolution, a 100–100,000 fold discrepancy between census size and effective population size (Ne has been noted. Although it is well known that selection can reduce Ne, high in vivo mutation and recombination rates complicate attempts to quantify the effects of selection on HIV-1 effective size. Results We use the inbreeding coefficient and the variance in allele frequency at a linked neutral locus to estimate the reduction in Ne due to selection in the presence of mutation and recombination. With biologically realistic mutation rates, the reduction in Ne due to selection is determined by the strength of selection, i.e., the stronger the selection, the greater the reduction. However, the dependence of Ne on selection can break down if recombination rates are very high (e.g., r ≥ 0.1. With biologically likely recombination rates, our model suggests that recurrent selective sweeps similar to those observed in vivo can reduce within-host HIV-1 effective population sizes by a factor of 300 or more. Conclusion Although other factors, such as unequal viral reproduction rates and limited migration between tissue compartments contribute to reductions in Ne, our model suggests that recurrent selection plays a significant role in reducing HIV-1 effective population sizes in vivo.
Determining sample size for assessing species composition in ...
African Journals Online (AJOL)
Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...
Research Note Pilot survey to assess sample size for herbaceous ...
African Journals Online (AJOL)
A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...
Test of a sample container for shipment of small size plutonium samples with PAT-2
International Nuclear Information System (INIS)
Kuhn, E.; Aigner, H.; Deron, S.
1981-11-01
A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO2-powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)
Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B
2018-06-01
The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
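The core of such a calculation can be sketched as follows, using one common parameterization in which the within-period and between-period intracluster correlations enter a design effect of the form 1 + (m - 1)ρ - mη; the mortality rates, correlations, and cluster-period size below are hypothetical, and the paper's formulae should be consulted for the exact unstratified and stratified versions.

```python
from math import ceil
from scipy.stats import norm

def crxo_clusters(p0, p1, m, icc_within, icc_between, alpha=0.05, power=0.80):
    """Approximate number of ICUs for a two-period, cross-sectional CRXO trial on proportions."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_arm = z ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2
    deff = 1 + (m - 1) * icc_within - m * icc_between
    # Each ICU contributes m patients to each intervention (one cluster-period per arm)
    return ceil(n_per_arm * deff / m)

# Hypothetical: in-hospital mortality 10% vs 8.5%, 300 patients per ICU-period
print(crxo_clusters(p0=0.10, p1=0.085, m=300, icc_within=0.02, icc_between=0.015))
```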
Rock sampling. [method for controlling particle size distribution
Blum, P. (Inventor)
1971-01-01
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
Development of sample size allocation program using hypergeometric distribution
International Nuclear Information System (INIS)
Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik
1996-01-01
The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for 1. sample approximate-allocation with the correctly applied standard binomial approximation, 2. sample approximate-allocation with the improved binomial approximation, and 3. sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
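The difference between sampling with and without replacement is easy to see numerically: the sketch below computes, for a hypothetical inventory, the exact hypergeometric sample size needed to detect at least one defective item with a target probability, alongside a simple with-replacement approximation, which tends to overstate the required sample.

```python
from math import ceil, log
from scipy.stats import hypergeom

def detection_prob(N, d, n):
    """P(a sample of n from N contains at least one of d defective items), without replacement."""
    return 1.0 - hypergeom.pmf(0, N, d, n)

def n_hypergeometric(N, d, target=0.95):
    """Smallest n achieving the target detection probability (exact, without replacement)."""
    for n in range(1, N + 1):
        if detection_prob(N, d, n) >= target:
            return n
    return N

def n_with_replacement(N, d, target=0.95):
    """With-replacement approximation: miss probability (1 - d/N)**n."""
    return ceil(log(1 - target) / log(1 - d / N))

N, d = 500, 10  # hypothetical: 500 items, 10 of which would have to be falsified
print(n_hypergeometric(N, d), n_with_replacement(N, d))
```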
Estimation of sample size and testing power (Part 3).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2011-12-01
This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
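As an example of the first of these designs, a standard normal-approximation formula for a non-inferiority comparison of two proportions is sketched below; the response rates and margin are hypothetical, and the paper's own formulas and SAS code should be followed for the other designs.

```python
from math import ceil
from scipy.stats import norm

def n_noninferiority_proportions(p_control, p_test, margin, alpha=0.025, power=0.80):
    """Per-group n for a one-sided non-inferiority test of two proportions.

    H0: p_test - p_control <= -margin  versus  H1: p_test - p_control > -margin.
    """
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    return ceil(z ** 2 * variance / (p_test - p_control + margin) ** 2)

# Hypothetical: control response rate 80%, test assumed equal, margin of 10 percentage points
print(n_noninferiority_proportions(p_control=0.80, p_test=0.80, margin=0.10))
```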
Sample Size Calculation for Controlling False Discovery Proportion
Directory of Open Access Journals (Sweden)
Shulian Shang
2012-01-01
Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
An integrated approach for multi-level sample size determination
International Nuclear Information System (INIS)
Lu, M.S.; Teichmann, T.; Sanborn, J.B.
1997-01-01
Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by ''attributes'' involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications, are clearly described, and the process is put in a form that allows systematic generalization
Estimation of sample size and testing power (part 6).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-03-01
The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
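For the quantitative case, the usual route is the noncentral F distribution; the sketch below finds the per-group n for a one-way design with k levels given Cohen's f, with hypothetical inputs (the paper's own formulas and worked examples may differ in parameterization).

```python
from scipy.stats import f, ncf

def n_per_group_anova(effect_f, k, alpha=0.05, power=0.80):
    """Smallest per-group n for a one-way ANOVA with k levels and effect size Cohen's f."""
    n = 2
    while True:
        df1, df2 = k - 1, k * (n - 1)
        crit = f.ppf(1 - alpha, df1, df2)
        lam = effect_f ** 2 * k * n  # noncentrality parameter
        if 1 - ncf.cdf(crit, df1, df2, lam) >= power:
            return n
        n += 1

print(n_per_group_anova(effect_f=0.25, k=3))  # "medium" effect across three groups
```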
Simple and multiple linear regression: sample size considerations.
Hanley, James A
2016-11-01
The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright Â© 2016 Elsevier Inc. All rights reserved.
Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-05-01
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.
Sample size for monitoring sirex populations and their natural enemies
Directory of Open Access Journals (Sweden)
Susete do Rocio Chiarello Penteado
2016-09-01
Full Text Available The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread to about 1.000.000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae. The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size to monitor the S. noctilio population and the efficiency of their natural enemies, which was found to be perfectly adequate.
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...
Red maca (Lepidium meyenii reduced prostate size in rats
Directory of Open Access Journals (Sweden)
Rubio Julio
2005-01-01
Full Text Available Abstract Background Epidemiological studies have found that consumption of cruciferous vegetables is associated with a reduced risk of prostate cancer. This effect seems to be due to aromatic glucosinolate content. Glucosinolates are known for have both antiproliferative and proapoptotic actions. Maca is a cruciferous cultivated in the highlands of Peru. The absolute content of glucosinolates in Maca hypocotyls is relatively higher than that reported in other cruciferous crops. Therefore, Maca may have proapoptotic and anti-proliferative effects in the prostate. Methods Male rats treated with or without aqueous extracts of three ecotypes of Maca (Yellow, Black and Red were analyzed to determine the effect on ventral prostate weight, epithelial height and duct luminal area. Effects on serum testosterone (T and estradiol (E2 levels were also assessed. Besides, the effect of Red Maca on prostate was analyzed in rats treated with testosterone enanthate (TE. Results Red Maca but neither Yellow nor Black Maca reduced significantly ventral prostate size in rats. Serum T or E2 levels were not affected by any of the ecotypes of Maca assessed. Red Maca also prevented the prostate weight increase induced by TE treatment. Red Maca administered for 42 days reduced ventral prostatic epithelial height. TE increased ventral prostatic epithelial height and duct luminal area. These increases by TE were reduced after treatment with Red Maca for 42 days. Histology pictures in rats treated with Red Maca plus TE were similar to controls. Phytochemical screening showed that aqueous extract of Red Maca has alkaloids, steroids, tannins, saponins, and cardiotonic glycosides. The IR spectra of the three ecotypes of Maca in 3800-650 cm (-1 region had 7 peaks representing 7 functional chemical groups. Highest peak values were observed for Red Maca, intermediate values for Yellow Maca and low values for Black Maca. These functional groups correspond among others to benzyl
Cannabidiol Reduces Leukemic Cell Size - But Is It Important?
Kalenderoglou, Nikoletta; Macpherson, Tara; Wright, Karen L
2017-01-01
The anti-cancer effect of the plant-derived cannabinoid, cannabidiol, has been widely demonstrated both in vivo and in vitro . However, this body of preclinical work has not been translated into clinical use. Key issues around this failure can be related to narrow dose effects, the cell model used and incomplete efficacy. A model of acute lymphoblastic disease, the Jurkat T cell line, has been used extensively to study the cannabinoid system in the immune system and cannabinoid-induced apoptosis. Using these cells, this study sought to investigate the outcome of those remaining viable cells post-treatment with cannabidiol, both in terms of cell size and tracking any subsequent recovery. The phosphorylation status of the mammalian Target of Rapamycin (mTOR) signaling pathway and the downstream target ribosomal protein S6, were measured. The ability of cannabidiol to exert its effect on cell viability was also evaluated in physiological oxygen conditions. Cannabidiol reduced cell viability incompletely, and slowed the cell cycle with fewer cells in the G2/M phase of the cell cycle. Cannabidiol reduced phosphorylation of mTOR, PKB and S6 pathways related to survival and cell size. The remaining population of viable cells that were cultured in nutrient rich conditions post-treatment were able to proliferate, but did not recover to control cell numbers. However, the proportion of viable cells that were gated as small, increased in response to cannabidiol and normally sized cells decreased. This proportion of small cells persisted in the recovery period and did not return to basal levels. Finally, cells grown in 12% oxygen (physiological normoxia) were more resistant to cannabidiol. In conclusion, these results indicate that cannabidiol causes a reduction in cell size, which persists post-treatment. However, resistance to cannabidiol under physiological normoxia for these cells would imply that cannabidiol may not be useful in the clinic as an anti-leukemic agent.
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
2013-10-26
Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
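The first-stage clustering can be captured by treating each cluster total as beta-binomial and convolving across clusters; the sketch below computes the two misclassification risks for a hypothetical decision rule. This illustrates the type of calculation involved rather than the authors' published design algorithm, and it needs SciPy 1.4 or later for scipy.stats.betabinom.

```python
import numpy as np
from scipy.stats import betabinom

def cluster_total_pmf(n_clusters, m, p, icc):
    """PMF of the total number of 'failures' across n_clusters clusters of size m,
    modelling each cluster as beta-binomial with mean p and intracluster correlation icc."""
    theta = (1 - icc) / icc
    a, b = p * theta, (1 - p) * theta
    per_cluster = betabinom.pmf(np.arange(m + 1), m, a, b)
    pmf = np.array([1.0])
    for _ in range(n_clusters):
        pmf = np.convolve(pmf, per_cluster)
    return pmf

def misclassification_risks(n_clusters, m, decision_rule, p_good, p_bad, icc):
    """alpha: accept although the true failure rate is p_bad; beta: reject although it is p_good."""
    alpha = cluster_total_pmf(n_clusters, m, p_bad, icc)[: decision_rule + 1].sum()
    beta = cluster_total_pmf(n_clusters, m, p_good, icc)[decision_rule + 1:].sum()
    return alpha, beta

# Hypothetical: 6 clusters of 5 records, accept if at most 4 errors, thresholds 10% vs 30%
print(misclassification_risks(6, 5, 4, p_good=0.10, p_bad=0.30, icc=0.1))
```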
Sample size reduction in groundwater surveys via sparse data assimilation
Hussain, Z.; Muhammad, A.
2013-01-01
In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even lesser measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.
On sample size and different interpretations of snow stability datasets
Schirmer, M.; Mitterer, C.; Schweizer, J.
2009-04-01
Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: ‘how capable are such stability interpretations in drawing conclusions'. There are at least three possible errors sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale, and (iii) that the stability interpretation might not be directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional scale stability variations will be quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined if the complete dataset consists of an appropriate sample size. (ii) Smaller subsets were created with similar
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects in the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle has been addressed in previous studies, but no attempt to critically investigate these issues has been proposed in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples from 1 to 15 needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied on adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined by a Coulter Counter. These results were then fit in a finite mixture model to obtain distribution parameters of each sample. To evaluate the benefits of increasing number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement on the estimation of the overall adipocyte cellularity parameters was observed using both sampling techniques when sample size number increased from 1 to 15 samples, considering both techniques' acceptance ratio increased from approximately 3 to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameters estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
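The 299/59/29 figures quoted above can be reproduced with the zero-failure success-run relation n = ln(1 - C)/ln(R) for confidence C and reliability R; the snippet below is a plain restatement of that relation rather than the paper's full Bayesian treatment.

```python
from math import ceil, log

def success_run_n(confidence, reliability):
    """Zero-failure success-run sample size: consecutive passes needed to claim
    the stated reliability at the stated confidence."""
    return ceil(log(1 - confidence) / log(reliability))

for reliability in (0.99, 0.95, 0.90):  # high-, medium-, and low-risk factors
    print(reliability, success_run_n(0.95, reliability))  # 299, 59, 29
```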
Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling
DEFF Research Database (Denmark)
Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas
2013-01-01
sparse signals, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. This proposed method reduces the size of the dictionary matrix, the size...
Reduced oxygen at high altitude limits maximum size.
Peck, L S; Chapelle, G
2003-11-07
The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions have long been recognized, but remained enigmatic until a recent study showed it to be an effect of increased oxygen availability in sea water of a low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than other low-salinity sites (Caspian Sea and Lake Baikal).
Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame
International Nuclear Information System (INIS)
Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.
2001-01-01
Growth characteristics of silica particles have been studied experimentally using in situ particle sampling technique from H 2 /O 2 /Tetraethylorthosilicate (TEOS) diffusion flame with carefully devised sampling probe. The particle morphology and the size comparisons are made between the particles sampled by the local thermophoretic method from the inside of the flame and by the electrostatic collector sampling method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image processed data of these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurement. TEM image analysis of two sampling methods showed a good agreement with SMPS measurement. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation process and sintering process in the flame. As the flame temperature increases, the effect of coalescence or sintering becomes an important particle growth mechanism which reduces the coagulation process. However, if the flame temperature is not high enough to sinter the aggregated particles then the coagulation process is a dominant particle growth mechanism. In a certain flame condition a secondary particle formation is observed which results in a bimodal particle size distribution
Analysis of time series and size of equivalent sample
International Nuclear Information System (INIS)
Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge
2004-01-01
In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify in the data series subperiods with a similar structure. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first one corresponding to seven simulated series with autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions.
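A standard first-order version of the equivalent (effective) sample size, n_eff = n(1 - ρ)/(1 + ρ) for an AR(1) process with lag-1 autocorrelation ρ, can be sketched as below; the article's exact definition and adjustments may differ, and the simulated series is only illustrative.

```python
import numpy as np

def effective_sample_size_ar1(x):
    """Equivalent sample size under an AR(1) persistence model."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x = x - x.mean()
    rho = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation
    return n * (1 - rho) / (1 + rho)

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):  # AR(1) series with rho = 0.6
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
print(effective_sample_size_ar1(y))  # roughly 500 * 0.4 / 1.6 = 125
```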
Chèneby, D; Brauman, A; Rabary, B; Philippot, L
2009-05-01
The main objective of this study was to determine how the size, structure, and activity of the nitrate reducer community were affected by adoption of a conservative tillage system as an alternative to conventional tillage. The experimental field, established in Madagascar in 1991, consists of plots subjected to conventional tillage or direct-seeding mulch-based cropping systems (DM), both amended with three different fertilization regimes. Comparisons of size, structure, and activity of the nitrate reducer community in samples collected from the top layer in 2005 and 2006 revealed that all characteristics of this functional community were affected by the tillage system, with increased nitrate reduction activity and numbers of nitrate reducers under DM. Nitrate reduction activity was also stimulated by combined organic and mineral fertilization but not by organic fertilization alone. In contrast, both negative and positive effects of combined organic and mineral fertilization on the size of the nitrate reducer community were observed. The size of the nitrate reducer community was a significant predictor of the nitrate reduction rates except in one treatment, which highlighted the inherent complexities in understanding the relationships between the size, diversity, and structure of functional microbial communities along environmental gradients.
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Directory of Open Access Journals (Sweden)
Malhotra Rajeev
2010-01-01
Full Text Available Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily either at their convenience, or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and 95% confidence level using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram could be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with required precision and 95% confidence level. Sample size at 90% and 99% confidence level, respectively, can also be obtained by just multiplying 0.70 and 1.75 with the number obtained for the 95% confidence level. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. This can also be applied for reverse calculations. This nomogram is not applicable for testing of the hypothesis set-up and is applicable only when both diagnostic test and gold standard results have a dichotomous category.
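The underlying calculation is the usual binomial precision formula inflated by the disease prevalence, since only diseased subjects inform the sensitivity estimate; a sketch with hypothetical inputs follows (the nomogram itself encodes the same relationship graphically).

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sensitivity, precision, prevalence, conf=0.95):
    """Total subjects needed to estimate sensitivity to within +/- precision.

    Swap prevalence for (1 - prevalence) to size a specificity estimate instead.
    """
    z = norm.ppf(1 - (1 - conf) / 2)
    n_diseased = z ** 2 * sensitivity * (1 - sensitivity) / precision ** 2
    return ceil(n_diseased / prevalence)

# Hypothetical: anticipated sensitivity 90%, precision ±5%, disease prevalence 20%
print(n_for_sensitivity(sensitivity=0.90, precision=0.05, prevalence=0.20))
```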
The ability of winter grazing to reduce wildfire size, intensity ...
A recent study by Davies et al. sought to test whether winter grazing could reduce wildfire size, fire behavior metrics, and fire-induced plant mortality in shrub-grasslands. The authors concluded that ungrazed rangelands may experience more fire-induced mortality of native perennial bunchgrasses. The authors also presented several statements regarding the benefits of winter grazing on post-fire plant community responses. However, this commentary will show that the study by Davies et al. has underlying methodological flaws, lacks the data necessary to support its conclusions, and does not provide an accurate discussion on the effect of grazing on rangeland ecosystems. Importantly, Davies et al. presented no data on the post-fire mortality of the perennial bunchgrasses or on the changes in plant community composition following their experimental fires. Rather, Davies et al. inferred these conclusions based on their observed fire behavior metrics of maximum temperature and a term described as the "heat load". However, neither metric is appropriate for elucidating the heat flux impacts on plants. This lack of post-fire data, several methodological flaws, and the use of inadequate metrics describing heat cast doubts on the authors' ability to support their stated conclusions. This article is a commentary that highlights the scientific shortcomings in a forthcoming paper by Davies et al. in the International Journal of Wildland Fire. The study has methodological flaws.
Sample Size of One: Operational Qualitative Analysis in the Classroom
Directory of Open Access Journals (Sweden)
John Hoven
2015-10-01
Full Text Available Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one). These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.
Sample size for post-marketing safety studies based on historical controls.
Wu, Yu-te; Makuch, Robert W
2010-08-01
As part of a drug's entire life cycle, post-marketing studies are an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. Exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
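A much-simplified single-cohort version of the Poisson reasoning is sketched below: the number of exposed subjects needed to observe at least one rare adverse event with a chosen probability. The hybrid two-group design with historical controls in the paper requires its own exact formula, and the event rate used here is hypothetical.

```python
from math import ceil, log
from scipy.stats import poisson

def n_to_observe_at_least_one(rate_per_subject, prob=0.95):
    """Subjects needed so that at least one event is seen with the given probability,
    assuming events follow a Poisson process with the stated per-subject rate."""
    return ceil(-log(1 - prob) / rate_per_subject)

def prob_at_least_k(n, rate_per_subject, k):
    """Exact Poisson probability of observing at least k events among n subjects."""
    return 1 - poisson.cdf(k - 1, n * rate_per_subject)

n = n_to_observe_at_least_one(rate_per_subject=1 / 2000)  # adverse event rate of 1 in 2,000
print(n, prob_at_least_k(n, 1 / 2000, k=1))
```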
Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin
2014-01-01
This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
A software sampling frequency adaptive algorithm for reducing spectral leakage
Institute of Scientific and Technical Information of China (English)
PAN Li-dong; WANG Fei
2006-01-01
Spectral leakage caused by synchronization error in a nonsynchronous sampling system is an important factor that reduces the accuracy of spectral analysis and harmonic measurement. This paper presents a software sampling-frequency adaptive algorithm that obtains the actual signal frequency more accurately, then adjusts the sampling interval based on the frequency calculated by the software algorithm and modifies the sampling frequency adaptively. It can reduce synchronization error and the impact of spectral leakage, thereby improving the accuracy of spectral analysis and harmonic measurement for power system signals whose frequency changes slowly. As the simulations show, the algorithm has high precision, and it can be a practical method for power system harmonic analysis since it is easy to implement.
14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices
Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet
2017-12-01
Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample size, we show that our MSCs can be used for CO2 samples of as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests, was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
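A hedged sketch of this kind of parametric-bootstrap power calculation; the gamma mean-to-variance relation and all numerical values below are placeholders, not the fitted West Virginia relationship.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def variance_from_mean(mu):
    # Placeholder mean-to-variance relation; the study fits its own
    # empirical relation from the fish collection data.
    return 1.5 * mu ** 1.2

def power_above_threshold(true_mean, threshold, n_fish, alpha=0.05, n_boot=5000):
    """Probability that a one-sided one-sample t-test on n_fish gamma-distributed
    concentrations detects that the mean exceeds the management threshold."""
    var = variance_from_mean(true_mean)
    shape = true_mean ** 2 / var            # gamma parameterised by mean and variance
    scale = var / true_mean
    rejections = 0
    for _ in range(n_boot):
        sample = rng.gamma(shape, scale, size=n_fish)
        t, p_two_sided = stats.ttest_1samp(sample, popmean=threshold)
        if t > 0 and p_two_sided / 2 < alpha:   # one-sided test, H1: mean > threshold
            rejections += 1
    return rejections / n_boot

for n in (4, 8, 16, 32):
    print(n, round(power_above_threshold(true_mean=5.0, threshold=4.0, n_fish=n), 3))
```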
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Energy Technology Data Exchange (ETDEWEB)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
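For reference, a conventional a priori calculation of the kind discussed can be done in a few lines; the effect size below is illustrative.

```python
from statsmodels.stats.power import TTestIndPower

# Required per-group n for a two-sample t-test to detect a standardized
# mean difference (Cohen's d) of 0.5 with 80% power at alpha = 0.05.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(round(n_per_group))   # roughly 64 per group
```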
A contemporary decennial global sample of changing agricultural field sizes
White, E.; Roy, D. P.
2011-12-01
In the last several hundred years agriculture has caused significant human induced Land Cover Land Use Change (LCLUC) with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation in buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected guided by a global map of agricultural yield and literature review and were selected to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries and the temporal changes in field size quantified and their causes discussed.
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Implications of clinical trial design on sample size requirements.
Leon, Andrew C
2008-07-01
The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
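A small illustration, under textbook assumptions rather than anything from the article, of the two issues raised: familywise type I error inflation with k independent outcomes, and power loss when an unreliable outcome attenuates the observed effect size by the square root of the reliability (classical test theory).

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# 1. Type I error inflation: k independent outcomes each tested at alpha = 0.05.
alpha = 0.05
for k in (1, 2, 5, 10):
    print(k, "outcomes -> familywise error", round(1 - (1 - alpha) ** k, 3))

# 2. Power loss from unreliable assessment: classical test theory attenuates
#    the observed standardized effect by sqrt(reliability).
true_d, n_per_group = 0.5, 64
power = TTestIndPower()
for reliability in (1.0, 0.8, 0.6):
    observed_d = true_d * np.sqrt(reliability)
    print("reliability", reliability, "-> power",
          round(power.power(effect_size=observed_d, nobs1=n_per_group,
                            alpha=0.05, ratio=1.0), 2))
```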
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...
7 CFR 52.775 - Sample unit size.
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... extraneous material—The total contents of each container in the sample. Factors of Quality ...
7 CFR 201.43 - Size of sample.
2010-01-01
... units. Coated seed for germination test only shall consist of at least 1,000 seed units. [10 FR 9950... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test, or..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT...
High Speed Gear Sized and Configured to Reduce Windage Loss
Kunz, Robert F. (Inventor); Medvitz, Richard B. (Inventor); Hill, Matthew John (Inventor)
2013-01-01
A gear and drive system utilizing the gear include teeth. Each of the teeth has a first side and a second side opposite the first side that extends from a body of the gear. For each tooth of the gear, a first extended portion is attached to the first side of the tooth to divert flow of fluid adjacent to the body of the gear to reduce windage losses that occur when the gear rotates. The gear may be utilized in drive systems that may have high rotational speeds, such as speeds where the tip velocities are greater than or equal to about 68 m/s. Some embodiments of the gear may also utilize teeth that also have second extended portions attached to the second sides of the teeth to divert flow of fluid adjacent to the body of the gear to reduce windage losses that occur when the gear rotates.
A long pulse modulator for reduced size and cost
International Nuclear Information System (INIS)
Pfeffer, H.; Bartelson, L.; Bourkland, K.; Jensen, C.; Kerns, Q.; Prieto, P.; Saewert, G.; Wolff, D.
1994-07-01
A novel modulator has been designed, built and tested for the TESLA test facility. This e+e- accelerator concept uses superconducting RF cavities and requires 2 ms of RF power at 10 pps. As the final accelerator will require several hundred modulators, a cost effective, space saving and high efficiency design is desired. This modulator used a modest size switched capacitor bank that droops approximately 20% during the pulse. This large droop is compensated for by the use of a resonant LC circuit. The capacitor bank is connected to the high side of a pulse transformer primary using a series GTO switch. The resonant circuit is connected to the low side of the pulse transformer primary. The output pulse is flat to within 1% for 1.9 ms during a 2.3 ms base pulse width. Measured efficiency, from breaker to klystron and including energy lost in the rise time, is approximately 85%
Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P
2013-11-01
Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach that considers the treatment effect to be random variable having some distribution may offer a better, more flexible approach. The Bayesian sample size proposed by (Whitehead et al., 2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
The attention-weighted sample-size model of visual short-term memory
DEFF Research Database (Denmark)
Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.
2016-01-01
exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...
Soetaert, K.; Heip, C.H.R.
1990-01-01
Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high diversity than in low diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N0 to N∞ ...
Transverse micro-erosion meter measurements; determining minimum sample size
Trenhaile, Alan S.; Lakhan, V. Chris
2011-11-01
Two transverse micro-erosion meter (TMEM) stations were installed in each of four rock slabs, a slate/shale, basalt, phyllite/schist, and sandstone. One station was sprayed each day with fresh water and the other with a synthetic sea water solution (salt water). To record changes in surface elevation (usually downwearing but with some swelling), 100 measurements (the pilot survey), the maximum for the TMEM used in this study, were made at each station in February 2010, and then at two-monthly intervals until February 2011. The data were normalized using Box-Cox transformations and analyzed to determine the minimum number of measurements needed to obtain station means that fall within a range of confidence limits of the population means, and the means of the pilot survey. The effect on the confidence limits of reducing an already small number of measurements (say 15 or less) is much greater than that of reducing a much larger number of measurements (say more than 50) by the same amount. There was a tendency for the number of measurements, for the same confidence limits, to increase with the rate of downwearing, although it was also dependent on whether the surface was treated with fresh or salt water. About 10 measurements often provided fairly reasonable estimates of rates of surface change but with fairly high percentage confidence intervals in slowly eroding rocks; however, many more measurements were generally needed to derive means within 10% of the population means. The results were tabulated and graphed to provide an indication of the approximate number of measurements required for given confidence limits, and the confidence limits that might be attained for a given number of measurements.
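A minimal sketch of one way to derive such a minimum number of measurements from a pilot survey, assuming a t-based confidence interval and using the pilot standard deviation as the variability estimate; the data and tolerance are illustrative.

```python
import numpy as np
from scipy import stats

def minimum_measurements(pilot, target_fraction=0.10, confidence=0.95):
    """Smallest number of measurements whose t-based confidence interval
    half-width is within target_fraction of the pilot mean, using the
    pilot standard deviation as the variability estimate."""
    mean, sd = np.mean(pilot), np.std(pilot, ddof=1)
    tolerance = target_fraction * abs(mean)
    for n in range(2, len(pilot) + 1):
        half_width = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sd / np.sqrt(n)
        if half_width <= tolerance:
            return n
    return None   # the pilot survey itself is too small

# Illustrative pilot survey of 100 downwearing measurements (microns).
rng = np.random.default_rng(0)
pilot_survey = rng.normal(loc=25.0, scale=12.0, size=100)
print(minimum_measurements(pilot_survey))
```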
Sample size reassessment for a two-stage design controlling the false discovery rate.
Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin
2015-11-01
Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a two-parameter distribution is recommended, and for more than 50 years of data, a three-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
Reducing Contingency through Sampling at the Luckey FUSRAP Site - 13186
International Nuclear Information System (INIS)
Frothingham, David; Barker, Michelle; Buechi, Steve; Durham, Lisa
2013-01-01
Typically, the greatest risk in developing accurate cost estimates for the remediation of hazardous, toxic, and radioactive waste sites is the uncertainty in the estimated volume of contaminated media requiring remediation. Efforts to address this risk in the remediation cost estimate can result in large cost contingencies that are often considered unacceptable when budgeting for site cleanups. Such was the case for the Luckey Formerly Utilized Sites Remedial Action Program (FUSRAP) site near Luckey, Ohio, which had significant uncertainty surrounding the estimated volume of site soils contaminated with radium, uranium, thorium, beryllium, and lead. Funding provided by the American Recovery and Reinvestment Act (ARRA) allowed the U.S. Army Corps of Engineers (USACE) to conduct additional environmental sampling and analysis at the Luckey Site between November 2009 and April 2010, with the objective to further delineate the horizontal and vertical extent of contaminated soils in order to reduce the uncertainty in the soil volume estimate. Investigative work included radiological, geophysical, and topographic field surveys, subsurface borings, and soil sampling. Results from the investigative sampling were used in conjunction with Argonne National Laboratory's Bayesian Approaches for Adaptive Spatial Sampling (BAASS) software to update the contaminated soil volume estimate for the site. This updated volume estimate was then used to update the project cost-to-complete estimate using the USACE Cost and Schedule Risk Analysis process, which develops cost contingencies based on project risks. An investment of $1.1 M of ARRA funds for additional investigative work resulted in a reduction of 135,000 in-situ cubic meters (177,000 in-situ cubic yards) in the estimated base volume estimate. This refinement of the estimated soil volume resulted in a $64.3 M reduction in the estimated project cost-to-complete, through a reduction in the uncertainty in the contaminated soil
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
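A hedged simulation in the spirit of the study: plan a two-group trial from the SD of a single small pilot sample (size 15, illustrative) and check how often the resulting design is underpowered against the true effect; all other values are also illustrative.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(2)
population_sd, true_d, planned_power = 44.0, 0.5, 0.80
analysis = TTestIndPower()

n_trials, underpowered = 1000, 0
for _ in range(n_trials):
    pilot = rng.normal(0.0, population_sd, size=15)      # one pilot sample
    sample_sd = pilot.std(ddof=1)
    # plan the trial using the pilot SD instead of the (unknown) population SD
    planned_d = true_d * population_sd / sample_sd
    n_planned = analysis.solve_power(effect_size=planned_d, alpha=0.05,
                                     power=planned_power)
    # actual power of that n against the true standardized effect
    actual = analysis.power(effect_size=true_d, nobs1=np.ceil(n_planned),
                            alpha=0.05, ratio=1.0)
    underpowered += actual < planned_power

print("fraction of trials underpowered:", underpowered / n_trials)
```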
Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model
Taylor, Douglas J.; Muller, Keith E.
1995-01-01
The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
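A minimal sketch of the idea, assuming the variance estimate comes from a prior study with known degrees of freedom: a chi-square confidence interval for the SD is propagated into lower and upper bounds on power; values are illustrative.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Prior study: SD estimate s with nu degrees of freedom.
s, nu = 10.0, 30
mean_difference, n_per_group, alpha = 5.0, 50, 0.05

# Chi-square confidence interval for the population SD.
lo_sd = s * np.sqrt(nu / stats.chi2.ppf(0.975, nu))
hi_sd = s * np.sqrt(nu / stats.chi2.ppf(0.025, nu))

analysis = TTestIndPower()
powers = [analysis.power(effect_size=mean_difference / sd, nobs1=n_per_group,
                         alpha=alpha, ratio=1.0)
          for sd in (hi_sd, s, lo_sd)]      # large SD -> small effect -> low power
print("power (lower bound, point estimate, upper bound):",
      [round(p, 3) for p in powers])
```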
Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride
2015-08-01
ARL-RP-0528 ● AUG 2015 ● US Army Research Laboratory
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
Sampling bee communities using pan traps: alternative methods increase sample size
Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...
Eisenberg, Sarita L.; Guo, Ling-Yu
2015-01-01
Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…
Sizing Optimization and Strength Analysis for Spread-type Gear Reducers
Directory of Open Access Journals (Sweden)
Wei-Hsuan Hsu
2014-08-01
Reducers are now being developed towards customization and cost saving. In this study, a sizing program for the reducer was developed to replace the manual sizing process. We optimized the total center distance of the gear reducer to reduce gear volume and weight, and checked constraints such as tooth root bending strength, tooth contact strength, the endangered cross-section of the gear shaft, bearing life, gear shaft deflection, and torsion angle deformation to obtain reliable drive strength. Comparisons of sizes and weights before and after optimization confirm that the purpose of reducing production cost is achieved.
CT dose survey in adults: what sample size for what precision?
International Nuclear Information System (INIS)
Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis
2017-01-01
To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., 10-20 patients), mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times its actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
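An illustrative Monte Carlo version of the precision argument, assuming a lognormal DLP distribution rather than the survey data: the 95% spread of sample means, expressed as a percentage of the population median, shrinks only slowly with sample size.

```python
import numpy as np

rng = np.random.default_rng(3)
# Assume DLP values are roughly lognormal (illustrative, not the survey data).
population = rng.lognormal(mean=6.0, sigma=0.5, size=20000)
median_pop = np.median(population)

def ci95_over_median(sample_size, n_rep=2000):
    """Width of the 95% interval of sample means, as a % of the population median."""
    means = [rng.choice(population, size=sample_size, replace=False).mean()
             for _ in range(n_rep)]
    lo, hi = np.percentile(means, [2.5, 97.5])
    return 100.0 * (hi - lo) / median_pop

for n in (10, 20, 100, 500):
    print(f"n = {n:4d}: CI95/med = {ci95_over_median(n):5.1f} %")
```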
DEFF Research Database (Denmark)
Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten
2009-01-01
PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study ... of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.
The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions
International Nuclear Information System (INIS)
John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.
2000-01-01
Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will be exceeded most likely at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection x-ray fluorescence analysis (TXRF) was applied beside other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built where the particle size classes 10-2.5 μm and 2.5-1.0 μm were directly impacted on TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles, but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM 10/PM 2.5 cascade impactor was
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously compromise the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests
Directory of Open Access Journals (Sweden)
Bruno Giacomini Sari
2017-09-01
ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants, and the others were obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a confidence interval of 95% equal to 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
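A hedged sketch of the bootstrap-resampling logic described above, using simulated stand-in data rather than the uniformity-trial observations: for each planned sample size, the 95% CI amplitude of Pearson's r is estimated from resamples.

```python
import numpy as np

rng = np.random.default_rng(4)

def ci_amplitude(x, y, sample_size, n_boot=3000):
    """Bootstrap 95% CI amplitude of Pearson's r for a planned sample size,
    resampling with replacement from the observed plants."""
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), size=sample_size)
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(r_boot, [2.5, 97.5])
    return hi - lo

# Illustrative data standing in for the per-plant observations.
n_obs = 300
fruit_weight = rng.normal(10, 2, n_obs)
total_weight = 40 * fruit_weight + rng.normal(0, 150, n_obs)

for n in (50, 100, 200, 275):
    print(f"n = {n:3d}: CI amplitude = {ci_amplitude(fruit_weight, total_weight, n):.2f}")
```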
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
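As a point of reference, the method-of-moments (Matheron) estimator mentioned above can be written compactly; the synthetic coordinates and values below are illustrative and carry no spatial structure.

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Method-of-moments (Matheron) semivariogram estimator:
    gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs whose separation falls in each bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    i, j = np.triu_indices(len(values), k=1)          # each pair counted once
    gamma, counts = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d[i, j] >= lo) & (d[i, j] < hi)
        gamma.append(sq[i, j][mask].mean() if mask.any() else np.nan)
        counts.append(mask.sum())
    return np.array(gamma), np.array(counts)

# Illustrative throughfall-like data on a 50 m plot with 150 sampling points.
rng = np.random.default_rng(6)
coords = rng.uniform(0, 50, size=(150, 2))
values = rng.gamma(shape=2.0, scale=5.0, size=150)     # skewed, non-Gaussian
g, n_pairs = empirical_variogram(coords, values, np.arange(0, 30, 5))
print(np.round(g, 2), n_pairs)
```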
Directory of Open Access Journals (Sweden)
Elias Chaibub Neto
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
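A minimal NumPy sketch of the weighted-multinomial formulation for Pearson's correlation (the paper's implementation is in R; the function and variable names here are illustrative).

```python
import numpy as np

def vectorized_bootstrap_corr(x, y, n_boot=10000, seed=0):
    """Bootstrap replications of Pearson's r without resampling the data:
    draw multinomial counts, normalise them to weights, and compute all
    weighted moments with matrix products."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # each row of W holds the multinomial weights of one bootstrap replication
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n
    mx, my = W @ x, W @ y                                  # weighted means
    mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)  # weighted second moments
    cov = mxy - mx * my
    return cov / np.sqrt((mxx - mx * mx) * (myy - my * my))

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)
reps = vectorized_bootstrap_corr(x, y)
print("bootstrap 95% CI for r:", np.percentile(reps, [2.5, 97.5]).round(3))
```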
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
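The modified formula itself is not reproduced here; a simulation such as the following hedged sketch can be used to check the power of a candidate sample size for simple logistic regression with one standard-normal covariate, with the event probability specified at the covariate mean (all values illustrative).

```python
import numpy as np
import statsmodels.api as sm

def simulated_power(n, beta1, base_prevalence, alpha=0.05, n_sim=1000, seed=0):
    """Monte Carlo power of the Wald test for the covariate effect in a
    simple logistic regression with one standard-normal covariate.
    base_prevalence is the event probability at the covariate mean."""
    rng = np.random.default_rng(seed)
    beta0 = np.log(base_prevalence / (1 - base_prevalence))
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        rejections += fit.pvalues[1] < alpha
    return rejections / n_sim

print(simulated_power(n=200, beta1=0.4, base_prevalence=0.3))
```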
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
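A toy illustration of the Jensen's-inequality bias discussed above, using a generic two-stage matrix rather than the plant demography model: survival rates estimated from small binomial samples yield a downwardly biased mean lambda.

```python
import numpy as np

rng = np.random.default_rng(5)

def lambda_from_rates(s_juv, s_adult, fecundity=1.5):
    """Dominant eigenvalue of a simple two-stage Lefkovitch matrix."""
    A = np.array([[0.0,   fecundity],
                  [s_juv, s_adult ]])
    return np.max(np.real(np.linalg.eigvals(A)))

true_s_juv, true_s_adult = 0.5, 0.5
true_lambda = lambda_from_rates(true_s_juv, true_s_adult)

for n in (10, 25, 100, 1000):          # individuals sampled per stage
    lambdas = []
    for _ in range(3000):
        s_juv_hat = rng.binomial(n, true_s_juv) / n
        s_adult_hat = rng.binomial(n, true_s_adult) / n
        lambdas.append(lambda_from_rates(s_juv_hat, s_adult_hat))
    print(f"n = {n:4d}: mean lambda = {np.mean(lambdas):.3f} (true = {true_lambda:.3f})")
```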
The PowerAtlas: a power and sample size atlas for microarray experimental design and research
Directory of Open Access Journals (Sweden)
Wang Jelai
2006-02-01
Background: Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results: To address this challenge, we have developed a Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion: This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing
Directory of Open Access Journals (Sweden)
Thomaz C. e C. da Costa
2004-12-01
Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly defined by a binomial approximation without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. In case of divergence between the estimated and the a priori accuracy, the sampling error will deviate from the expected error. Sizing the reference sample from a pilot sample (the theoretically correct procedure) is justified when no accuracy estimate is available for the study area, given the utility of the remote sensing product.
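A minimal sketch of the binomial-approximation sample size referred to above, with an illustrative a priori accuracy and tolerance.

```python
import math

def reference_sample_size(expected_accuracy, half_width, confidence=0.95):
    """Binomial-approximation sample size for estimating map accuracy
    (a proportion) to within +/- half_width at the given confidence."""
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    p = expected_accuracy
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

# e.g. an a priori accuracy of 85% estimated to within +/- 5 percentage points
print(reference_sample_size(0.85, 0.05))   # about 196 reference samples
```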
What is the optimum sample size for the study of peatland testate amoeba assemblages?
Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J
2017-10-01
Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
Sample Size and Saturation in PhD Studies Using Qualitative Interviews
Directory of Open Access Journals (Sweden)
Mark Mason
2010-08-01
A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated and, some say, little understood. A sample of PhD studies using qualitative approaches, with qualitative interviews as the method of data collection, was taken from theses.com and content-analysed for sample size. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a premeditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387
Lawson, Chris A
2014-07-01
Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
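The curve-fitting step can be illustrated as follows; the library sizes and richness estimates below are synthetic stand-ins, and the saturating (Michaelis-Menten-type) curve is one plausible choice, not necessarily the one the authors fitted:

```python
# Sketch: fit a saturating curve to richness estimates obtained at several
# clone-library sizes and read off the asymptote as the "unbiased" richness.
# Numbers below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

library_size = np.array([1000, 2000, 4000, 8000, 13001])
richness_est = np.array([6500, 9200, 11800, 13900, 15009])   # e.g. ML-based estimates

def saturating(n, s_max, b):
    """Michaelis-Menten-type curve: estimated richness vs. library size."""
    return s_max * n / (b + n)

(s_max, b), _ = curve_fit(saturating, library_size, richness_est, p0=(20000, 5000))
print(f"asymptotic (sample size-unbiased) richness ~ {s_max:.0f}")
```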
Test of methods for retrospective activity size distribution determination from filter samples
International Nuclear Information System (INIS)
Meisenberg, Oliver; Tschiersch, Jochen
2015-01-01
Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter
Frictional behaviour of sandstone: A sample-size dependent triaxial investigation
Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus
2017-01-01
Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, where the relatively smaller samples showed a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, where the relatively smaller sample exhibits a lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) the availability of interim or group-sequential designs, and (iii) a much smaller required sample size.
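As a rough cross-check of any such calculation, the power of simple logistic regression at a candidate sample size can be estimated by simulation (a sketch with assumed intercept and slope values, not the authors' logit-normal method):

```python
# Sketch: empirical power of simple logistic regression at a given sample size,
# for an assumed intercept and log-odds ratio per SD of the covariate
# (all values are illustrative assumptions).
import numpy as np
import statsmodels.api as sm

def empirical_power(n, beta0=-1.0, beta1=0.5, n_sim=500, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.standard_normal(n)
        p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        hits += fit.pvalues[1] < alpha      # covariate effect detected?
    return hits / n_sim

for n in (50, 100, 150, 200):
    print(n, empirical_power(n))
```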
Small portion sizes in worksite cafeterias: do they help consumers to reduce their food intake?
Vermeer, W.M.; Steenhuis, I.H.M.; Leeuwis, F.H.; Heijmans, M.W.; Seidell, J.C.
2011-01-01
Background: Environmental interventions directed at portion size might help consumers to reduce their food intake. Objective: To assess whether offering a smaller hot meal, in addition to the existing size, stimulates people to replace their large meal with a smaller meal. Design: Longitudinal randomized
Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size
ALWI, IDRUS
2011-01-01
The aim of this research is to study the sensitivity of the Mantel-Haenszel and Rasch models for detecting differential item functioning (DIF), observed from the sample size. These two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...
Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.
2011-01-01
The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...
Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.
2018-04-01
Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
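The distributional comparison can be sketched as follows; the aspect ratios are synthetic and the two-sample Kolmogorov-Smirnov test is used as one possible non-parametric test, since the abstract does not name the specific test:

```python
# Sketch: non-parametric comparison of aspect-ratio distributions from two
# nanorod samples (synthetic data standing in for TEM measurements).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
aspect_ratio_a = rng.normal(loc=3.2, scale=0.4, size=400)   # sample A (length/width)
aspect_ratio_b = rng.normal(loc=3.0, scale=0.5, size=400)   # sample B

stat, p_value = ks_2samp(aspect_ratio_a, aspect_ratio_b)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```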
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is however proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
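The quantity at the heart of this result, conditional power at the interim under the current-trend assumption, can be written down directly for a one-sample z-test with known variance (a textbook-style sketch, not the specific criterion derived in the article):

```python
# Sketch: conditional power at an interim analysis for a one-sample z-test,
# assuming the observed trend continues (normal data, known variance).
from math import sqrt
from scipy.stats import norm

def conditional_power(z_interim, n_interim, n_final, alpha=0.025):
    """P(final z-test rejects | interim z-statistic), current-trend assumption."""
    theta_hat = z_interim / sqrt(n_interim)          # estimated standardized effect
    z_crit = norm.ppf(1 - alpha)
    remaining = n_final - n_interim
    num = z_crit * sqrt(n_final) - z_interim * sqrt(n_interim) - remaining * theta_hat
    return 1 - norm.cdf(num / sqrt(remaining))

# E.g. halfway through a trial of 100 with a promising interim result:
print(conditional_power(z_interim=1.5, n_interim=50, n_final=100))  # about 0.59
```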
Sample size choices for XRCT scanning of highly unsaturated soil mixtures
Directory of Open Access Journals (Sweden)
Smith Jonathan C.
2016-01-01
Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for the scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.
The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.
Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J
2018-07-01
This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
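The resampling logic can be sketched as follows, using synthetic lesion-load and articulation scores and a squared Pearson correlation as a stand-in for the study's proportion-of-variance effect size:

```python
# Sketch: how effect-size estimates and p-values vary with sample size when
# repeatedly resampling from one large data set (synthetic data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_pop = 360
lesion_load = rng.uniform(0, 1, n_pop)
articulation = -0.3 * lesion_load + rng.normal(0, 1, n_pop)   # small true effect

for n in (30, 60, 90, 180, 360):
    r2, pvals = [], []
    for _ in range(1000):
        idx = rng.choice(n_pop, size=n, replace=True)          # bootstrap resample
        r, p = pearsonr(lesion_load[idx], articulation[idx])
        r2.append(r**2)
        pvals.append(p)
    r2 = np.array(r2)
    print(f"n={n:3d}  median R^2={np.median(r2):.3f}  "
          f"5th-95th pct=({np.percentile(r2, 5):.3f}, {np.percentile(r2, 95):.3f})  "
          f"power~{np.mean(np.array(pvals) < 0.05):.2f}")
```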
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
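A minimal version of such a simulation might look like the following; the stratum sizes and density proportions are hypothetical, not the NCSP figures:

```python
# Sketch: stratified random sampling simulation for estimating the proportion
# of women with dense breasts, with three strata (all values hypothetical).
import numpy as np

rng = np.random.default_rng(0)
strata = {           # stratum: (population size, true proportion "dense")
    "metropolitan": (700_000, 0.55),
    "urban":        (450_000, 0.50),
    "rural":        (190_000, 0.45),
}
N = sum(size for size, _ in strata.values())
n_total = 4000

estimates = []
for _ in range(1000):                               # repeat the survey 1,000 times
    est = 0.0
    for size, p in strata.values():
        n_h = round(n_total * size / N)             # proportional allocation
        sample = rng.binomial(1, p, n_h)
        est += (size / N) * sample.mean()           # stratum-weighted estimate
    estimates.append(est)

print(f"mean estimate = {np.mean(estimates):.4f}, SD = {np.std(estimates):.4f}")
```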
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
Page, G P; Amos, C I; Boerwinkle, E
1998-04-01
We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1 - 2θ)^4.
Rambo, Robert P
2017-01-01
The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.
Optimum sample size to estimate mean parasite abundance in fish parasite surveys
Directory of Open Access Journals (Sweden)
Shvydka S.
2018-03-01
To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at the estimation of the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across the Azov-Black Seas localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and an understanding of the expected precision level of the mean. Given the superior performance of the BLB relative to formulae with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5 (with CI widths of 1.6 and 1 times the mean), and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes in the range between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed towards low values; a sample size of 10 host individuals yielded unreliable estimates.
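The Bag of Little Bootstraps step can be sketched as follows, using synthetic negative-binomial counts in place of the Ligophorus abundance data:

```python
# Sketch: Bag of Little Bootstraps (BLB) confidence interval for mean abundance,
# applied to synthetic, highly aggregated (negative binomial) count data.
import numpy as np

rng = np.random.default_rng(1)
counts = rng.negative_binomial(n=0.5, p=0.5 / (0.5 + 8.0), size=200)  # mean ~8, aggregated

n = len(counts)
b = int(n ** 0.6)                      # subset size, a common BLB choice
s, r = 15, 100                         # number of subsets, resamples per subset

lo_list, hi_list = [], []
for _ in range(s):
    subset = rng.choice(counts, size=b, replace=False)
    means = []
    for _ in range(r):
        # resample n observations from the subset via multinomial weights
        weights = rng.multinomial(n, np.full(b, 1 / b))
        means.append(np.dot(weights, subset) / n)
    lo_list.append(np.percentile(means, 2.5))
    hi_list.append(np.percentile(means, 97.5))

print(f"mean abundance = {counts.mean():.2f}, "
      f"95% CI ~ ({np.mean(lo_list):.2f}, {np.mean(hi_list):.2f})")
```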
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L
2014-01-01
The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate a sampling RSD in the same region as the analytical RSD for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
Generating Random Samples of a Given Size Using Social Security Numbers.
Erickson, Richard C.; Brauchle, Paul E.
1984-01-01
The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
Page sample size in web accessibility testing: how many pages is enough?
Velleman, Eric Martin; van der Geest, Thea
2013-01-01
Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This
Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests
Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.
2015-01-01
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…
Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc
2012-11-01
Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
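Of the estimators compared, the first-order jackknife has a particularly simple closed form, Jack1 = S_obs + Q1·(m−1)/m, where Q1 is the number of species occurring in exactly one plot and m is the number of plots. A toy sketch on a synthetic incidence matrix:

```python
# Sketch: first-order jackknife (Jack1) richness estimate from an
# incidence (plot x species presence/absence) matrix; toy data for illustration.
import numpy as np

rng = np.random.default_rng(3)
m, n_species = 50, 40
incidence = rng.binomial(1, rng.uniform(0.02, 0.4, n_species), size=(m, n_species))

s_obs = (incidence.sum(axis=0) > 0).sum()       # observed richness
q1 = (incidence.sum(axis=0) == 1).sum()         # species found in exactly one plot
jack1 = s_obs + q1 * (m - 1) / m

print(f"S_obs = {s_obs}, Q1 = {q1}, Jack1 = {jack1:.1f}")
```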
Sample size calculation to externally validate scoring systems based on logistic regression models.
Directory of Open Access Journals (Sweden)
Antonio Palazón-Bru
A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
Precision of quantization of the hall conductivity in a finite-size sample: Power law
International Nuclear Information System (INIS)
Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.
2006-01-01
A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples
Constrained statistical inference: sample-size tables for ANOVA and regression
Directory of Open Access Journals (Sweden)
Leonard eVanbrabant
2015-01-01
Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3} and this is known as an (order-)constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
Reducing the sampling frequency of groundwater monitoring wells
Energy Technology Data Exchange (ETDEWEB)
Johnson, V.M.; Ridley, M.N. [Lawrence Livermore National Lab., CA (United States); Tuckfield, R.C.; Anderson, R.A. [Westinghouse, Savannah River Co., Aiken, SC (United States)
1996-01-01
As part of a joint LLNL/SRTC project, a methodology for selecting sampling frequencies is evolving that introduces statistical thinking and cost effectiveness into the sampling schedule selection practices now commonly employed on environmental projects. Our current emphasis is on descriptive rather than inferential statistics. Environmental monitoring data are inherently messy, being plagued by such problems as extremely high variability and left-censoring. As a result, real data often fail to meet the assumptions required for the appropriate application of many statistical methods. Rather than abandon the quantitative approach in these cases, however, the methodology employs simple statistical techniques to bring a measure of objectivity and reproducibility to the process. The techniques are applied within the framework of decision logic, which interprets the numerical results from the standpoint of chemistry-related professional judgment and the regulatory context. This paper presents the methodology's basic concepts together with early implementation results, showing the estimated cost savings. 6 refs., 3 figs.
Directory of Open Access Journals (Sweden)
Stefanović Milena
2013-01-01
In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using the example of a whitebark pine population. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Projects of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002 and III-43007]
All about size? – The potential of downsizing in reducing energy demand
International Nuclear Information System (INIS)
Huebner, Gesche M.; Shipworth, David
2017-01-01
Highlights: • Building size has huge impact on residential energy consumption. • There is significant underoccupation in English homes, even in cities. • Huge energy savings are possible if people downsize (move into smaller homes). • Lack of alternative, smaller accommodation is a structural barrier to downsizing. - Abstract: Residential energy consumption is one of the main contributors to CO2 emissions in the UK. One strategy aimed at reducing emissions is to increase retrofitting rates of buildings. In this paper, an alternative approach is discussed and its potential impact on energy use assessed, that of downsizing (moving to smaller homes). Reviews of previous research show that a wide range of what can be termed psychological barriers exist to downsizing, such as the loss of ownership and independence, concern about what to do with possessions, not having enough space for visitors, and attachment to one’s home. Benefits of downsizing from a personal perspective are economic, with lower bills and/or rent, release of capital, lower maintenance costs, and also potential lifestyle improvements including living in easier-to-maintain and more age-appropriate housing. Wider societal benefits include the potential to significantly reduce energy consumption, and mitigating the housing crisis in cities where not enough properties are available. Empirical analysis on a nationally representative sample in England showed that building size alone accounts for 24% of the variability in energy consumption (compared to 11% of household size). If single-person households with more than two bedrooms downsized by one bedroom, energy-savings of 8% could be achieved, and if single-person households occupied only one bedroom, savings of 27%. Data also showed a significant amount of underoccupation, with almost two-thirds of households having more bedrooms than considered necessary compared to the bedroom-standard. However, analysis also revealed a structural barrier to
Reducing the standard serving size of alcoholic beverages prompts reductions in alcohol consumption.
Kersbergen, Inge; Oldham, Melissa; Jones, Andrew; Field, Matt; Angus, Colin; Robinson, Eric
2018-05-14
To test whether reducing the standard serving size of alcoholic beverages would reduce voluntary alcohol consumption in a laboratory (study 1) and a real-world drinking environment (study 2). Additionally, we modelled the potential public health benefit of reducing the standard serving size of on-trade alcoholic beverages in the United Kingdom. Studies 1 and 2 were cluster-randomized experiments. In the additional study, we used the Sheffield Alcohol Policy Model to estimate the number of deaths and hospital admissions that would be averted per year in the United Kingdom if a policy that reduces alcohol serving sizes in the on-trade was introduced. A semi-naturalistic laboratory (study 1), a bar in Liverpool, UK (study 2). Students and university staff members (study 1: n = 114, mean age = 24.8 years, 74.6% female), residents from local community (study 2: n = 164, mean age = 34.9 years, 57.3% female). In study 1, participants were assigned randomly to receive standard or reduced serving sizes (by 25%) of alcohol during a laboratory drinking session. In study 2, customers at a bar were served alcohol in either standard or reduced serving sizes (by 28.6-33.3%). Outcome measures were units of alcohol consumed within 1 hour (study 1) and up to 3 hours (study 2). Serving size condition was the primary predictor. In study 1, a 25% reduction in alcohol serving size led to a 20.7-22.3% reduction in alcohol consumption. In study 2, a 28.6-33.3% reduction in alcohol serving size led to a 32.4-39.6% reduction in alcohol consumption. Modelling results indicated that decreasing the serving size of on-trade alcoholic beverages by 25% could reduce the number of alcohol-related hospital admissions and deaths per year in the United Kingdom by 4.4-10.5% and 5.6-13.2%, respectively. Reducing the serving size of alcoholic beverages in the United Kingdom appears to lead to a reduction in alcohol consumption within a single drinking occasion. © 2018 The Authors. Addiction
Andersson, Malte
2004-01-01
Sexual selection in the form of sperm competition is a major explanation for small size of male gametes. Can sexual selection in polyandrous species with reversed sex roles also lead to reduced female gamete size? Comparative studies show that egg size in birds tends to decrease as a lineage evolves social polyandry. Here, a quantitative genetic model predicts that female scrambles over mates lead to evolution of reduced female gamete size. Increased female mating success drives the evolution of smaller eggs, which take less time to produce, until balanced by lowered offspring survival. Mean egg size is usually reduced and polyandry increased by increasing sex ratio (male bias) and maximum possible number of mates. Polyandry also increases with the asynchrony (variance) in female breeding start. Opportunity for sexual selection increases with the maximum number of mates but decreases with increasing sex ratio. It is well known that parental investment can affect sexual selection. The model suggests that the influence is mutual: owing to a coevolutionary feedback loop, sexual selection in females also shapes initial parental investment by reducing egg size. Feedback between sexual selection and parental investment may be common.
Reduced body size and cub recruitment in polar bears associated with sea ice decline
Rode, Karyn D.; Amstrup, Steven C.; Regehr, Eric V.
2010-01-01
Rates of reproduction and survival are dependent upon adequate body size and condition of individuals. Declines in size and condition have provided early indicators of population decline in polar bears (Ursus maritimus) near the southern extreme of their range. We tested whether patterns in body size, condition, and cub recruitment of polar bears in the southern Beaufort Sea of Alaska were related to the availability of preferred sea ice habitats and whether these measures and habitat availability exhibited trends over time, between 1982 and 2006. The mean skull size and body length of all polar bears over three years of age declined over time, corresponding with long‐term declines in the spatial and temporal availability of sea ice habitat. Body size of young, growing bears declined over time and was smaller after years when sea ice availability was reduced. Reduced litter mass and numbers of yearlings per female following years with lower availability of optimal sea ice habitat, suggest reduced reproductive output and juvenile survival. These results, based on analysis of a long‐term data set, suggest that declining sea ice is associated with nutritional limitations that reduced body size and reproduction in this population.
A hard-to-read font reduces the framing effect in a large sample.
Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik
2018-04-01
How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.
Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas
Directory of Open Access Journals (Sweden)
Francisco J. Ariza-López
2018-05-01
In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using a polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.
Effects of sample size on robustness and prediction accuracy of a prognostic gene signature
Directory of Open Access Journals (Sweden)
Kim Seon-Young
2009-05-01
Background: The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results: A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform and, using the data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion: Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement by the increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
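The core computation, a minimum convex polygon area and its dependence on the number of locations, can be sketched with simulated fixes (the locations and numbers below are illustrative, not the Kenai data):

```python
# Sketch: minimum convex polygon (MCP) home-range area as a function of the
# number of locations, using simulated GPS fixes.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
locations = rng.normal(0, 5.0, size=(400, 2))        # simulated fixes (km)

for n in (15, 50, 100, 200, 400):
    areas = []
    for _ in range(200):
        subset = locations[rng.choice(len(locations), size=n, replace=False)]
        areas.append(ConvexHull(subset).volume)       # in 2-D, .volume is the area
    print(f"n={n:3d}  mean MCP area={np.mean(areas):7.1f} km^2  "
          f"CV={np.std(areas) / np.mean(areas):.2f}")
```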
Size-dependent mechanical properties of PVA nanofibers reduced via air plasma treatment
International Nuclear Information System (INIS)
Fu Qiang; Song Xuefeng; Gao Jingyun; Han Xiaobing; Zhao Qing; Yu Dapeng; Jin Yu; Jiang Xingyu
2010-01-01
Organic nanowires/fibers have great potential in applications such as organic electronics and soft electronic techniques. Therefore investigation of their mechanical performance is of importance. The Young's modulus of poly(vinyl alcohol) (PVA) nanofibers was analyzed by scanning probe microscopy (SPM) methods. Air plasma treatment was used to reduce the nanofibers to different sizes. Size-dependent mechanical properties of PVA nanofibers were studied and revealed that the Young's modulus increased dramatically when the scales became very small (<80 nm).
Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies
Directory of Open Access Journals (Sweden)
Mark Heckmann
2017-01-01
The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
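The kind of simulation such a tool performs can be sketched as follows; the category probabilities and targets are hypothetical, and this is only a schematic stand-in for the gridsampler software itself:

```python
# Sketch: how many elicited attributes are needed so that every category is
# likely to receive at least `min_per_cat` attributes (hypothetical category
# probabilities; a schematic stand-in for what gridsampler simulates).
import numpy as np

rng = np.random.default_rng(11)
category_probs = np.array([0.30, 0.20, 0.15, 0.12, 0.10, 0.08, 0.05])
min_per_cat, target_coverage = 3, 0.95

def coverage(n_attributes, n_sim=2000):
    """Proportion of simulations in which all categories get >= min_per_cat hits."""
    counts = rng.multinomial(n_attributes, category_probs, size=n_sim)
    return np.mean((counts >= min_per_cat).all(axis=1))

n = 10
while coverage(n) < target_coverage:
    n += 5
print(f"~{n} attributes needed for {target_coverage:.0%} coverage")
```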
Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette
2018-03-01
In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. Additional to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for different numbers of GPs included in the dataset and for different frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but the gain per additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
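The joint dependence of the CI width on the number of GPs and the number of measurements per GP can be sketched with a simple two-level variance decomposition; the variance components below are assumed illustrative values, not those of the study:

```python
import math

# Assumed variance components for weekly working hours (illustrative values only)
sigma_between = 8.0    # SD of true weekly hours between GPs
sigma_within = 20.0    # SD contributed by the sampling of moments within a GP's week
z = 1.96               # two-sided 95% CI

def ci_half_width(n_gps, m_measurements):
    """Half-width of the 95% CI for mean weekly hours under a two-level model."""
    var_mean = sigma_between**2 / n_gps + sigma_within**2 / (n_gps * m_measurements)
    return z * math.sqrt(var_mean)

for n in (10, 50, 100, 300):
    for m in (56, 168):   # e.g. one SMS per 3-h slot vs one per hour over a week
        print(f"n={n:4d} GPs, m={m:3d} measurements: +/- {ci_half_width(n, m):.2f} h")
```

The key point mirrored here is that more measurements per participant shrink only the within-participant term, so they can substitute for participants up to the point where the between-participant term dominates.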
Overestimation of test performance by ROC analysis: Effect of small sample size
International Nuclear Information System (INIS)
Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.
1984-01-01
New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
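The simulation machinery can be reproduced in outline with a binormal rating model and the trapezoidal (Mann-Whitney) AUC. Note the hedge: the original study evaluated fitted ROC curves, whereas this sketch uses the empirical trapezoidal AUC and assumed rating cutpoints, so it illustrates the setup rather than reproducing the exact bias figures:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def trapezoidal_auc(noise, signal):
    """Mann-Whitney (trapezoidal) estimate of the area under the ROC curve."""
    greater = (signal[:, None] > noise[None, :]).sum()
    ties = (signal[:, None] == noise[None, :]).sum()
    return (greater + 0.5 * ties) / (len(signal) * len(noise))

a, b = 1.0, 1.0                                      # binormal intercept and slope
population_auc = norm.cdf(a / np.sqrt(1 + b**2))     # AUC of the continuous binormal model
cutpoints = np.array([-1.5, -0.5, 0.5, 1.5, 2.5])    # assumed 6-point rating thresholds
n_runs = 2000

for ss in (15, 25, 50, 100):
    aucs = []
    for _ in range(n_runs):
        noise = np.digitize(rng.normal(0.0, 1.0, ss), cutpoints)
        signal = np.digitize(rng.normal(a / b, 1.0 / b, ss), cutpoints)
        aucs.append(trapezoidal_auc(noise, signal))
    print(f"SS={ss:3d}: mean empirical AUC = {np.mean(aucs):.4f} "
          f"(continuous-model AUC = {population_auc:.4f})")
```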
Performances of Different Fragment Sizes for Reduced Representation Bisulfite Sequencing in Pigs.
Yuan, Xiao-Long; Zhang, Zhe; Pan, Rong-Yang; Gao, Ning; Deng, Xi; Li, Bin; Zhang, Hao; Sangild, Per Torp; Li, Jia-Qi
2017-01-01
Reduced representation bisulfite sequencing (RRBS) has been widely used to profile genome-scale DNA methylation in mammalian genomes. However, the applications and technical performances of RRBS with different fragment sizes have not been systematically reported in pigs, which serve as one of the important biomedical models for humans. The aims of this study were to evaluate the capacities of RRBS libraries with different fragment sizes to characterize the porcine genome. We found that the Msp I-digested segments between 40 and 220 bp harbored a high distribution peak at 74 bp, which highly overlapped with repetitive elements and might reduce unique mapping alignment. The RRBS library of 110-220 bp fragment size had the highest unique mapping alignment and the lowest multiple alignment. The cost-effectiveness of the 40-110 bp, 110-220 bp and 40-220 bp fragment sizes might decrease when the dataset size was more than 70, 50 and 110 million reads for these three fragment sizes, respectively. Given a 50-million-read dataset size, the average sequencing depth of the detected CpG sites in the 110-220 bp fragment size appeared to be deeper than in the 40-110 bp and 40-220 bp fragment sizes, and these detected CpG sites were differently located in gene- and CpG island-related regions. In this study, our results demonstrated that the selection of fragment sizes could affect the number and sequencing depth of detected CpG sites as well as the cost-efficiency. No single RRBS solution is optimal in all circumstances for investigating genome-scale DNA methylation. This work provides useful knowledge for designing and executing RRBS to investigate genome-wide DNA methylation in pig tissues.
Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.
Hanel, Paul H P; Haase, Jennifer
2017-01-01
In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that these relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.
Bayesian sample size determination for cost-effectiveness studies with censored data.
Directory of Open Access Journals (Sweden)
Daniel P Beavers
Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.
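The "assurance" component of such a design (power averaged over a prior on the effect) can be sketched for a plain uncensored two-arm comparison. This is only an orientation sketch: the prior and design values are assumptions, and the censored cost-effectiveness model of the paper is not reproduced:

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(3)
power_calc = TTestIndPower()

n_per_arm = 150                      # candidate sample size (assumed)
alpha = 0.05
prior_mean, prior_sd = 0.30, 0.15    # assumed prior on the standardized effect size

# Frequentist power at the prior mean effect
power_at_mean = power_calc.power(effect_size=prior_mean, nobs1=n_per_arm,
                                 alpha=alpha, ratio=1.0, alternative="two-sided")

# Bayesian assurance: power averaged over draws from the prior
draws = rng.normal(prior_mean, prior_sd, 5000)
assurance = np.mean([power_calc.power(effect_size=abs(d), nobs1=n_per_arm,
                                      alpha=alpha, ratio=1.0, alternative="two-sided")
                     for d in draws])   # abs() counts effects in either direction

print(f"power at prior mean: {power_at_mean:.3f}")
print(f"assurance          : {assurance:.3f}")
```

Assurance is typically lower than power at the prior mean because the prior places mass on small effects; the paper's contribution is doing this kind of calculation when costs and effectiveness are censored.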
International Nuclear Information System (INIS)
Jaech, J.L.; Lemaire, R.J.
1986-11-01
Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above mentioned procedures
International Nuclear Information System (INIS)
Bode, P.; Koster-Ammerlaan, M.J.J.
2018-01-01
Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from left-over materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world's first laboratory intercomparison utilizing large samples. (author)
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
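For orientation, the range-based scenario (minimum, median, maximum and n reported) can be sketched with the estimators commonly cited from this line of work; the exact formula choices for each scenario should be checked against the paper and its accompanying spreadsheet, and the trial values below are hypothetical:

```python
from scipy.stats import norm

def estimate_mean_sd_from_range(a, m, b, n):
    """Approximate sample mean and SD from min (a), median (m), max (b) and size n."""
    mean_hat = (a + 2 * m + b) / 4.0                             # Hozo et al.-style mean estimate
    sd_hat = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))  # sample-size-aware SD estimate
    return mean_hat, sd_hat

# Hypothetical trial reporting: min 12, median 30, max 68 on n = 45 patients
mean_hat, sd_hat = estimate_mean_sd_from_range(12, 30, 68, 45)
print(f"estimated mean = {mean_hat:.1f}, estimated SD = {sd_hat:.1f}")
```

The SD formula incorporates n through the normal quantile term, which is exactly the improvement over a fixed divisor that the abstract describes.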
Uncertainty budget in internal monostandard NAA for small and large size samples analysis
International Nuclear Information System (INIS)
Dasari, K.B.; Acharya, R.
2014-01-01
Total uncertainty budget evaluation on the determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)
Directory of Open Access Journals (Sweden)
Esther Wong
Full Text Available We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging with constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump that generates negative pressure by drawing air from the receiving conical flask (i.e., acting as a vacuum pump), creating a steady flow that transfers plankton from the sample container through the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties of applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, given that the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e., bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with syringe pumps and Field of View (FOV) flowcells which can image all particles passing through the flow field, we note that these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis versus conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM image system after ground truthing.
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
Energy Technology Data Exchange (ETDEWEB)
Reer, B
2004-03-01
The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
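A rough numerical sketch of the recipe described above for logistic regression: form two groups whose log-odds differ by the slope times twice the covariate SD, keep the overall response probability fixed, and apply a standard two-sample proportions formula. The numerical inputs are assumptions, and the expected-event adjustment is handled only implicitly by fixing the overall response probability:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit
from scipy.stats import norm

beta = 0.35          # assumed log-odds slope per unit of the covariate
sd_x = 1.2           # assumed SD of the covariate
p_overall = 0.30     # assumed overall response probability
alpha, power = 0.05, 0.80

# Equivalent two-sample problem: group log-odds differ by beta * 2 * sd_x,
# with the overall response probability preserved.
delta = beta * 2 * sd_x
eta0 = brentq(lambda e: 0.5 * expit(e - delta / 2) + 0.5 * expit(e + delta / 2) - p_overall,
              -10, 10)
p1, p2 = expit(eta0 - delta / 2), expit(eta0 + delta / 2)

# Standard two-sample proportions sample size (per group)
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
p_bar = (p1 + p2) / 2
n_per_group = ((z_a * np.sqrt(2 * p_bar * (1 - p_bar))
                + z_b * np.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))**2) / (p1 - p2)**2
print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, total N ~ {2 * int(np.ceil(n_per_group))}")
```

This is the appeal of the approach: once the equivalent two-sample problem is identified, the familiar power formulas do the rest.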
Cavana, P; Petit, J-Y; Perrot, S; Guechi, R; Marignac, G; Reynaud, K; Guillot, J
2015-12-01
Shampoo therapy is often recommended for the control of Malassezia overgrowth in dogs. The aim of this study was to evaluate the in vivo activity of a 2% climbazole shampoo against Malassezia pachydermatis yeasts in naturally infected dogs. Eleven research colony Beagles were used. The dogs were distributed randomly into two groups: group A (n=6) and group B (n=5). Group A dogs were washed with a 2% climbazole shampoo, while group B dogs were treated with a physiological shampoo base. The shampoos were applied once weekly for two weeks. The population size of Malassezia yeasts on the skin was determined by fungal culture, using modified Dixon's medium contact plates pressed onto the left concave pinna, axillae, groin and perianal area before and after shampoo application. Samples collected were compared by the Wilcoxon rank sum test. Samples collected after 2% climbazole shampoo application showed a significant and rapid reduction of Malassezia population sizes. One hour after the first climbazole shampoo application, the Malassezia reduction was already statistically significant, and 15 days after the second climbazole shampoo, Malassezia population sizes were still significantly decreased. No significant reduction of Malassezia population sizes was observed in group B dogs. The application of a 2% climbazole shampoo significantly reduced Malassezia population sizes on the skin of naturally infected dogs. Application of 2% climbazole shampoo may be useful for the control of Malassezia overgrowth and may also be proposed as a preventive measure when recurrences are frequent. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
The effect of reducing alfalfa haylage particle size on cows in early lactation.
Kononoff, P J; Heinrichs, A J
2003-04-01
The objective of this experiment was to evaluate effects of reducing forage particle size on cows in early lactation based on measurements of the Penn State Particle Separator (PSPS). Eight cannulated, multiparous cows averaging 19 +/- 4 d in milk and 642 +/- 45 kg BW were assigned to one of two 4 x 4 Latin squares. During each of the 23-d periods, animals were offered one of four diets, which were chemically identical but included alfalfa haylage of different particle sizes: short (SH), mostly short (MSH), mostly long (MLG), and long (LG). Physically effective neutral detergent fiber (peNDF) was determined by measuring the amount of neutral detergent fiber retained on a 1.18 mm screen and was similar across diets (25.7, 26.2, 26.4, 26.7%), but the amount of particles >19.0 mm significantly decreased with decreasing particle size. Reducing haylage particle size increased dry matter intake linearly (23.3, 22.0, 20.9, 20.8 kg for SH, MSH, MLG, LG, respectively). Milk production and percentage fat did not differ across treatments, averaging 35.5 +/- 0.68 kg milk and 3.32 +/- 0.67% fat, while a quadratic effect was observed for percent milk protein, with the lowest values observed for LG. A quadratic effect was observed for mean rumen pH (6.04, 6.15, 6.13, 6.09), while the A:P ratio decreased linearly (2.75, 2.86, 2.88, 2.92) with decreasing particle size. Total time ruminating increased quadratically (467, 498, 486, 468 min/d), while time eating decreased linearly (262, 253, 298, 287 min/d) with decreasing particle size. Both eating and ruminating per unit of neutral detergent fiber intake decreased with reducing particle size (35.8, 36.7, 44.9, 45.6 min/kg; 19.9, 23.6, 23.5, 23.5 min/kg). Although chewing activity was closely related to forage particle size, effects on rumen pH were small, indicating factors other than particle size are critical in regulating pH when ration neutral detergent fiber met recommended levels. Feeding alfalfa haylage based rations of reduced
Marchiori, D.R.; Papies, E.K.
2014-01-01
Objective: The present research examined the effects of a mindfulness-based intervention to foster healthy eating. Specifically, we tested whether a brief mindfulness manipulation can prevent the portion size effect, and reduce overeating on unhealthy snacks when hungry. Methods: 110 undergraduate
Implantation of cocoa butter reduces egg and hatchling size in Salmo trutta
Hoogenboom, M. O.; Armstrong, J. D.; Miles, M. S.; Burton, T.; Groothuis, T. G. G.; Metcalfe, N. B.
This study demonstrated that, irrespective of hormone type or dose, administering cocoa butter implants during egg development affected the growth of female brown trout Salmo trutta and reduced the size of their offspring. Cortisol treatment also increased adult mortality. Caution is urged in the
Re-estimating sample size in cluster randomized trials with active recruitment within clusters
van Schie, Sander; Moerbeek, Mirjam
2014-01-01
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its value from small to large. In extensive numerical studies, the authors demonstrate that the required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose up to 20% of power, depending on the value of the dispersion parameter.
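How overdispersion inflates the required sample size can be seen from a normal-approximation sketch for a simple two-arm log rate ratio comparison (not the authors' three-arm equivalence power function); the mean counts and dispersion values are assumptions:

```python
import numpy as np
from scipy.stats import norm

def n_per_arm_log_rate_ratio(mu1, mu2, dispersion_k, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for testing a log rate ratio.
    dispersion_k is the negative binomial shape; np.inf recovers the Poisson case."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Delta method: Var(log of sample mean) ~ (1/mu + 1/k) / n for each arm
    var_unit = (1 / mu1 + 1 / dispersion_k) + (1 / mu2 + 1 / dispersion_k)
    delta = np.log(mu1 / mu2)
    return int(np.ceil(z**2 * var_unit / delta**2))

mu_ref, mu_test = 2.0, 1.5       # assumed mean event counts per patient
for k in (np.inf, 5.0, 1.0):     # Poisson limit, mildly and strongly overdispersed
    print(f"shape k = {k}: n per arm ~ {n_per_arm_log_rate_ratio(mu_test, mu_ref, k)}")
```

The extra 1/k terms are what a Poisson analysis of negative binomial data ignores, which is the source of the power loss reported in the abstract.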
Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies
McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.
2010-01-01
This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.
Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling
Czech Academy of Sciences Publication Activity Database
Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír
2015-01-01
Roč. 49, č. 4 (2015), s. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords : filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.953, year: 2015
Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions
Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.
2013-01-01
Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of
The Effects of Test Length and Sample Size on Item Parameters in Item Response Theory
Sahin, Alper; Anil, Duygu
2017-01-01
This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprised of 50 items was administered to 6,288 students. Data from this test was used to obtain data sets of…
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill
2017-01-01
Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...
Sample size determination for disease prevalence studies with partially validated data.
Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai
2016-02-01
Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.
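As a baseline for the CI-width approach, the familiar fully validated (gold-standard-only) calculation looks as follows; the paper's partially validated setting requires the more elaborate procedures it derives, and the planning values here are hypothetical:

```python
import math
from scipy.stats import norm

def n_for_ci_width(p_expected, full_width, conf_level=0.95):
    """Sample size so that a Wald CI for prevalence has the requested full width."""
    z = norm.ppf(1 - (1 - conf_level) / 2)
    half = full_width / 2
    return math.ceil(z**2 * p_expected * (1 - p_expected) / half**2)

# Hypothetical planning values: expected prevalence 12%, CI width of 6 percentage points
print(n_for_ci_width(0.12, 0.06))   # -> about 451 subjects
```

Adding a fallible screening test with partial validation changes the variance of the prevalence estimator, which is why the required sample sizes in the paper differ from this simple baseline.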
B-graph sampling to estimate the size of a hidden population
Spreen, M.; Bogaerts, S.
2015-01-01
Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is
Required sample size for monitoring stand dynamics in strict forest reserves: a case study
Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust
2000-01-01
Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...
A simple sample size formula for analysis of covariance in cluster randomized trials.
Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.
2012-01-01
For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Estimating sample size for a small-quadrat method of botanical ...
African Journals Online (AJOL)
Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.
[Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].
Fu, Yingkun; Xie, Yanming
2011-10-01
In recent years, as the Chinese government and people pay more attention to the post-marketing research of Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power to correctly detect a clinically meaningful difference in the medicine under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.
International Nuclear Information System (INIS)
Sampson, T.E.
1991-01-01
Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of containers of different size and composition for standards and unknowns, an enormous saving considering the expense of the multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.
Collection of size fractionated particulate matter sample for neutron activation analysis in Japan
International Nuclear Information System (INIS)
Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru
2004-01-01
According to the decision of the 2001 Workshop on Utilization of Research Reactor (Neutron Activation Analysis (NAA) Section), size-fractionated particulate matter collection for NAA was started from 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural", respectively. At each site, two size fractions, namely PM2-10 and PM2 particles (aerodynamic particle size between 2 and 10 micrometers, and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (the sum of the PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m3 in Tokyo and 0.022 mg/m3 in Sakata. (author)
Support vector regression to predict porosity and permeability: Effect of sample size
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer from poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitivity loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
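The kind of comparison described can be sketched with scikit-learn on a deliberately small synthetic training set; this is not the authors' well-log data, and the data-generating function, sample sizes, and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)

# Synthetic stand-in for log-derived features and a porosity-like response
def make_data(n):
    X = rng.normal(size=(n, 4))
    y = 0.15 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.02 * X[:, 2] * X[:, 3]
    return X, y + rng.normal(scale=0.01, size=n)

X_train, y_train = make_data(20)      # deliberately small training sample
X_test, y_test = make_data(500)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X_train, y_train)   # eps-insensitive loss
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_train, y_train)               # ERM-style baseline

for name, model in (("SVR", svr), ("MLP", mlp)):
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.5f}")
```

Repeating the experiment over many random small training sets (and over a grid of epsilon and C values) is the natural way to probe the small-sample generalization question the abstract raises.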
Marchiori, David; Waroquier, Laurent; Klein, Olivier
2011-05-01
Studies considering the impact of food-size variations on consumption have predominantly focused on portion size, whereas very little research has investigated variations in food-item size, especially at snacking occasions, and results have been contradictory. This study evaluated the effect of altering the size of food items (ie, small vs large candies) of equal-size food portions on short-term energy intake while snacking. The study used a between-subjects design (n=33) in a randomized experiment conducted in spring 2008. In a psychology laboratory (separate cubicles), participants (undergraduate psychology students, 29 of 33 female, mean age 20.3±2 years, mean body mass index 21.7±3.7) were offered unlimited consumption of candies while participating in an unrelated computerized experiment. For half of the subjects, items were cut in two to make the small food-item size. Food intake (weight in grams, kilocalories, and number of food items) was examined using analysis of variance. Results showed that decreasing the item size of candies led participants to decrease by half their gram weight intake, resulting in an energy intake decrease of 60 kcal compared to the other group. Appetite ratings and subject and food characteristics had no moderating effect. A cognitive bias could explain why people tend to consider that one unit of food (eg, 10 candies) is the appropriate amount to consume, regardless of the size of the food items in the unit. This study suggests a simple dietary strategy, decreasing food-item size without having to alter the portion size offered, may reduce energy intake at snacking occasions. Copyright © 2011 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples
International Nuclear Information System (INIS)
Smith, D.L.
1975-09-01
The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)
In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.
Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele
2017-04-19
During a field campaign in the Sahara Desert in southern Morocco, spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were only lifted within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structures of the two sampled dust devils were comparable in terms of their vertical grain size distributions and relative particle loads, although the two dust devils differed in their dimensions and intensities. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.
The impact of sample size and marker selection on the study of haplotype structures
Directory of Open Access Journals (Sweden)
Sun Xiao
2004-03-01
Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.
Size selective isocyanate aerosols personal air sampling using porous plastic foams
International Nuclear Information System (INIS)
Cong Khanh Huynh; Trinh Vu Duc
2009-01-01
As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF), selected according to its porosity, both as the sampling support and as the size separator for the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.
PIXE–PIGE analysis of size-segregated aerosol samples from remote areas
Energy Technology Data Exchange (ETDEWEB)
Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)
2014-01-01
The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.
Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E
2018-07-01
The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. In order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies, a deeper understanding of this relationship is necessary, along with a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Sonne, Christian; Leifsson, Páll Skuli; Dietz, Rune
2006-01-01
Reproductive organs from 55 male and 44 female East Greenland polar bears were examined to investigate the potential negative impact from organohalogen pollutants (OHCs). Multiple regressions normalizing for age showed a significant inverse relationship between OHCs and testis length and baculum [...] and between uterine horn length and HCB (p = 0.02). The study suggests that there is an impact from xenoendocrine pollutants on the size of East Greenland polar bear genitalia. This may pose a risk to this polar bear subpopulation in the future because of reduced sperm and egg quality/quantity and uterus and penis size...
Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.
Power, Stephanie M; Matic, Damir B
2013-03-01
Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represents average outcomes. Secondary objectives are to determine whether outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean, tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed to within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
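The sample size reasoning above amounts to a normal-approximation confidence interval around a mean rating; a minimal sketch, where the rater standard deviation is an assumed illustrative value rather than the study's:

```python
import math

sd_rating = 1.3       # assumed SD of postoperative ratings on the 10-point scale
z = 1.96              # 95% confidence

def ci_half_width(n_cases):
    """Half-width of the 95% CI around the mean rating for n consecutive cases."""
    return z * sd_rating / math.sqrt(n_cases)

for n in (10, 27, 39):
    print(f"{n:2d} consecutive cases: +/- {ci_half_width(n):.2f} points on the 10-point scale")
```

With this assumed SD, 10 cases leave an interval too wide to pin down average quality, while the interval narrows to roughly half a point by the high twenties, which is the shape of the argument made in the abstract.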
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
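The interval operators referred to can be sketched with the classical Fréchet bounds for conjunction and disjunction under unknown dependence; this is a minimal sketch of single-level interval arithmetic, and the level-wise iteration over confidence levels described above is not shown:

```python
def and_interval(a, b):
    """Bounds on P(A and B) from the Frechet inequalities, dependence unknown."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return (max(0.0, a_lo + b_lo - 1.0), min(a_hi, b_hi))

def or_interval(a, b):
    """Bounds on P(A or B) from the Frechet inequalities, dependence unknown."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return (max(a_lo, b_lo), min(1.0, a_hi + b_hi))

def not_interval(a):
    """Bounds on P(not A)."""
    a_lo, a_hi = a
    return (1.0 - a_hi, 1.0 - a_lo)

# Hypothetical subevent probabilities known only as intervals
F = (0.2, 0.4)
G = (0.5, 0.7)
print("F AND G:", and_interval(F, G))   # (0.0, 0.4)
print("F OR  G:", or_interval(F, G))    # (0.5, 1.0)
print("NOT F  :", not_interval(F))      # (0.6, 0.8)
```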
Aversion learning can reduce meal size without taste avoidance in rats.
Tracy, Andrea L; Schurdak, Jennifer D; Chambers, James B; Benoit, Stephen C
2016-03-01
Nausea and aversive food responses are commonly reported following bariatric surgery, along with post-surgical reduction in meal size. This study investigates whether a meal size limit can be conditioned by associating large meals with aversive outcomes. In rats, the intake of meals exceeding a pre-defined size threshold was paired with lithium chloride-induced gastric illness, and the effects on self-determined food intakes and body weight were measured. Rats given LiCl contingent on the intake of a large meal learned to reliably reduce intake below this meal size threshold, while post-meal saline or LiCl before meals did not change meal size. It was further demonstrated that this is not a conditioned taste aversion and that this effect transferred to foods not explicitly trained. Finally, when rats received LiCl following all large meals, the number of small meals increased, but total food intake and body weight decreased. While further work is needed, this is the first demonstration that meal size may be conditioned, using an aversion procedure, to remain under a target threshold and that this effect is distinct from taste avoidance. Corresponding reduction in food intake and body weight suggests that this phenomenon may have implications for developing weight loss strategies and understanding the efficacy of bariatric surgery. © 2016 The Obesity Society.
Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G
2010-11-01
Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.
Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.
Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang
2018-02-01
To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and the score test using generalized estimating equations (GEE), and the F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated the analysis approaches through analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rate and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. When selecting the appropriate analysis approach, the study design should be considered.
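The design-1 comparison can be reproduced in miniature by simulating two eyes per subject assigned to different groups and contrasting the paired t-test with a two-sample t-test that ignores the pairing; the sample size, correlation, and effect values below are assumptions, not the paper's full grid:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n_subjects, rho, effect = 15, 0.5, 0.0   # small sample, moderate inter-eye correlation,
n_sim, alpha = 5000, 0.05                # zero effect -> rejection rate estimates type I error

cov = np.array([[1.0, rho], [rho, 1.0]])
reject_paired = reject_unpaired = 0
for _ in range(n_sim):
    eyes = rng.multivariate_normal([0.0, effect], cov, size=n_subjects)
    left, right = eyes[:, 0], eyes[:, 1]          # one eye in each comparison group
    reject_paired += stats.ttest_rel(left, right).pvalue < alpha
    reject_unpaired += stats.ttest_ind(left, right).pvalue < alpha

print(f"paired t-test rejection rate    : {reject_paired / n_sim:.3f}")
print(f"two-sample t-test rejection rate: {reject_unpaired / n_sim:.3f}")
```

Setting `effect` to a nonzero value turns the same loop into a power comparison, which is how the type I error and power tables in such studies are typically built.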
Characterization of test specimens produced in reduced size for X-ray microtomography (µ-CT) tests
Directory of Open Access Journals (Sweden)
E. E. BERNARDES
Full Text Available The need to use reduced sample sizes in order to attain improved spatial resolution in µ-CT tests applied to Portland cement composites makes researchers fractionate materials to obtain samples with dimensions compatible with the capacity of the scanning equipment, which might cause alterations in the microstructure under analysis. Therefore, a test specimen (TS) with dimensions compatible with the scanning capacity of a microtomography system that operates with an X-ray tube and a voltage ranging from 20 to 100 kV was proposed. Axial compression strength tests were performed, and total porosity was assessed through the ratio of apparent density to solid fraction density, which were obtained by means of mercury and helium pycnometry and the µ-CT technique, respectively. The adoption of this TS was shown to be viable, providing a more representative sample.
Reduced clot debris size using standing waves formed via high intensity focused ultrasound
Guo, Shifang; Du, Xuan; Wang, Xin; Lu, Shukuan; Shi, Aiwei; Xu, Shanshan; Bouakaz, Ayache; Wan, Mingxi
2017-09-01
The feasibility of utilizing high intensity focused ultrasound (HIFU) to induce thrombolysis has been demonstrated previously. However, clinical concerns still remain related to the clot debris produced via fragmentation of the original clot potentially being too large and hence occluding downstream vessels, causing hazardous emboli. This study investigates the use of standing wave fields formed via HIFU to disintegrate the thrombus while achieving a reduced clot debris size in vitro. The results showed that the average diameter of the clot debris calculated by volume percentage was smaller in the standing wave mode than in the travelling wave mode at identical ultrasound thrombolysis settings. Furthermore, the inertial cavitation dose was shown to be lower in the standing wave mode, while the estimated cavitation bubble size distribution was similar in both modes. These results show that a reduction of the clot debris size with standing waves may be attributed to the particle trapping of the acoustic potential well which contributed to particle fragmentation.
A new cone-beam X-ray CT system with a reduced size planar detector
International Nuclear Information System (INIS)
Li Liang; Chen Zhiqiang; Zhang Li; Xing Yuxiang; Kang Kejun
2006-01-01
In a traditional cone-beam CT system, the cost of production and computation is very high. The authors propose a transversely truncated cone-beam X-ray CT system with a reduced-size detector positioned off-center, in which the X-ray beams cover only half of the object. The reduced detector size cuts the cost and the X-ray dose of the CT system. Existing CT reconstruction algorithms are not directly applicable to this new CT system. Hence, the authors develop a BPF-type direct backprojection algorithm. Unlike traditional rebinning methods, the algorithm directly backprojects the pretreated projection data without rebinning, which makes it compact and computationally more efficient. Finally, numerical simulations and practical experiments are presented to validate the proposed algorithm. (authors)
Crystallite size variation of TiO_2 samples depending on heat treatment time
International Nuclear Information System (INIS)
Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.
2016-01-01
Titanium dioxide (TiO_2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of residence time at a given temperature on the physical properties of TiO_2 powder was studied. After synthesis, the powder was divided into samples that were heat treated at 650 °C with a ramp of up to 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
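The abstract does not say how the average crystallite size was obtained from the diffraction patterns; a common choice for XRD data is the Scherrer equation, sketched below. The Cu Kα wavelength, shape factor and example peak values are assumptions for illustration, and instrumental broadening is neglected.

```python
# Illustrative only: crystallite size from an XRD peak via the Scherrer equation
# D = K * lambda / (beta * cos(theta)); instrumental broadening is neglected and the
# example peak values are assumptions, not data from this study.
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, shape_factor=0.9):
    beta = math.radians(fwhm_deg)               # peak FWHM in radians (2-theta scale)
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    return shape_factor * wavelength_nm / (beta * math.cos(theta))

# Hypothetical anatase (101) peak near 2-theta = 25.3 degrees with 0.5 degree FWHM
print(f"{scherrer_size_nm(0.5, 25.3):.1f} nm")
```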
The study of the sample size on the transverse magnetoresistance of bismuth nanowires
International Nuclear Information System (INIS)
Zare, M.; Layeghnejad, R.; Sadeghi, E.
2012-01-01
The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach using the specular reflection approximation. The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. The obtained values are in good agreement with the experimental results reported by Heremans et al. - Highlights: ► In this study, the effects of sample size on the galvanomagnetic properties of Bi nanowires were explained by the Parrott theorem by solving the Boltzmann Transport Equation. ► Transverse magnetoresistance (TMR) ratios have been calculated using the specular reflection approximation. ► The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. ► The obtained values are in good agreement with the experimental results reported by Heremans et al.
Evaluating the performance of species richness estimators: sensitivity to sample grain size
DEFF Research Database (Denmark)
Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara
2006-01-01
and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...
A contemporary decennial global Landsat sample of changing agricultural field sizes
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by
Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence
International Nuclear Information System (INIS)
Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.
2013-01-01
Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltages used were 15 kV and 50 kV for chemical elements with atomic number lower than 22 and for the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)
Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies
Heckmann, Mark; Burk, Lukas
2017-01-01
The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...
Cyclosporine A at reperfusion fails to reduce infarct size in the in vivo rat heart.
De Paulis, Damien; Chiari, Pascal; Teixeira, Geoffrey; Couture-Lepetit, Elisabeth; Abrial, Maryline; Argaud, Laurent; Gharib, Abdallah; Ovize, Michel
2013-09-01
We examined the effects on infarct size and mitochondrial function of ischemic (Isch), cyclosporine A (CsA) and isoflurane (Iso) preconditioning and postconditioning in the in vivo rat model. Anesthetized open-chest rats underwent 30 min of ischemia followed by either 120 min (protocol 1: infarct size assessment) or 15 min of reperfusion (protocol 2: assessment of mitochondrial function). All treatments administered before the 30-min ischemia (Pre-Isch, Pre-CsA, Pre-Iso) significantly reduced infarct as compared to control. In contrast, only Post-Iso significantly reduced infarct size, while Post-Isch and Post-CsA had no significant protective effect. As for the postconditioning-like interventions, the mitochondrial calcium retention capacity significantly increased only in the Post-Iso group (+58 % vs control) after succinate activation. Only Post-Iso increased state 3 (+177 and +62 %, for G/M and succinate, respectively) when compared to control. Also, Post-Iso reduced the hydrogen peroxide (H2O2) production (-46 % vs control) after complex I activation. This study suggests that isoflurane, but not cyclosporine A, can prevent lethal reperfusion injury in this in vivo rat model. This might be related to the need for a combined effect on cyclophilin D and complex I during the first minutes of reperfusion.
A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies
Directory of Open Access Journals (Sweden)
Hojin Moon
2002-12-01
Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
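A greatly simplified sketch of the simulation idea is given below; unlike the Web tool, it ignores occult tumor onset times, sacrifice schedules and competing risks, and simply estimates the power of a generic trend test on tumor incidence across dose groups.

```python
# Greatly simplified power simulation: binary tumor outcomes per dose group and a
# generic regression-based trend test stand in for the occult-tumor/competing-risk
# machinery of the actual tool. All parameter values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def trend_power(n_per_group, tumor_probs, doses=(0, 1, 2, 3), n_sim=2000, alpha=0.05):
    rejections = 0
    for _ in range(n_sim):
        dose_col, outcome_col = [], []
        for dose, p in zip(doses, tumor_probs):
            dose_col.extend([dose] * n_per_group)
            outcome_col.extend(rng.binomial(1, p, size=n_per_group))
        # Slope test of outcome on dose as a simple dose-related trend test.
        rejections += stats.linregress(dose_col, outcome_col).pvalue < alpha
    return rejections / n_sim

# Power for 50 animals per group when incidence rises from 5% to 20% across doses.
print(trend_power(50, tumor_probs=(0.05, 0.10, 0.15, 0.20)))
```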
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
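The authors' implementation is the R package rPowerSampleSize; as a rough stand-in, the sketch below estimates the r-power by Monte Carlo for correlated normal test statistics under a single-step Bonferroni procedure, using a one-sample z-test approximation with assumed effect sizes and correlation.

```python
# Rough stand-in (the authors provide the R package rPowerSampleSize; this is not it):
# Monte Carlo estimate of the r-power, i.e. the probability of rejecting at least r of
# m false null hypotheses, for correlated normal statistics and a single-step
# Bonferroni procedure, using a one-sample z-test approximation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def r_power(n, effect_sizes, corr, r, alpha=0.05, n_sim=20_000):
    m = len(effect_sizes)
    cov = np.full((m, m), corr)
    np.fill_diagonal(cov, 1.0)
    crit = stats.norm.isf(alpha / (2 * m))                 # two-sided Bonferroni cutoff
    shift = np.sqrt(n) * np.asarray(effect_sizes)          # approximate noncentrality
    z = rng.multivariate_normal(shift, cov, size=n_sim)
    return float(((np.abs(z) > crit).sum(axis=1) >= r).mean())

# Chance of rejecting at least 2 of 3 endpoints with standardized effects of 0.3,
# pairwise correlation 0.4 and n = 150 (all values assumed for illustration).
print(r_power(n=150, effect_sizes=[0.3, 0.3, 0.3], corr=0.4, r=2))
```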
Sample size methods for estimating HIV incidence from cross-sectional surveys.
Konikoff, Jacob; Brookmeyer, Ron
2015-12-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
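A simplified sketch of the cross-sectional estimator and a precision-based sample size is shown below; unlike the methods in the paper, it treats the mean duration of the biomarker-defined early stage as known and uses a binomial delta-method approximation, with all numerical inputs assumed for illustration.

```python
# Simplified sketch: cross-sectional incidence estimate and a precision-based survey
# size. It assumes the mean duration of the early stage is known exactly, which the
# paper's methods do not; all numbers below are assumptions for illustration.
import math

def incidence_estimate(n_early, n_uninfected, mean_duration_years):
    # annual incidence ~ early-stage count / (uninfected count * mean stage duration)
    return n_early / (n_uninfected * mean_duration_years)

def survey_size_for_cv(incidence, prevalence, mean_duration_years, target_cv=0.25):
    # total survey size so that the coefficient of variation of the incidence estimate
    # is roughly target_cv, treating the early-stage count as binomial (delta method)
    p_early = incidence * mean_duration_years * (1.0 - prevalence)
    return math.ceil((1.0 - p_early) / (p_early * target_cv ** 2))

print(incidence_estimate(n_early=30, n_uninfected=4000, mean_duration_years=0.5))
print(survey_size_for_cv(incidence=0.015, prevalence=0.20, mean_duration_years=0.5))
```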
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
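For a scalar parameter, the autocorrelation-based ESS can be sketched as below; the paper's contribution, extending ESS to tree topologies, requires topology-aware distance measures that are not shown here.

```python
# Illustrative autocorrelation-based ESS for a scalar MCMC trace; extending this idea
# to tree topologies (the subject of the paper) needs topology-aware distances.
import numpy as np

def effective_sample_size(trace):
    x = np.asarray(trace, dtype=float)
    n = len(x)
    x = x - x.mean()
    # autocorrelation at lags 0..n-1
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    rho_sum = 0.0
    for lag in range(1, n // 2):
        if acf[lag] < 0.05:        # truncate once autocorrelation becomes negligible
            break
        rho_sum += acf[lag]
    return n / (1.0 + 2.0 * rho_sum)

rng = np.random.default_rng(3)
chain = np.zeros(10_000)
for t in range(1, len(chain)):                 # strongly autocorrelated AR(1) chain
    chain[t] = 0.95 * chain[t - 1] + rng.normal()
print(effective_sample_size(chain))            # far fewer effective samples than draws
```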
Sampling and treatment of rock cores and groundwater under reducing environments of deep underground
International Nuclear Information System (INIS)
Ebashi, Katsuhiro; Yamaguchi, Tetsuji; Tanaka, Tadao
2005-01-01
A method for sampling and treating undisturbed rock cores and groundwater while maintaining the reducing environment of the deep underground was developed and demonstrated in a Neogene sandy mudstone layer at a depth of GL −100 to −200 m. Undisturbed rock cores and groundwater were sampled and transferred into an Ar gas atmospheric glove box with minimal exposure to the atmosphere. The reducing conditions of the sampled groundwater and rock cores were examined in the Ar atmospheric glove box by measuring the pH and Eh of the sampled groundwater and of sampled groundwater in contact with disk-type rock samples, respectively. (author)
Morphine Reduces Myocardial Infarct Size via Heat Shock Protein 90 in Rodents
Directory of Open Access Journals (Sweden)
Bryce A. Small
2015-01-01
Full Text Available Opioids reduce injury from myocardial ischemia-reperfusion in humans. In experimental models, this mechanism involves GSK3β inhibition. HSP90 regulates mitochondrial protein import, with GSK3β inhibition increasing HSP90 mitochondrial content. Therefore, we determined whether morphine-induced cardioprotection is mediated by HSP90 and if the protective effect is downstream of GSK3β inhibition. Male Sprague-Dawley rats, aged 8–10 weeks, were subjected to an in vivo myocardial ischemia-reperfusion injury protocol involving 30 minutes of ischemia followed by 2 hours of reperfusion. Hemodynamics were continually monitored and myocardial infarct size determined. Rats received morphine (0.3 mg/kg), the GSK3β inhibitor SB216763 (0.6 mg/kg), or saline, 10 minutes prior to ischemia. Some rats received the selective HSP90 inhibitors radicicol (0.3 mg/kg) or deoxyspergualin (DSG, 0.6 mg/kg) alone or 5 minutes prior to morphine or SB216763. Morphine reduced myocardial infarct size when compared to control (42 ± 2% versus 60 ± 1%). This protection was abolished by prior treatment with radicicol or DSG (59 ± 1% and 56 ± 2%, respectively). GSK3β inhibition also reduced myocardial infarct size (41 ± 2%), with HSP90 inhibition by radicicol or DSG partially inhibiting the SB216763-induced infarct size reduction (54 ± 3% and 47 ± 1%, respectively). These data suggest that opioid-induced cardioprotection is mediated by HSP90. Part of this protection afforded by HSP90 is downstream of GSK3β, potentially via the HSP-TOM mitochondrial import pathway.
Broadhurst, Matt K.; Sterling, David J.; Millar, Russell B.
2014-01-01
The effects of reducing mesh size while concomitantly varying the side taper and wing depth of a generic penaeid-trawl body were investigated to improve engineering performance and minimize bycatch. Five trawl bodies (with the same codends) were tested across various environmental (e.g. depth and current) and biological (e.g. species and sizes) conditions. The first trawl body comprised 41-mm mesh and represented conventional designs (termed the ‘41 long deep-wing'), while the remaining trawl bodies were made from 32-mm mesh and differed only in their side tapers, and therefore length (i.e. 1N3B or ‘long' and ∼28° to the tow direction vs 1N5B or ‘short' and ∼35°) and wing depths (‘deep'–97 T vs ‘shallow'–60 T). There were incremental drag reductions (and therefore fuel savings – by up to 18 and 12% per h and ha trawled) associated with reducing twine area via either modification, and subsequently minimizing otter-board area in attempts to standardize spread. Side taper and wing depth had interactive and varied effects on species selectivity, but compared to the conventional 41 long deep-wing trawl, the 32 short shallow-wing trawl (i.e. the least twine area) reduced the total bycatch by 57% (attributed to more fish swimming forward and escaping). In most cases, all small-meshed trawls also caught more smaller school prawns Metapenaeus macleayi but to decrease this effect it should be possible to increase mesh size slightly, while still maintaining the above engineering benefits and species selectivity. The results support precisely optimizing mesh size as a precursor to any other anterior penaeid-trawl modifications designed to improve environmental performance. PMID:24911786
Sumner, Anne E; Luercio, Marcella F; Frempong, Barbara A; Ricks, Madia; Sen, Sabyasachi; Kushner, Harvey; Tulloch-Reid, Marshall K
2009-02-01
The disposition index, the product of the insulin sensitivity index (S(I)) and the acute insulin response to glucose, is linked in African Americans to chromosome 11q. This link was determined with S(I) calculated with the nonlinear regression approach to the minimal model and data from the reduced-sample insulin-modified frequently-sampled intravenous glucose tolerance test (Reduced-Sample-IM-FSIGT). However, the application of the nonlinear regression approach to calculate S(I) using data from the Reduced-Sample-IM-FSIGT has been challenged as being not only inaccurate but also having a high failure rate in insulin-resistant subjects. Our goal was to determine the accuracy and failure rate of the Reduced-Sample-IM-FSIGT using the nonlinear regression approach to the minimal model. With S(I) from the Full-Sample-IM-FSIGT considered the standard and using the nonlinear regression approach to the minimal model, we compared the agreement between S(I) from the Full- and Reduced-Sample-IM-FSIGT protocols. One hundred African Americans (body mass index, 31.3 +/- 7.6 kg/m(2) [mean +/- SD]; range, 19.0-56.9 kg/m(2)) had FSIGTs. Glucose (0.3 g/kg) was given at baseline. Insulin was infused from 20 to 25 minutes (total insulin dose, 0.02 U/kg). For the Full-Sample-IM-FSIGT, S(I) was calculated based on the glucose and insulin samples taken at -1, 1, 2, 3, 4, 5, 6, 7, 8,10, 12, 14, 16, 19, 22, 23, 24, 25, 27, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150, and 180 minutes. For the Reduced-Sample-FSIGT, S(I) was calculated based on the time points that appear in bold. Agreement was determined by Spearman correlation, concordance, and the Bland-Altman method. In addition, for both protocols, the population was divided into tertiles of S(I). Insulin resistance was defined by the lowest tertile of S(I) from the Full-Sample-IM-FSIGT. The distribution of subjects across tertiles was compared by rank order and kappa statistic. We found that the rate of failure of resolution of S(I) by
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
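A generic Wald-type SPRT for a binary outcome, illustrating the third "keep sampling" decision, might look like the following sketch; it is not the authors' family-based association implementation, and the hypothesized proportions and error rates are assumptions.

```python
# Generic Wald SPRT sketch for a binary trait, illustrating the third outcome of
# "keep sampling"; not the authors' family-based association procedure, and the
# hypothesized proportions and error rates are assumptions.
import math
import random

def sprt(observations, p0, p1, alpha=0.05, beta=0.20):
    """Sequentially test H0: success prob = p0 against H1: success prob = p1."""
    upper = math.log((1 - beta) / alpha)       # cross upward -> accept H1
    lower = math.log(beta / (1 - alpha))       # cross downward -> accept H0
    llr = 0.0
    for i, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return f"accept H1 after {i} observations"
        if llr <= lower:
            return f"accept H0 after {i} observations"
    return "undecided: keep sampling"

random.seed(4)
data = [1 if random.random() < 0.65 else 0 for _ in range(200)]
print(sprt(data, p0=0.5, p1=0.65))
```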
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitation. We used a pooled method in nonparametric bootstrap test that may overcome the problem related with small samples in hypothesis testing. The present study compared nonparametric bootstrap test with pooled resampling method corresponding to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. Nonparametric bootstrap paired t-test also provided better performance than other alternatives. Nonparametric bootstrap test provided benefit over exact Kruskal-Wallis test. We suggest using nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating the one way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
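One standard formulation of a bootstrap test with pooled resampling for comparing two means is sketched below; the authors' exact procedure may differ in the statistic used, centering and handling of ties.

```python
# One standard pooled-resampling bootstrap test for a difference in means; the exact
# statistic, centering and tie handling of the authors' procedure may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def pooled_bootstrap_ttest(x, y, n_boot=10_000):
    x, y = np.asarray(x, float), np.asarray(y, float)
    t_obs, _ = stats.ttest_ind(x, y, equal_var=False)
    pooled = np.concatenate([x, y])            # resample under H0: one common population
    exceed = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        t_b, _ = stats.ttest_ind(bx, by, equal_var=False)
        exceed += abs(t_b) >= abs(t_obs)
    return (exceed + 1) / (n_boot + 1)         # two-sided bootstrap p-value

x = rng.lognormal(mean=0.0, sigma=1.0, size=8)  # small, skewed samples
y = rng.lognormal(mean=0.7, sigma=1.0, size=9)
print(pooled_bootstrap_ttest(x, y))
```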
Methodology for sample preparation and size measurement of commercial ZnO nanoparticles
Directory of Open Access Journals (Sweden)
Pei-Jia Lu
2018-04-01
Full Text Available This study discusses the strategies on sample preparation to acquire images with sufficient quality for size characterization by scanning electron microscope (SEM using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer sized aggregates of ZnO in powdered forms need to firstly be broken down to nanosized particles through an appropriate process to generate nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information and save lots of time and efforts in selection of suitable substrate for particles of different properties to be attracted and kept on the surface without further aggregation. This simple, low-cost methodology can be generally applied on size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology
The Effect of Sterilization on Size and Shape of Fat Globules in Model Processed Cheese Samples
Directory of Open Access Journals (Sweden)
B. Tremlová
2006-01-01
Full Text Available Model cheese samples from 4 independent productions were heat sterilized (117 °C, 20 minutes) after the melting process and packing, with the aim of prolonging their durability. The objective of the study was to assess changes in the size and shape of fat globules due to heat sterilization using image analysis methods. The study included the selection of suitable methods for preparing mounts, taking microphotographs and making overlays for automatic processing of photographs by an image analyser, ascertaining parameters to determine the size and shape of fat globules, and statistical analysis of the results obtained. The results of the experiment suggest that changes in the shape of fat globules due to heat sterilization are not unequivocal. We found that the size of fat globules was significantly increased (p < 0.01) due to heat sterilization (117 °C, 20 min), and the shares of small fat globules (up to 500 μm², or 100 μm²) in the samples of heat-sterilized processed cheese were decreased. The results imply that the image analysis method is very useful when assessing the effect of the technological process on the quality of processed cheese.
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.
2005-01-01
smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. ?? 2005 Elsevier Ltd. All rights reserved.
On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.
Vegué, Marina; Perin, Rodrigo; Roxin, Alex
2017-08-30
The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. Sample sizes in low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
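For reference, a textbook two-sample calculation (normal approximation) reproduces the kind of sample sizes implied by the standardized mean differences discussed above; it is included only as an illustration, not as a reanalysis of the reviewed trials.

```python
# Textbook two-sample sample-size calculation (normal approximation) for a
# standardized mean difference; illustrative only, not a reanalysis of the trials.
import math
from scipy import stats

def n_per_group(smd, power=0.80, alpha=0.05):
    z_alpha = stats.norm.isf(alpha / 2)        # two-sided significance level
    z_beta = stats.norm.isf(1 - power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / smd ** 2)

for smd in (0.3, 0.5, 0.8):
    print(smd, n_per_group(smd))               # roughly 175, 63 and 25 per group
```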
DEFF Research Database (Denmark)
Sonne, Christian; Leifsson, Pall S.; Dietz, Rune
2006-01-01
.01) and uterine horn length and HCB (p = 0.02). The study suggests that there is an impact from xenoendocrine pollutants on the size of East Greenland polar bear genitalia. This may pose a risk to this polar bear subpopulation in the future because of reduced sperm and egg quality/quantity and uterus and penis size......Reproductive organs from 55 male and 44 female East Greenland polar bears were examined to investigate the potential negative impact from organohalogen pollutants (OHCs). Multiple regressions normalizing for age showed a significant inverse relationship between OHCs and testis length and baculum... length and weight, respectively, and was found in both subadults (dichlorodiphenyl trichloroethanes, dieldrin, chlordanes, hexachlorocyclohexanes, polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs)) and adults (hexachlorobenzene [HCB]) (all p
Directory of Open Access Journals (Sweden)
Iris eVersluis
2016-05-01
Full Text Available People typically eat more from large portions of food than from small portions. An explanation that has often been given for this so-called portion size effect is that the portion size acts as a social norm and as such communicates how much is appropriate to eat. In this paper, we tested this explanation by examining whether manipulating the relevance of the portion size as a social norm changes the portion size effect, as assessed by prospective consumption decisions. We conducted one pilot experiment and one full experiment in which participants respectively indicated how much they would eat or serve themselves from a given amount of different foods. In the pilot (N = 63, we manipulated normative relevance by allegedly basing the portion size on the behavior of either students of the own university (in-group or of another university (out-group. In the main experiment (N = 321, we told participants that either a minority or majority of people similar to them approved of the portion size. Results show that in both experiments, participants expected to serve themselves and to eat more from larger than from smaller portions. As expected, however, the portion size effect was less pronounced when the reference portions were allegedly based on the behavior of an out-group (pilot or approved only by a minority (main experiment. These findings suggest that the portion size indeed provides normative information, because participants were less influenced by it if it communicated the behaviors or values of a less relevant social group. In addition, in the main experiment, the relation between portion size and the expected amount served was partially mediated by the amount that was considered appropriate, suggesting that concerns about eating an appropriate amount indeed play a role in the portion size effect. However, since the portion size effect was weakened but not eliminated by the normative relevance manipulations and since mediation was only partial
Sample size effect on the determination of the irreversibility line of high-Tc superconductors
International Nuclear Information System (INIS)
Li, Q.; Suenaga, M.; Li, Q.; Freltoft, T.
1994-01-01
The irreversibility lines of a high-J_c superconducting Bi_2Sr_2Ca_2Cu_3O_x/Ag tape were systematically measured upon a sequence of subdivisions of the sample. The irreversibility field H_r(T) (parallel to the c axis) was found to change approximately as L^0.13, where L is the effective dimension of the superconducting tape. Furthermore, it was found that the irreversibility line for a grain-aligned Bi_2Sr_2Ca_2Cu_3O_x specimen can be approximately reproduced by the extrapolation of this relation down to a grain size of a few tens of micrometers. The observed size effect could significantly obscure the real physical meaning of the irreversibility lines. In addition, this finding surprisingly indicated that the Bi_2Sr_2Ca_2Cu_3O_x/Ag tape and grain-aligned specimen may have similar flux line pinning strength
Directory of Open Access Journals (Sweden)
Suzuki Akifumi
2011-03-01
Full Text Available Abstract Background Although free radicals have been reported to play a role in the expansion of ischemic brain lesions, the effect of free radical scavengers is still under debate. In this study, the temporal profile of ischemic stroke lesion sizes was assessed for more than one year to evaluate the effect of edaravone, which might reduce ischemic damage. Methods We sequentially enrolled acute ischemic stroke patients who were admitted between April 2003 and March 2004 into the edaravone(-) group (n = 83) and patients admitted between April 2004 and March 2005 into the edaravone(+) group (n = 93), because edaravone has been used as the standard treatment in our hospital since April 2004. To assess the temporal profile of the stroke lesion size, the ratio of the areas on T2-weighted magnetic resonance images (T2WI) and diffusion-weighted magnetic resonance images (DWI) was calculated. Observations on T2WI were continued beyond one year, and observational times were classified into subacute (1-2 months after onset), early chronic (3-6 months), late chronic (7-12 months) and old (≥13 months) stages. Neurological deficits were assessed by the National Institutes of Health Stroke Scale upon admission and at discharge and by the modified Rankin Scale at 1 year following stroke onset. Results Stroke lesion size was significantly attenuated in the edaravone(+) group compared with the edaravone(-) group in the early and late chronic observational stages. However, this reduction in lesion size was significant within a year and only for the small-vessel occlusion stroke patients treated with edaravone. Moreover, patients with small-vessel occlusion strokes who were treated with edaravone showed significant neurological improvement during their hospital stay, although there were no significant differences in outcome one year after the stroke. Conclusion Edaravone treatment reduced the volume of the infarct and improved neurological deficits during the subacute
Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size
Directory of Open Access Journals (Sweden)
Zhihua Wang
2014-01-01
Full Text Available Reasonable prediction makes significant practical sense to stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR prediction approach with rolling mechanism is proposed. In the modeling procedure, a new developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window, for the next step ahead forecasting, rolls on by adding the most recent derived prediction result while deleting the first value of the former used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
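A minimal sketch of rolling one-step-ahead AR forecasting is given below; note that it appends the observed value to the window rather than the model's own prediction as the paper's variant does, and the window length, order and data are illustrative assumptions.

```python
# Rolling one-step-ahead AR(p) forecasting, refit by least squares on a fixed-length
# window that rolls forward after each prediction. Here the observed value is appended
# to the window (the paper's variant appends its own prediction); window length, order
# and data are illustrative assumptions.
import numpy as np

def fit_ar(window, p):
    # least-squares AR(p) coefficients, intercept first, most recent lag next
    X = np.column_stack([np.ones(len(window) - p)] +
                        [window[p - k - 1:len(window) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, window[p:], rcond=None)
    return coef

def rolling_forecast(series, window_size=12, p=2):
    series = np.asarray(series, float)
    preds = []
    for t in range(window_size, len(series)):
        coef = fit_ar(series[t - window_size:t], p)
        recent = series[t - 1:t - p - 1:-1]     # most recent p values, newest first
        preds.append(coef[0] + coef[1:] @ recent)
    return np.array(preds)

rng = np.random.default_rng(6)
data = np.cumsum(rng.normal(0.2, 1.0, size=40))  # a short, trending series
print(rolling_forecast(data)[:5])
```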
Influence of secular trends and sample size on reference equations for lung function tests.
Quanjer, P H; Stocks, J; Cole, T J; Hall, G L; Stanojevic, S
2011-03-01
The aim of our study was to determine the contribution of secular trends and sample size to lung function reference equations, and establish the number of local subjects required to validate published reference values. 30 spirometry datasets collected between 1978 and 2009 provided data on healthy, white subjects: 19,291 males and 23,741 females aged 2.5-95 yrs. The best fit for forced expiratory volume in 1 s (FEV(1)), forced vital capacity (FVC) and FEV(1)/FVC as functions of age, height and sex were derived from the entire dataset using GAMLSS. Mean z-scores were calculated for individual datasets to determine inter-centre differences. This was repeated by subdividing one large dataset (3,683 males and 4,759 females) into 36 smaller subsets (comprising 18-227 individuals) to preclude differences due to population/technique. No secular trends were observed and differences between datasets comprising >1,000 subjects were small (maximum difference in FEV(1) and FVC from overall mean: 0.30 to -0.22 z-scores). Subdividing one large dataset into smaller subsets reproduced the above sample size-related differences and revealed that at least 150 males and 150 females would be necessary to validate reference values to avoid spurious differences due to sampling error. Use of local controls to validate reference equations will rarely be practical due to the numbers required. Reference equations derived from large or collated datasets are recommended.
Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.
Li, Johnson Ching-Hong
2016-12-01
In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
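Two of the effect sizes discussed can be sketched directly: Cohen's d with a pooled standard deviation, and a nonparametric probability-of-superiority estimate in the spirit of A_w (ties counted as one half). These follow standard textbook formulas and are not necessarily the exact estimators compared in the simulation study.

```python
# Standard formulas for two of the effect sizes discussed: Cohen's d (pooled SD) and a
# nonparametric probability-of-superiority estimate in the spirit of A_w (ties count
# one half); not necessarily the exact estimators used in the simulation study.
import numpy as np

def cohens_d(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

def prob_superiority(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (len(x) * len(y))

rng = np.random.default_rng(7)
x = rng.exponential(2.0, size=30)    # skewed data, where d's assumptions are violated
y = rng.exponential(1.0, size=25)
print(cohens_d(x, y), prob_superiority(x, y))
```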
DEFF Research Database (Denmark)
Shetty, Nisha; Min, Tai-Gi; Gislum, René
2011-01-01
The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub......-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage...... using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important...
Can reduced size of metals induce hydrogen absorption: ZrAl_2 case
Energy Technology Data Exchange (ETDEWEB)
Jacob, I., E-mail: izi@bgu.ac.il [Department of Nuclear Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105 (Israel); Deledda, S. [Physics Department, Institute for Energy Technology, P.O. Box 40, NO-2027 Kjeller (Norway); Bereznitsky, M. [Department of Nuclear Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105 (Israel); Yeheskel, O. [Nuclear Research Center - Negev, P.O. Box 9001, Beer Sheva 84190 (Israel); Filipek, S.M. [Institute of Physical Chemistry, Polish Academy of Sciences, 01-224 Warsaw (Poland); Mogilyanski, D.; Kimmel, G. [Institute for Applied Research, P.O. Box 653, Ben-Gurion University of the Negev, Beer Sheva 84105 (Israel); Hauback, B.C. [Physics Department, Institute for Energy Technology, P.O. Box 40, NO-2027 Kjeller (Norway)
2011-09-15
Research highlights: → 15 nm particles of ZrAl_2 and Zr(Al_0.5Co_0.5)_2 are obtained by attrition and cryomilling. → ZrAl_2 nanoparticles remain inert to hydrogen absorption up to a pressure of ∼2 GPa. → Zr(Al_0.5Co_0.5)_2 nanoparticles exhibit reduced hydrogen absorption as compared to the corresponding bulk compounds. - Abstract: The hydrogen absorption ability of the non-absorbing Al-rich ZrAl_2 compound was examined after reducing its particle size to the nanometer regime. The hydrogen abstinence of bulk ZrAl_2 has previously been related to its excessive elastic shear stiffening. The particle size of ZrAl_2 was reduced by attrition milling and cryomilling. The minimal average particle size was estimated from powder X-ray diffraction analysis to be in the range of 10-20 nm. The hydrogen absorption of the milled compounds was measured in different hydrogenation systems at hydrogen pressures between ∼6 MPa and ∼2 GPa. In all cases the hydrogen absorption was negligible. In addition, there was a reduction in the hydrogen absorption capacity of nanosized Zr(Al_0.5Co_0.5)_2 as compared to the corresponding bulk compound under the same conditions. We suggest, in view of our and other results, that no significant improvement of the thermodynamics (unlike the kinetics) of the hydrogen absorption can be achieved via the nanoparticle avenue.
Grain dissection as a grain size reducing mechanism during ice microdynamics
Steinbach, Florian; Kuiper, Ernst N.; Eichler, Jan; Bons, Paul D.; Drury, Martin R.; Griera, Albert; Pennock, Gill M.; Weikusat, Ilka
2017-04-01
Ice sheets are valuable paleo-climate archives, but can lose their integrity by ice flow. An understanding of the microdynamic mechanisms controlling the flow of ice is essential when assessing climatic and environmental developments related to ice sheets and glaciers. For instance, the development of a consistent mechanistic grain size law would support larger scale ice flow models. Recent research made significant progress in numerically modelling deformation and recrystallisation mechanisms in the polycrystalline ice and ice-air aggregate (Llorens et al., 2016a,b; Steinbach et al., 2016). The numerical setup assumed grain size reduction is achieved by the progressive transformation of subgrain boundaries into new high angle grain boundaries splitting an existing grain. This mechanism is usually termed polygonisation. Analogue experiments suggested, that strain induced grain boundary migration can cause bulges to migrate through the whole of a grain separating one region of the grain from another (Jessell, 1986; Urai, 1987). This mechanism of grain dissection could provide an alternative grain size reducing mechanism, but has not yet been observed during ice microdynamics. In this contribution, we present results using an updated numerical approach allowing for grain dissection. The approach is based on coupling the full field theory crystal visco-plasticity code (VPFFT) of Lebensohn (2001) to the multi-process modelling platform Elle (Bons et al., 2008). VPFFT predicts the mechanical fields resulting from short strain increments, dynamic recrystallisation process are implemented in Elle. The novel approach includes improvements to allow for grain dissection, which was topologically impossible during earlier simulations. The simulations are supported by microstructural observations from NEEM (North Greenland Eemian Ice Drilling) ice core. Mappings of c-axis orientations using the automatic fabric analyser and full crystallographic orientations using electron
Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry
International Nuclear Information System (INIS)
Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.
1994-01-01
The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼ 5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in solution containing carbon substantially enhanced precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report
Reducing Data Size Inequality during Finite Element Model Separation into Superelements
Directory of Open Access Journals (Sweden)
Yu. V. Berchun
2015-01-01
Full Text Available The work considers two methods for automatically separating a finite element model into superelements in order to reduce the computing resources needed to solve linearly elastic problems of solid mechanics. The first method is an algorithm that separates a finite element grid into simply connected sub-regions according to a specified number of nodes per superelement. The second method generates superelements with a specified data size of the coefficient matrix of the equilibrium equations for the internal nodes, which are eliminated during the superelement transformation. Both methods are based on graph theory. The data size of the coefficient matrix is assessed on the assumption that the subsequent solution will use Cholesky's method. Before the data size is assessed, a Cuthill-McKee algorithm renumbers the internal nodes of a superelement both to decrease the profile width of the corresponding matrix of equilibrium equations and to reduce the number of nonzero elements. Test examples compare the two methods in terms of the inequality of the generated superelement partitions, measured by the number of nodes and by the data size of the coefficient matrix of the internal-node equilibrium equations. It is shown that the proposed approach provides a smaller inequality in the data size of the superelement matrices, at the cost of a slightly larger inequality in the number of nodes.
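The node-renumbering step described above can be illustrated with standard sparse-matrix tooling; a minimal sketch, assuming SciPy's reverse Cuthill-McKee routine as a stand-in for the authors' renumbering and a toy sparsity pattern in place of a real superelement matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(a):
    """Half-bandwidth of a sparse symmetric matrix (max |row - col| over nonzeros)."""
    rows, cols = a.nonzero()
    return int(np.max(np.abs(rows - cols))) if rows.size else 0

# Toy stiffness-matrix sparsity pattern for a small superelement (hypothetical data).
pattern = np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],
])
a = csr_matrix(pattern)

perm = reverse_cuthill_mckee(a, symmetric_mode=True)  # new internal-node ordering
a_perm = a[perm][:, perm]                             # renumbered matrix

print("bandwidth before:", bandwidth(a), "after:", bandwidth(a_perm))
```

A smaller bandwidth (profile width) directly reduces the fill-in, and hence the data size, of the Cholesky factor assumed in the assessment above.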
Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A
2013-01-01
Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of the mRS is conceptually appealing, the gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
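The arithmetic behind the error percentages can be sketched as follows; this is an illustration only, not the authors' published programs, and both the trial mRS distribution and the inter-rater "noise" matrix below are hypothetical placeholders rather than the published data:

```python
import numpy as np

# Hypothetical trial distribution over mRS 0..6 and an illustrative inter-rater
# noise matrix: noise[i, j] = P(assessed as j | true score i).
mrs_dist = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])
noise = np.full((7, 7), 0.02)
np.fill_diagonal(noise, 0.88)
noise /= noise.sum(axis=1, keepdims=True)      # rows must sum to 1

# Error over the full ("Shift") scale: probability the assessed score differs from the true one.
full_range_error = 1.0 - np.sum(mrs_dist * np.diag(noise))

def dichotomization_error(cut):
    """P(true and assessed scores fall on opposite sides of the cut-point mRS <= cut)."""
    good = np.arange(7) <= cut
    err = 0.0
    for i, p_i in enumerate(mrs_dist):
        for j in range(7):
            if good[i] != good[j]:
                err += p_i * noise[i, j]
    return err

print(f"full-range error: {100 * full_range_error:.1f}%")
for cut in range(6):
    print(f"cut-point mRS <= {cut}: error = {100 * dichotomization_error(cut):.1f}%")
```

Under any noise matrix of this form, the dichotomized error only counts misclassifications that cross the cut-point, which is why it comes out lower than the full-range error, as the abstract reports.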
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
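The simulation-based power logic described above can be sketched in a few lines; this is not the authors' R package, and it substitutes a simple Euclidean distance matrix from shifted Gaussian data (and a simplified between/within pseudo-F) for their distance-matrix simulator, purely to show how power is read off a permutation test:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

def pseudo_f(d, groups):
    """Simplified PERMANOVA-style statistic: between- vs within-group mean squared distance."""
    iu = np.triu_indices(len(groups), k=1)
    same = (groups[:, None] == groups[None, :])[iu]
    d2 = d[iu] ** 2
    return d2[~same].mean() / d2[same].mean()

def permutation_pvalue(d, groups, n_perm=199):
    f_obs = pseudo_f(d, groups)
    hits = sum(pseudo_f(d, rng.permutation(groups)) >= f_obs for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

def estimate_power(n_per_group, effect, n_dim=10, n_sim=100, alpha=0.05):
    """Power estimated by simulating data with a group-level shift and testing the distance matrix."""
    groups = np.repeat([0, 1], n_per_group)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=(2 * n_per_group, n_dim))
        x[groups == 1] += effect            # exposure/intervention effect on the group centroid
        d = squareform(pdist(x))            # surrogate for a simulated pairwise distance matrix
        rejections += permutation_pvalue(d, groups) < alpha
    return rejections / n_sim

print(estimate_power(n_per_group=10, effect=0.5))
```

Increasing `n_per_group` until the estimated power reaches the target mirrors how a necessary sample size would be read off the simulated distance matrices.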
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
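A crude Monte Carlo stand-in for the accuracy-in-parameter-estimation idea is sketched below; it is not the authors' analytic method, and the one-factor model, loading, number of items, and target width are all assumed planning values. It simply grows the sample size until the simulated spread of the reliability estimate (here Cronbach's alpha, as a simple composite reliability coefficient) is narrower than the desired interval width:

```python
import numpy as np

rng = np.random.default_rng(1)

def cronbach_alpha(x):
    """Cronbach's alpha for an (n subjects x k items) score matrix."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def expected_width(n, k=6, loading=0.7, n_sim=500):
    """Approximate 95% spread of alpha-hat at sample size n by simulation
    (a rough proxy for the expected confidence interval width)."""
    est = []
    for _ in range(n_sim):
        f = rng.normal(size=(n, 1))                                    # common factor
        x = loading * f + np.sqrt(1 - loading**2) * rng.normal(size=(n, k))
        est.append(cronbach_alpha(x))
    lo, hi = np.percentile(est, [2.5, 97.5])
    return hi - lo

# Increase n until the simulated 95% interval of alpha-hat is narrower than 0.10.
n = 50
while expected_width(n) > 0.10:
    n += 25
print("planned sample size:", n)
```

The assurance variant described in the abstract would replace the expected width with a high percentile of the simulated widths before applying the same search.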
DEFF Research Database (Denmark)
Haugan, Ketil; Marcussen, Niels; Kjølbye, Anne Louise
2006-01-01
Treatment with non-selective drugs (eg, long-chain alcohols, halothane) that reduce gap junction intercellular communication (GJIC) is associated with reduced infarct size after myocardial infarction (MI). Therefore, it has been suggested that gap junction intercellular communication stimulating ...
Self-navigation of a scanning tunneling microscope tip toward a micron-sized graphene sample.
Li, Guohong; Luican, Adina; Andrei, Eva Y
2011-07-01
We demonstrate a simple capacitance-based method to quickly and efficiently locate micron-sized conductive samples, such as graphene flakes, on insulating substrates in a scanning tunneling microscope (STM). By using edge recognition, the method is designed to locate and to identify small features when the STM tip is far above the surface, allowing for crash-free search and navigation. The method can be implemented in any STM environment, even at low temperatures and in strong magnetic field, with minimal or no hardware modifications.
Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples
International Nuclear Information System (INIS)
Lisboa-Filho, P N; Deimling, C V; Ortiz, W A
2010-01-01
In this contribution superconducting specimens of YBa2Cu3O7-δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic-size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated to the microstructure of the specimens and, in particular, to the superconducting intra- and intergranular critical current properties.
Clustering for high-dimension, low-sample size data using distance vectors
Terada, Yoshikazu
2013-01-01
In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...
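The core idea, clustering on the rows of the pairwise distance matrix (the "distance vectors") rather than on the raw high-dimensional coordinates, can be illustrated as below; this sketch does not reproduce the exact algorithm of the paper, and the data, dimensions, and use of k-means are illustrative assumptions only:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# HDLSS toy data: 40 samples, 2000 dimensions, two groups with a small mean shift.
n, p = 40, 2000
x = rng.normal(size=(n, p))
x[: n // 2, :50] += 0.6
truth = np.repeat([0, 1], n // 2)

d = cdist(x, x)  # each row is one sample's vector of distances to all other samples

labels_raw = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)   # raw coordinates
labels_dv = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(d)    # distance vectors

print("true groups          :", truth)
print("raw-coordinate labels:", labels_raw)
print("distance-vector label:", labels_dv)
```

The comparison illustrates the abstract's point: in high dimensions the informative signal sits in the values of the pairwise distances, which the distance-vector representation exposes directly.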
Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples
Energy Technology Data Exchange (ETDEWEB)
Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)
2010-01-15
In this contribution superconducting specimens of YBa2Cu3O7-δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic-size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated to the microstructure of the specimens and, in particular, to the superconducting intra- and intergranular critical current properties.
Park, Seung-Min; Huh, Yun Suk; Szeto, Kylan; Joe, Daniel J; Kameoka, Jun; Coates, Geoffrey W; Edel, Joshua B; Erickson, David; Craighead, Harold G
2010-11-05
Biomolecular transport in nanofluidic confinement offers various means to investigate the behavior of biomolecules in their native aqueous environments, and to develop tools for diverse single-molecule manipulations. Recently, a number of simple nanofluidic fabrication techniques have been demonstrated that utilize electrospun nanofibers as a backbone structure. These techniques are limited by the arbitrary dimensions of the resulting nanochannels due to the random nature of electrospinning. Here, a new method for fabricating nanofluidic systems from size-reduced electrospun nanofibers is reported and demonstrated. This method uses the scanned electrospinning technique for generation of oriented sacrificial nanofibers and exposes these nanofibers to harsh but isotropic etching/heating environments to reduce their cross-sectional dimension. The creation of various nanofluidic systems as small as 20 nm is demonstrated, and practical examples of single biomolecular handling, such as DNA elongation in nanochannels and fluorescence correlation spectroscopic analysis of biomolecules passing through nanochannels, are provided.
Larkin, Robert M; Stefano, Giovanni; Ruckle, Michael E; Stavoe, Andrea K; Sinkler, Christopher A; Brandizzi, Federica; Malmstrom, Carolyn M; Osteryoung, Katherine W
2016-02-23
Eukaryotic cells require mechanisms to establish the proportion of cellular volume devoted to particular organelles. These mechanisms are poorly understood. From a screen for plastid-to-nucleus signaling mutants in Arabidopsis thaliana, we cloned a mutant allele of a gene that encodes a protein of unknown function that is homologous to two other Arabidopsis genes of unknown function and to FRIENDLY, which was previously shown to promote the normal distribution of mitochondria in Arabidopsis. In contrast to FRIENDLY, these three homologs of FRIENDLY are found only in photosynthetic organisms. Based on these data, we proposed that FRIENDLY expanded into a small gene family to help regulate the energy metabolism of cells that contain both mitochondria and chloroplasts. Indeed, we found that knocking out these genes caused a number of chloroplast phenotypes, including a reduction in the proportion of cellular volume devoted to chloroplasts to 50% of wild type. Thus, we refer to these genes as REDUCED CHLOROPLAST COVERAGE (REC). The size of the chloroplast compartment was reduced most in rec1 mutants. The REC1 protein accumulated in the cytosol and the nucleus. REC1 was excluded from the nucleus when plants were treated with amitrole, which inhibits cell expansion and chloroplast function. We conclude that REC1 is an extraplastidic protein that helps to establish the size of the chloroplast compartment, and that signals derived from cell expansion or chloroplasts may regulate REC1.
Marchiori, David; Papies, Esther K
2014-04-01
The present research examined the effects of a mindfulness-based intervention to foster healthy eating. Specifically, we tested whether a brief mindfulness manipulation can prevent the portion size effect, and reduce overeating on unhealthy snacks when hungry. 110 undergraduate participants (mean age = 20.9 ± 2.3 years; mean BMI = 22.3 ± 2.5) were served a small or a large portion of chocolate chip cookies after listening to an audio book or performing a mindfulness exercise (i.e., body scan). Current level of hunger was assessed unobtrusively on a visual analog scale before the eating situation. The outcome measure was calorie intake from chocolate chip cookies. When presented with a large compared to a small portion, participants consumed more cookies (+83 kcal). This was not affected by the mindfulness intervention or by hunger. However, while control participants ate more unhealthy food when hungry than when not hungry (+67 kcal), participants in the mindfulness condition did not (+1 kcal). Findings confirm the prevalence and robustness of the portion size effect and suggest that it may be independent from awareness of internal cues. Prevention strategies may benefit more from targeting awareness of the external environment. However, mindfulness-based interventions may be effective in reducing effects of hunger on unhealthy food consumption. Copyright © 2013 Elsevier Ltd. All rights reserved.
Estimated ventricle size using Evans index: reference values from a population-based sample.
Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C
2017-03-01
Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.
Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden
Energy Technology Data Exchange (ETDEWEB)
Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)
2008-12-15
The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.
Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden
International Nuclear Information System (INIS)
Wagner, Annemarie; Boman, Johan; Gatari, Michael J.
2008-01-01
The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers
Dependence of fracture mechanical and fluid flow properties on fracture roughness and sample size
International Nuclear Information System (INIS)
Tsang, Y.W.; Witherspoon, P.A.
1983-01-01
A parameter study has been carried out to investigate the interdependence of mechanical and fluid flow properties of fractures with fracture roughness and sample size. A rough fracture can be defined mathematically in terms of its aperture density distribution. Correlations were found between the shape of the aperture density distribution function and the fracture's stress-strain behavior and fluid flow characteristics. Well-matched fractures had peaked aperture distributions that resulted in very nonlinear stress-strain behavior. With an increasing degree of mismatching between the top and bottom of a fracture, the aperture density distribution broadened and the nonlinearity of the stress-strain behavior became less accentuated. The different aperture density distributions also gave rise to qualitatively different fluid flow behavior. Findings from this investigation make it possible to estimate the stress-strain and fluid flow behavior when the roughness characteristics of the fracture are known and, conversely, to estimate the fracture roughness from an examination of the hydraulic and mechanical data. Results from this study showed that both the mechanical and hydraulic properties of the fracture are controlled by the large-scale roughness of the joint surface. This suggests that when the stress-flow behavior of a fracture is being investigated, the size of the rock sample should be larger than the typical wavelength of the roughness undulations
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
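A back-of-the-envelope planning adjustment implied by the two rules of thumb quoted above is sketched below; the inflation factors are the upper-bound values reported in the abstract (14% more clusters for varying cluster sizes, a conversion factor of at most 1.25 from first-order MQL to second-order PQL), not study-specific quantities, and the baseline number of clusters must come from whatever closed-form formula the planner already uses:

```python
import math

def adjusted_clusters(k_equal_mql, varying_cluster_sizes=True,
                      pql_conversion=1.25, unequal_size_inflation=1.14):
    """Rough planning adjustment for a cluster randomized trial with a binary outcome.

    k_equal_mql: clusters required by a closed-form calculation assuming
    first-order MQL estimation and equal cluster sizes.
    """
    k = k_equal_mql * pql_conversion
    if varying_cluster_sizes:
        k *= unequal_size_inflation
    return math.ceil(k)

# e.g. a closed-form (MQL, equal cluster sizes) answer of 40 clusters
print(adjusted_clusters(40))  # -> 57
```

For a trial with strongly varying cluster sizes, the abstract's approximate efficiency formula would replace the flat 14% inflation with a value computed from the coefficient of variation of the cluster sizes.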
Performance of Disk Mill Type Mechanical Grinder for Size Reducing Process of Robusta Roasted Beans
Directory of Open Access Journals (Sweden)
Sri Mulato
2006-12-01
Full Text Available One of the important steps in secondary coffee processing that influences final product quality, such as consistency and uniformity, is the milling process. Indonesian smallholders have usually used a "lumpang" to mill roasted coffee beans into coffee powder, which yields a final product that is neither uniform nor consistent, and gives low productivity. Milling of roasted coffee beans can instead be done with a disk mill type mechanical grinder, which smallholders already use for milling several cereals. The Indonesian Coffee and Cocoa Research Institute has developed a disk mill type grinding machine for milling roasted coffee beans. The objective of this research was to determine the performance of the disk mill type grinding machine for size reduction of Robusta roasted beans prepared from several dried-bean sizes and roasting levels. Robusta dried beans obtained from the dry processing method had a moisture content of 13-14% (wet basis), a density of 680-685 kg/m3, and were classified into 3 size levels. The results showed that the disk mill type grinding machine could be used for milling Robusta roasted beans. The machine has a capacity of 31-54 kg/h at 5,310-5,610 rpm axle rotation, depending on roasting level. Other technical parameters were 91-98% process efficiency, 19-31 ml/kg fuel consumption, 0.3-1% slip, 50-55% of particles finer than 230 mesh and 38-44% of particles coarser than 100 mesh, a 32-38% increase in lightness, a 0.6-12.6% decrease in density, and a coffee powder solubility of 28-30%. The milling cost per kilogram of light-roast Robusta roasted beans at a capacity of 30 kg/hour was Rp 362.9. Key words: Coffee roasted, Robusta, disk mill, mechanical grinder, size reduction.
Shieh, Gwowen; Jan, Show-Li
2013-01-01
The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…
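The required-sample-size question posed above can also be attacked by brute force; the sketch below is not the authors' analytic approach, reduces the problem to the two most extreme group means (a simplification of Welch's test for several groups), and uses hypothetical planning means and heterogeneous standard deviations:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

def welch_power(n_per_group, means, sds, alpha=0.05, n_sim=2000):
    """Simulated power of Welch's t-test for the two most extreme group means."""
    lo, hi = min(means), max(means)
    sd_lo, sd_hi = sds[means.index(lo)], sds[means.index(hi)]
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(lo, sd_lo, n_per_group)
        b = rng.normal(hi, sd_hi, n_per_group)
        hits += ttest_ind(a, b, equal_var=False).pvalue < alpha  # Welch correction
    return hits / n_sim

# Hypothetical planning values: three group means with unequal SDs.
means, sds = [0.0, 0.3, 0.5], [1.0, 1.2, 1.5]
n = 50
while welch_power(n, means, sds) < 0.80:
    n += 10
print("per-group n:", n)
```

Because the sample size is chosen against the simulated (actual) power rather than a nominal approximation, the resulting n errs on the conservative side, in line with the abstract's observation that actual power is at least the nominal power.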
Dry paths effectively reduce road mortality of small and medium-sized terrestrial vertebrates.
Niemi, Milla; Jääskeläinen, Niina C; Nummi, Petri; Mäkelä, Tiina; Norrdahl, Kai
2014-11-01
Wildlife passages are widely used mitigation measures designed to reduce the adverse impacts of roads on animals. We investigated whether road kills of small and medium-sized terrestrial vertebrates can be reduced by constructing dry paths adjacent to streams that pass under road bridges. The study was carried out in southern Finland during the summer of 2008. We selected ten road bridges with dry paths and ten bridges without them, and an individual dry land reference site for each study bridge on the basis of landscape and traffic features. A total of 307 dead terrestrial vertebrates were identified during the ten-week study period. The presence of dry paths decreased the number of road-killed terrestrial vertebrates (Poisson GLMM), although the effect of dry paths on mammal road-kills was less clear. In the mammal model, a lack of dry paths increased the number of carcasses (p = 0.001), whereas the number of casualties at dry path bridges was comparable with dry land reference sites. A direct comparison of the mortality ratios suggests an average efficiency of 79% for the dry paths. When considering amphibians and mammals alone, the computed effectiveness was 88 and 70%, respectively. Our results demonstrate that dry paths under road bridges can effectively reduce road-kills of small and medium-sized terrestrial vertebrates, even without guiding fences. Dry paths seemed to especially benefit amphibians, which are a threatened species group worldwide and known to suffer high traffic mortality. Copyright © 2014 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Esterhuyse Adriaan J
2010-06-01
Full Text Available Abstract Background and Aims Recent studies have shown that dietary red palm oil (RPO) supplementation improves functional recovery following ischaemia/reperfusion in isolated hearts. The main aim of this study was to investigate the effects of dietary RPO supplementation on myocardial infarct size after ischaemia/reperfusion injury. The effects of dietary RPO supplementation on matrix metalloproteinase-2 (MMP2) activation and PKB/Akt phosphorylation were also investigated. Materials and methods Male Wistar rats were divided into three groups and fed a standard rat chow diet (SRC), a SRC supplemented with RPO, or a SRC supplemented with sunflower oil (SFO), for a five-week period, respectively. After the feeding period, hearts were excised and perfused on a Langendorff perfusion apparatus. Hearts were subjected to thirty minutes of normothermic global ischaemia and two hours of reperfusion. Infarct size was determined by triphenyltetrazolium chloride staining. Coronary effluent was collected for the first ten minutes of reperfusion in order to measure MMP2 activity by gelatin zymography. Results Dietary RPO-supplementation decreased myocardial infarct size significantly when compared to the SRC-group and the SFO-supplemented group (9.1 ± 1.0% versus 30.2 ± 3.9% and 27.1 ± 2.4%, respectively). Both dietary RPO- and SFO-supplementation were able to decrease MMP2 activity when compared to the SRC fed group. PKB/Akt phosphorylation (Thr 308) was found to be significantly higher in the dietary RPO supplemented group when compared to the SFO supplemented group at 10 minutes into reperfusion. There were, however, no significant changes observed in ERK phosphorylation. Conclusions Dietary RPO-supplementation was found to be more effective than SFO-supplementation in reducing myocardial infarct size after ischaemia/reperfusion injury. Both dietary RPO and SFO were able to reduce MMP2 activity, which suggests that MMP2 activity does not play a major role in
Assessment of bone biopsy needles for sample size, specimen quality and ease of use
International Nuclear Information System (INIS)
Roberts, C.C.; Liu, P.T.; Morrison, W.B.; Leslie, K.O.; Carrino, J.A.; Lozevski, J.L.
2005-01-01
To assess whether there are significant differences in ease of use and quality of samples among several bone biopsy needles currently available. Eight commonly used, commercially available bone biopsy needles of different gauges were evaluated. Each needle was used to obtain five consecutive samples from a lamb lumbar pedicle. Subjective assessment of ease of needle use, ease of sample removal from the needle and sample quality, before and after fixation, was graded on a 5-point scale. The number of attempts necessary to reach a 1 cm depth was recorded. Each biopsy specimen was measured in the gross state and after fixation. The RADI Bonopty 15 g and Kendall Monoject J-type 11 g needles were rated the easiest to use, while the Parallax Core-Assure 11 g and the Bard Ostycut 16 g were rated the most difficult. Parallax Core-Assure and Kendall Monoject needles had the highest quality specimen in the gross state; Cook Elson/Ackerman 14 g and Bard Ostycut 16 g needles yielded the lowest. The MD Tech without Trap-Lok 11 g needle had the highest quality core after fixation, while the Bard Ostycut 16 g had the lowest. There was a significant difference in pre-fixation sample length between needles (P<0.0001), despite acquiring all cores to a standard 1 cm depth. Core length and width decreased in size by an average of 28% and 42%, respectively, after fixation. Bone biopsy needles vary significantly in performance. Detailed knowledge of the strengths and weaknesses of different needles is important to make an appropriate selection for each individual's practice. (orig.)
Directory of Open Access Journals (Sweden)
Maranhão RC
2017-05-01
Full Text Available Raul C Maranhão,1,2 Maria C Guido,1 Aline D de Lima,1 Elaine R Tavares,1 Alyne F Marques,1 Marcelo D Tavares de Melo,3 Jose C Nicolau,3 Vera MC Salemi,3 Roberto Kalil-Filho3 1Laboratory of Metabolism and Lipids, 2Faculty of Pharmaceutical Sciences, 3Heart Failure Unit, Clinical Cardiology Division, Heart Institute (InCor, Medical School Hospital, University of São Paulo, São Paulo, Brazil Purpose: Acute myocardial infarction (MI) is accompanied by myocardial inflammation, fibrosis, and ventricular remodeling that, when excessive or not properly regulated, may lead to heart failure. Previously, lipid core nanoparticles (LDE) used as carriers of the anti-inflammatory drug methotrexate (MTX) produced an 80-fold increase in the cell uptake of MTX. LDE-MTX treatment reduced vessel inflammation and atheromatous lesions induced in rabbits by cholesterol feeding. The aim of the study was to investigate the effects of LDE-MTX on rats with MI, compared with commercial MTX treatment. Materials and methods: Thirty-eight Wistar rats underwent left coronary artery ligation and were treated with LDE-MTX, or with MTX (1 mg/kg intraperitoneally, once/week, starting 24 hours after surgery), or with LDE without drug (MI-controls). A sham-surgery group (n=12) was also included. Echocardiography was performed 24 hours and 6 weeks after surgery. The animals were euthanized and their hearts were analyzed for morphometry, protein expression, and confocal microscopy. Results: LDE-MTX treatment achieved a 40% improvement in left ventricular (LV) systolic function and reduced cardiac dilation and LV mass, as shown by echocardiography. LDE-MTX reduced the infarction size, myocyte hypertrophy and necrosis, number of inflammatory cells, and myocardial fibrosis, as shown by morphometric analysis. LDE-MTX increased antioxidant enzymes; decreased apoptosis, macrophages, reactive oxygen species production; and tissue hypoxia in non-infarcted myocardium. LDE-MTX increased adenosine
Performances of Different Fragment Sizes for Reduced Representation Bisulfite Sequencing in Pigs
DEFF Research Database (Denmark)
Yuan, Xiao Long; Zhang, Zhe; Pan, Rong Yang
2017-01-01
sizes might decrease when the dataset size was more than 70, 50 and 110 million reads for these three fragment sizes, respectively. Given a 50-million dataset size, the average sequencing depth of the detected CpG sites in the 110-220 bp fragment size appeared to be deeper than in the 40-110 bp and 40...
Effects of sample size on estimation of rainfall extremes at high temperatures
Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik
2017-09-01
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
Effects of sample size on estimation of rainfall extremes at high temperatures
Directory of Open Access Journals (Sweden)
B. Boessenkool
2017-09-01
Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
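The contrast drawn in these two abstracts, an empirical plotting-position quantile versus a parametric GPD quantile fitted with L-moments in a small sample, can be sketched as follows; the sketch assumes the standard two-parameter GPD for exceedances (location zero, Hosking's parameterization) and uses synthetic data in place of station rainfall:

```python
import numpy as np

rng = np.random.default_rng(4)

def gpd_lmom_fit(x):
    """Two-parameter GPD fit (location 0) via sample L-moments (Hosking's k = -xi)."""
    x = np.sort(x)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    k = l1 / l2 - 2          # shape
    sigma = (1 + k) * l1     # scale
    return k, sigma

def gpd_quantile(p, k, sigma):
    return sigma * (1 - (1 - p) ** k) / k

# Small synthetic sample of exceedances from a heavy-ish tailed GPD (hypothetical intensities).
true_k, true_sigma = -0.1, 5.0
x = gpd_quantile(rng.uniform(size=30), true_k, true_sigma)

k, sigma = gpd_lmom_fit(x)
p = 0.99
print("empirical 99% quantile :", np.quantile(x, p))       # limited by the sample size
print("GPD (L-moment) quantile:", gpd_quantile(p, k, sigma))
```

With only 30 observations, the empirical estimate cannot represent return periods beyond the sample, which is the undersampling effect the abstract invokes; the fitted GPD extrapolates beyond it.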
What about N? A methodological study of sample-size reporting in focus group studies.
Carlsen, Benedicte; Glenton, Claire
2011-03-11
Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these
What about N? A methodological study of sample-size reporting in focus group studies
Directory of Open Access Journals (Sweden)
Glenton Claire
2011-03-01
Full Text Available Abstract Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method
Directory of Open Access Journals (Sweden)
Pitchaiah Mandava
Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We
Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples
Directory of Open Access Journals (Sweden)
Hyunok Oh
2003-05-01
Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with the given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store the pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.
Reduced size liver transplantation from a donor supported by a Berlin Heart.
Misra, M V; Smithers, C J; Krawczuk, L E; Jenkins, R L; Linden, B C; Weldon, C B; Kim, H B
2009-11-01
Patients on cardiac assist devices are often considered to be high-risk solid organ donors. We report the first case of a reduced size liver transplant performed using the left lateral segment of a pediatric donor whose cardiac function was supported by a Berlin Heart. The recipient was a 22-day-old boy with neonatal hemochromatosis who developed fulminant liver failure shortly after birth. The transplant was complicated by mild delayed graft function, which required delayed biliary reconstruction and abdominal wall closure, as well as a bile leak. However, the graft function improved quickly over the first week and the patient was discharged home with normal liver function 8 weeks after transplant. The presence of a cardiac assist device should not be considered an absolute contraindication for abdominal organ donation. Normal organ procurement procedures may require alteration due to the unusual technical obstacles that are encountered when the donor has a cardiac assist device.
Maurer, Sara; Giess, Mario; Koch, Oliver; Summerer, Daniel
2016-12-16
Transcription-activator-like effector (TALE) proteins consist of concatenated repeats that recognize consecutive canonical nucleobases of DNA via the major groove in a programmable fashion. Since this groove displays unique chemical information for the four human epigenetic cytosine nucleobases, TALE repeats with epigenetic selectivity can be engineered, with potential to establish receptors for the programmable decoding of all human nucleobases. TALE repeats recognize nucleobases via key amino acids in a structurally conserved loop whose backbone is positioned very close to the cytosine 5-carbon. This complicates the engineering of selectivities for large 5-substituents. To interrogate a more promising structural space, we engineered size-reduced repeat loops, performed saturation mutagenesis of key positions, and screened a total of 200 repeat-nucleobase interactions for new selectivities. This provided insight into the structural requirements of TALE repeats for affinity and selectivity, revealed repeats with improved or relaxed selectivity, and resulted in the first selective sensor of 5-carboxylcytosine.
Proton pump inhibitors reduce the size and acidity of the acid pocket in the stomach.
Rohof, Wout O; Bennink, Roelof J; Boeckxstaens, Guy E
2014-07-01
The gastric acid pocket is believed to be the reservoir from which acid reflux events originate. Little is known about how changes in position, size, and acidity of the acid pocket contribute to the therapeutic effect of proton pump inhibitors (PPIs) in patients with gastroesophageal reflux disease (GERD). Thirty-six patients with GERD (18 not taking PPIs, 18 taking PPIs; 19 men; age, 55 ± 2.1 y) were analyzed by concurrent high-resolution manometry and pH-impedance monitoring after a standardized meal. The acid pocket was visualized using scintigraphy after intravenous administration of (99m)technetium-pertechnetate. The size of the acid pocket was measured and its position was determined, relative to the diaphragm, using radionuclide markers on a high-resolution manometry catheter. At the end of the study, the acid pocket was aspirated, and its pH level was measured. The number of reflux episodes was comparable between patients on and off PPIs, but the number of acid reflux episodes was reduced significantly in patients on PPIs. In patients on PPIs, the acid pocket was smaller and more frequently located below the diaphragm. The mean pH of the acid pocket was significantly lower in patients not taking PPIs (n = 6) than in those who were (n = 16) (0.9; range, 0.7-1.2 vs 4.0; range, 1.6-5.9). The pH of acid pockets correlated significantly with the lowest pH values measured for refluxate (r = 0.72; P < .01). Based on analyses of acid pockets in patients with GERD, the acid pocket appears to be a reservoir from which reflux occurs when patients are receiving PPIs. PPIs might affect the size, acidity, or position of the acid pocket, which contributes to their efficacy in patients with GERD. Copyright © 2014 AGA Institute. Published by Elsevier Inc. All rights reserved.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examined the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
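Recalculating a reported sample size from the three quantities the abstract says most papers provide (significance level, power, minimum clinically important effect size) only needs the standard two-group formula for a continuous outcome; the reported number in the sketch below is a hypothetical example, not taken from any reviewed trial:

```python
from math import ceil
from scipy.stats import norm

def two_group_n(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means (standardized effect size)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * (z_a + z_b) ** 2 / effect_size ** 2)

# Illustrative check of a reported a priori calculation (hypothetical numbers).
reported_n = 64
recalculated_n = two_group_n(effect_size=0.5)              # 63 per group
pct_diff = 100 * (reported_n - recalculated_n) / recalculated_n
print(recalculated_n, f"{pct_diff:+.1f}%")
```

Applying this recalculation to every paper and summarizing the signed percentage differences yields the kind of median and inter-quartile range quoted above.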
Abdalla, M. A.; Choudhary, D. Kumar; Chaudhary, R. Kumar
2018-02-01
This paper presents the design of two reduced-size dual-band metamaterial bandpass filters, their simulation, and measurements of the proposed filters. These filters support different frequency bands and could primarily be utilized in radio frequency identification (RFID) applications. Each filter includes three cells, of which two are symmetrical and both inductively coupled to the third cell located between them. In the proposed designs, three different metamaterial composite right/left handed (CRLH) cell resonators have been analysed for compactness. The CRLH cell consists of an interdigital capacitor, a stub/meander line/spiral inductor and a via to connect the top of the structure and the ground plane. Finally, the proposed dual-band bandpass filters (using meander line and spiral inductors) show size reductions of 65% and 50% (with a 25% operating frequency reduction), respectively, in comparison with the reference filter using a stub inductor. More than 30 dB attenuation has been achieved between the two passbands.
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve of C-peptide following a 2-hour mixed meal tolerance test, measured from baseline to 12 months after enrollment in 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve of C-peptide following a 2-h mixed meal tolerance test, measured from baseline to 12 months after enrolment in 498 individuals enrolled in five prior TrialNet studies of recent onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
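The planning logic in these two abstracts (covariate adjustment lowers the residual variance, and the required sample size scales with that variance) can be illustrated as below; the effect size and both standard deviations are made-up planning values, not the TrialNet estimates, and the one-sample z approximation is only a sketch:

```python
from math import ceil
from scipy.stats import norm

def planning_n(delta, sd, alpha=0.05, power=0.80):
    """n to detect a mean difference `delta` against an expected trajectory
    (one-sample z approximation, planning sketch only)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z * sd / delta) ** 2)

# Hypothetical planning values for the 12-month C-peptide change.
delta = 0.2            # targeted treatment effect on the rate of loss
sd_unadjusted = 0.50   # SD of the raw change score
sd_ancova = 0.36       # residual SD after adjusting for age and baseline C-peptide

n_unadj = planning_n(delta, sd_unadjusted)
n_adj = planning_n(delta, sd_ancova)
print(n_unadj, n_adj, f"reduction: {100 * (1 - n_adj / n_unadj):.0f}%")
```

Because n is proportional to the variance, shrinking the residual SD from 0.50 to 0.36 in this toy example cuts the target sample size by roughly half, mirroring the "nearly 50% reduction" reported above.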
Particle size of samples of primary kaolin clay from pegmatites in the Junco do Serido - PB and Equador - RN regions
International Nuclear Information System (INIS)
Meyer, M.F.; Sousa, J.B.M.; Sales, L.R.; Silva, P.A.S.; Lima, A.D.D.
2016-01-01
Kaolin is a clay formed mainly of kaolinite resulting from feldspar weathering or hydrothermal alteration. This study investigates the mode of occurrence and particle size of kaolin from pegmatites of the Borborema Pegmatitic Province in the Junco do Serido-PB and Equador-RN regions. These variables were analyzed over granulometric intervals obtained from wet sieving of samples from pegmatite mines in the region. Kaolin was sieved using 200, 325, 400 and 500 mesh sieves, and histograms of statistical parameters were generated for the retained sieve fractions. Kaolin particles are extremely fine and pass in their entirety through the 500 mesh sieve. The characterization of minerals in the fine fractions by X-ray diffraction showed that the relative amount of sericite in the fractions retained on the 400 and 500 mesh sieves impairs the whiteness and mineralogical texture of the produced kaolin. (author)
The Macdonald and Savage titrimetric procedure scaled down to 4 mg sized plutonium samples. P. 1
International Nuclear Information System (INIS)
Kuvik, V.; Lecouteux, C.; Doubek, N.; Ronesch, K.; Jammet, G.; Bagliano, G.; Deron, S.
1992-01-01
The original Macdonald and Savage amperometric method scaled down to milligram-sized plutonium samples was further modified. The electro-chemical process of each redox step and the end-point of the final titration were monitored potentiometrically. The method is designed to determine 4 mg of plutonium dissolved in nitric acid solution. It is suitable for the direct determination of plutonium in non-irradiated fuel with a uranium-to-plutonium ratio of up to 30. The precision and accuracy are ca. 0.05-0.1% (relative standard deviation). Although the procedure is very selective, the following species interfere: vanadyl(IV) and vanadate (almost quantitatively), neptunium (one electron exchange per mole), nitrites, fluorosilicates (milligram amounts yield a slight bias) and iodates. (author). 15 refs.; 8 figs.; 7 tabs
Dental arch dimensions, form and tooth size ratio among a Saudi sample
Directory of Open Access Journals (Sweden)
Haidi Omar
2018-01-01
Full Text Available Objectives: To determine the dental arch dimensions and arch forms in a sample of Saudi orthodontic patients, to investigate the prevalence of Bolton anterior and overall tooth size discrepancies, and to compare the effect of gender on the measured parameters. Methods: This study is a biometric analysis of dental casts of 149 young adults recruited from different orthodontic centers in Jeddah, Saudi Arabia. The dental arch dimensions were measured. The measured parameters were arch length, arch width, Bolton's ratio, and arch form. The data were analyzed using IBM SPSS software version 22.0 (IBM Corporation, New York, USA); this cross-sectional study was conducted between April 2015 and May 2016. Results: Dental arch measurements, including inter-canine and inter-molar distance, were found to be significantly greater in males than females (p < 0.05). The most prevalent dental arch forms were narrow tapered (50.3%) and narrow ovoid (34.2%), respectively. The prevalence of tooth size discrepancy in all cases was 43.6% for anterior ratio and 24.8% for overall ratio. The mean Bolton's anterior ratio in all malocclusion classes was 79.81%, whereas the mean Bolton's overall ratio was 92.21%. There was no significant difference between males and females regarding Bolton's ratio. Conclusion: The most prevalent arch form was narrow tapered, followed by narrow ovoid. Males generally had larger dental arch measurements than females, and the prevalence of tooth size discrepancy was more in Bolton's anterior teeth ratio than in overall ratio.
Mori, T.; Moteki, N.; Ohata, S.; Koike, M.; Azuma, K. G.; Miyazaki, Y.; Kondo, Y.
2015-12-01
Black carbon (BC) is the strongest contributor to sunlight absorption among atmospheric aerosols. Quantitative understanding of wet deposition of BC, which strongly affects the spatial distribution of BC, is important to improve our understanding of climate change. We have devised a technique for measuring the masses of individual BC particles in rainwater and snow samples, as a combination of a nebulizer and a single-particle soot photometer (SP2) (Ohata et al. 2011, 2013; Schwarz et al. 2012; Mori et al. 2014). We show two important improvements in this technique: 1) We have extended the upper limit of detectable BC particle diameter from 0.9 μm to about 4.0 μm by modifying the photodetector for measuring the laser-induced incandescence signal. 2) We introduced a pneumatic nebulizer Marin-5 (Cetac Technologies Inc., Omaha, NE, USA) and experimentally confirmed its high extraction efficiency (~50%), independent of particle diameter up to 2.0 μm. Using our improved system, we simultaneously measured the size distributions of BC particles in air and rainwater in Tokyo. We observed that the size distribution of BC in rainwater was shifted toward larger diameters than that in air, indicating that large BC particles were effectively removed by precipitation. We also observed BC particles with diameters larger than 1.0 μm, indicating that further studies of wet deposition of BC will require the use of the modified SP2.
Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M
2013-02-01
Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
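The power analysis described above follows the standard two-sample calculation for detecting a fractional slowing of the mean rate of change. Below is a minimal sketch of that calculation; the atrophy-rate summaries used as inputs are hypothetical placeholders, not the ADNI estimates.

```python
import math
from scipy.stats import norm

def n_per_arm(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
    """Subjects per arm needed to detect a `reduction` (e.g. 25%) slowing of the
    mean rate of change with a two-sided test, assuming normally distributed rates:
    n = 2 * sd^2 * (z_{1-a/2} + z_{1-b})^2 / (reduction * mean)^2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    delta = reduction * abs(mean_change)
    return math.ceil(2 * (sd_change ** 2) * (z_a + z_b) ** 2 / delta ** 2)

# Hypothetical 24-month atrophy-rate summaries (percent volume change);
# illustrative values only, not taken from the ADNI analysis.
print(n_per_arm(mean_change=-1.5, sd_change=1.0))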
International Nuclear Information System (INIS)
Haugboel, Steven; Pinborg, Lars H.; Arfan, Haroon M.; Froekjaer, Vibe M.; Svarer, Claus; Knudsen, Gitte M.; Madsen, Jacob; Dyrby, Tim B.
2007-01-01
To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. In cortical brain regions with a high density of 5-HT2A receptors, the outcome parameter (binding potential, BP1) showed high reproducibility, with a median difference between the two group measurements of 6% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high receptor density is approximately 27, whereas for low receptor binding regions the required sample size is substantially higher. This study demonstrates that [18F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT2A receptor density. Moreover, partial volume correction considerably reduces the sample size required to detect regional changes between groups. (orig.)
Directory of Open Access Journals (Sweden)
Shaukat S. Shahid
2016-06-01
Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) belonging to a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences data sets are invariably small, owing to the high cost of collecting and analysing samples, we restricted our study to relatively small sample sizes. We focused attention on the comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
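A minimal sketch of the bootstrap procedure described above is given below: bootstrap samples of a given size are drawn from a data matrix and the leading eigenvalues of the correlation matrix are recorded. The data matrix and dimensions are stand-ins, not the water-quality data of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(55, 22))   # stand-in for the 55 x 22 water-quality matrix

def bootstrap_eigenvalues(X, n, n_boot=100, n_keep=10):
    """Draw n_boot bootstrap samples of n rows (with replacement) and return
    the first n_keep eigenvalues of each sample's correlation matrix."""
    eigs = []
    for _ in range(n_boot):
        idx = rng.integers(0, X.shape[0], size=n)
        corr = np.corrcoef(X[idx], rowvar=False)
        vals = np.sort(np.linalg.eigvalsh(corr))[::-1]
        eigs.append(vals[:n_keep])
    return np.array(eigs)

for n in (20, 30, 40, 50):
    ev = bootstrap_eigenvalues(X, n)
    print(n, ev.mean(axis=0).round(2))
```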
Weighted piecewise LDA for solving the small sample size problem in face verification.
Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis
2007-03-01
A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
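The weighted piecewise discriminant analysis of the paper is not reproduced here. As a generic illustration of the small-sample-size regime it addresses, the sketch below fits a shrinkage-regularized LDA (a common, simpler remedy when the number of training samples is far smaller than the feature dimension) to toy similarity-score data; all data and dimensions are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Toy similarity-score features: few training samples, many features (SSS regime).
X_genuine  = rng.normal(0.0, 1.0, size=(15, 200))
X_impostor = rng.normal(0.8, 1.0, size=(15, 200))
X = np.vstack([X_genuine, X_impostor])
y = np.array([0] * 15 + [1] * 15)

# Ledoit-Wolf shrinkage stabilises the covariance estimate when n << p;
# this is a generic workaround, not the weighted piecewise LDA of the paper.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
print(clf.score(X, y))
```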
Fan, Chunpeng; Zhang, Donghui
2012-01-01
Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in the place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples
Directory of Open Access Journals (Sweden)
Inés Lozano-Ramos
2015-05-01
Full Text Available Renal biopsy is the gold-standard procedure to diagnose most renal pathologies. However, this invasive method is of limited repeatability and often describes an irreversible renal damage. Urine is an easily accessible fluid and urinary extracellular vesicles (EVs) may be ideal to describe new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them contain a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC) as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed by protein content and flow cytometry to determine the presence of tetraspanin markers (CD63 and CD9). The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM) and nanoparticle tracking analysis, revealing the presence of EVs. When analysed by sodium dodecyl sulphate–polyacrylamide gel electrophoresis, tetraspanin-peak fractions from urine concentrated samples contained multiple bands but the main urine proteins (such as Tamm–Horsfall protein) were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species. To summarize, our results demonstrated that concentrated urine followed by SEC is a suitable option to isolate EVs with a low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches.
Gase, Lauren; Dunning, Lauren; Kuo, Tony; Simon, Paul; Fielding, Jonathan E
2014-03-20
Reducing the portion size of food and beverages served at restaurants has emerged as a strategy for addressing the obesity epidemic; however, barriers and facilitators to achieving this goal are not well characterized. In fall 2012, the Los Angeles County Department of Public Health conducted semistructured interviews with restaurant owners to better understand contextual factors that may impede or facilitate participation in a voluntary program to recognize restaurants for offering reduced-size portions. Interviews were completed with 18 restaurant owners (representing nearly 350 restaurants). Analyses of qualitative data revealed 6 themes related to portion size: 1) perceived customer demand is central to menu planning; 2) multiple portion sizes are already being offered for at least some food items; 3) numerous logistical barriers exist for offering reduced-size portions; 4) restaurant owners have concerns about potential revenue losses from offering reduced-size portions; 5) healthful eating is the responsibility of the customer; and 6) a few owners want to be socially responsible industry leaders. A program to recognize restaurants for offering reduced-size portions may be a feasible approach in Los Angeles County. These findings may have applications for jurisdictions interested in engaging restaurants as partners in reducing the obesity epidemic.
Dunning, Lauren; Kuo, Tony; Simon, Paul; Fielding, Jonathan E.
2014-01-01
Introduction Reducing the portion size of food and beverages served at restaurants has emerged as a strategy for addressing the obesity epidemic; however, barriers and facilitators to achieving this goal are not well characterized. Methods In fall 2012, the Los Angeles County Department of Public Health conducted semistructured interviews with restaurant owners to better understand contextual factors that may impede or facilitate participation in a voluntary program to recognize restaurants for offering reduced-size portions. Results Interviews were completed with 18 restaurant owners (representing nearly 350 restaurants). Analyses of qualitative data revealed 6 themes related to portion size: 1) perceived customer demand is central to menu planning; 2) multiple portion sizes are already being offered for at least some food items; 3) numerous logistical barriers exist for offering reduced-size portions; 4) restaurant owners have concerns about potential revenue losses from offering reduced-size portions; 5) healthful eating is the responsibility of the customer; and 6) a few owners want to be socially responsible industry leaders. Conclusion A program to recognize restaurants for offering reduced-size portions may be a feasible approach in Los Angeles County. These findings may have applications for jurisdictions interested in engaging restaurants as partners in reducing the obesity epidemic. PMID:24650622
Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold
2016-04-25
To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to determine whether the optimal sample size should be adjusted based on hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
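For readers who want to reproduce this kind of comparison, the sketch below computes adverse-event rates per 1000 patient-days and an approximate rate ratio with a log-normal confidence interval. The event counts and patient-days are hypothetical illustrations, not the study's raw data.

```python
import math

def rate_ratio_ci(events_a, days_a, events_b, days_b, z=1.96):
    """Adverse events per 1000 patient-days for two samples and an
    approximate 95% CI for their rate ratio (log-normal approximation)."""
    rate_a = 1000 * events_a / days_a
    rate_b = 1000 * events_b / days_b
    rr = rate_a / rate_b
    se = math.sqrt(1 / events_a + 1 / events_b)
    return rate_a, rate_b, rr, (rr * math.exp(-z * se), rr * math.exp(z * se))

# Hypothetical counts and patient-days for the large and small samples.
print(rate_ratio_ci(events_a=550, days_a=14000, events_b=110, days_b=4050))
```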
Lexis, Chris P. H.; Wieringa, Wouter G.; Hiemstra, Bart; van Deursen, Vincent M.; Lipsic, Erik; van der Harst, Pim; van Veldhuisen, Dirk J.; van der Horst, Iwan C. C.
Increased myocardial infarct (MI) size is associated with higher risk of developing left ventricular dysfunction, heart failure and mortality. Experimental studies have suggested that metformin treatment reduces MI size after induced ischaemia but human data is lacking. We aimed to investigate the
From micro- to nanomagnetic dots: evolution of the eigenmode spectrum on reducing the lateral size
International Nuclear Information System (INIS)
Carlotti, G; Madami, M; Gubbiotti, G; Tacchi, S; Hartmann, F; Emmerling, M; Kamp, M; Worschech, L
2014-01-01
Brillouin light scattering experiments and micromagnetic simulations have been exploited to investigate the spectrum of thermally excited magnetic eigenmodes in 10 nm-thick elliptical Permalloy dots, when the longer axis D is scaled down from about 1000 to 100 nm. It is shown that for D larger than about 200 nm the characteristics of the spin-wave eigenmodes are dominated by dipolar energy, while for D in the range of about 100 to 200 nm exchange energy effects cause qualitative and quantitative differences in the spin-wave spectrum. In this ‘mesoscopic’ regime, the usual classification scheme, involving one fundamental mode with large average magnetization and many other modes collected in families with specific symmetries, no longer holds. Rather, one finds the simultaneous presence of two modes with ‘fundamental’ character, i.e. with a significant and comparable value of the average dynamical magnetization: the former is at larger frequency and has its maximum amplitude at the dot's centre, while the latter occurs at lower frequency and is localized at the dot's edges. Interestingly, the maximum intensity swaps from the higher frequency mode to the lower frequency one, just when the dot size is reduced from about 200 to 100 nm. This is relevant in view of the exploitation of nanodots for the design of nanomagnetic devices with lateral dimensions in the above interval, such as memory cells, logic gates, reading heads and spin-torque oscillators. (paper)
The reduced local lymph node assay: the impact of group size.
Ryan, Cindy A; Chaney, Joel G; Kern, Petra S; Patlewicz, Grace Y; Basketter, David A; Betts, Catherine J; Dearman, Rebecca J; Kimber, Ian; Gerberick, G Frank
2008-05-01
The local lymph node assay (LLNA) is a skin sensitization test that provides animal welfare benefits. To reduce animal usage further, a modified version (rLLNA) was proposed. Conducting the rLLNA as a screening test with a single high dose group and vehicle control differentiated accurately between skin sensitizers and non-sensitizers. This study examined whether a reduction in animal number/group is feasible. Historical data were utilized to examine the impact of conducting the rLLNA with two mice/group. To assess the effect on the stimulation index (SI), 41 datasets with individual animal data derived using five mice/group were analysed. SIs were calculated on all possible combinations of two control and two high dose group disintegrations per minute (dpm) values. For 25 of 33 sensitizer datasets, > 96% of possible dpm combinations resulted in a calculated SI > 3. The lowest percentages of positive SIs were observed with weak allergens when, in the standard LLNA, the mean SIs would have been nearer to the threshold value of 3. The results indicate that moderate, strong and extreme allergens are more likely than weak allergens to be identified as sensitizers when group sizes of two mice are used within the rLLNA. It is concluded that a rLLNA with two mice/group would display decreased sensitivity and is inappropriate for use in hazard identification. Copyright (c) 2007 John Wiley & Sons, Ltd.
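The combinatorial re-analysis described above is easy to reproduce: for each dataset, the SI is computed for every pair of control dpm values against every pair of high-dose dpm values. The sketch below shows the idea with hypothetical dpm values for a single dataset.

```python
from itertools import combinations
from statistics import mean

# Hypothetical individual-animal dpm values (five mice per group).
control_dpm   = [900, 1100, 1000, 950, 1050]
high_dose_dpm = [3300, 2800, 3600, 2500, 3100]

# SI for every combination of two control and two high-dose animals (10 x 10 = 100).
sis = [mean(t) / mean(c)
       for c in combinations(control_dpm, 2)
       for t in combinations(high_dose_dpm, 2)]

positive = sum(si > 3 for si in sis)
print(f"{positive}/{len(sis)} combinations give SI > 3 "
      f"({100 * positive / len(sis):.0f}%)")
```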
From micro- to nanomagnetic dots: evolution of the eigenmode spectrum on reducing the lateral size
Carlotti, G.; Gubbiotti, G.; Madami, M.; Tacchi, S.; Hartmann, F.; Emmerling, M.; Kamp, M.; Worschech, L.
2014-07-01
Brillouin light scattering experiments and micromagnetic simulations have been exploited to investigate the spectrum of thermally excited magnetic eigenmodes in 10 nm-thick elliptical Permalloy dots, when the longer axis D is scaled down from about 1000 to 100 nm. It is shown that for D larger than about 200 nm the characteristics of the spin-wave eigenmodes are dominated by dipolar energy, while for D in the range of about 100 to 200 nm exchange energy effects cause qualitative and quantitative differences in the spin-wave spectrum. In this ‘mesoscopic’ regime, the usual classification scheme, involving one fundamental mode with large average magnetization and many other modes collected in families with specific symmetries, no longer holds. Rather, one finds the simultaneous presence of two modes with ‘fundamental’ character, i.e. with a significant and comparable value of the average dynamical magnetization: the former is at larger frequency and has its maximum amplitude at the dot's centre, while the latter occurs at lower frequency and is localized at the dot's edges. Interestingly, the maximum intensity swaps from the higher frequency mode to the lower frequency one, just when the dot size is reduced from about 200 to 100 nm. This is relevant in view of the exploitation of nanodots for the design of nanomagnetic devices with lateral dimensions in the above interval, such as memory cells, logic gates, reading heads and spin-torque oscillators.
Increased myocardial infarct size because of reduced coronary collateral blood flow in beagles
International Nuclear Information System (INIS)
Uemura, N.; Knight, D.R.; Shen, Y.T.; Nejima, J.; Cohen, M.V.; Thomas, J.X. Jr.; Vatner, S.F.
1989-01-01
Effects of permanent left circumflex coronary artery occlusion (CAO) were examined in conscious purebred beagles and mongrel dogs, instrumented with miniature left ventricular (LV) pressure gauges, wall thickness gauges in the ischemic zone, catheters in left atrium and aorta, and snares around the left circumflex coronary artery. Blood flow was measured using the radioactive microsphere technique before CAO and at 5 min, 1, 3, and 24 h after CAO. Although CAO reduced myocardial blood flow similarly in beagles and mongrels, significantly less (P < 0.05) recovery of myocardial blood flow was observed over the following 24-h period in beagles. Infarct size, as determined by triphenyltetrazolium chloride and expressed as percentage of area at risk, was larger (P < 0.05) in beagles (62.0 ± 5.1%) than mongrels (42.5 ± 4.2%). Thus beagles do not tolerate ischemia as well as mongrel dogs and possess fewer functional coronary collaterals resulting in larger infarcts after CAO
Olneck, Michael R.; Bills, David B.
1979-01-01
Birth order effects in brothers were found to derive from differences in family size. Effects for family size were found even with socioeconomic background controlled. Nor were family size effects explained by parental ability. The importance of unmeasured preferences or economic resources that vary across families was suggested. (Author/RD)
Optimum strata boundaries and sample sizes in health surveys using auxiliary variables.
Reddy, Karuna Garan; Khan, Mohammad G M; Khan, Sabiha
2018-01-01
Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., does not maximize the precision of the estimates of the variables of interest. Thus, one has to look for an efficient stratification design that divides the whole population into homogeneous strata and achieves higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers readily-available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated as a Mathematical Programming Problem (MPP) that seeks to minimize the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparison with other methods available in the literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results.
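The dynamic-programming search for optimum boundaries is not reproduced here, but the Neyman allocation step it relies on is simple to state: stratum sample sizes proportional to the product of stratum size and within-stratum standard deviation. The sketch below uses hypothetical stratum counts and standard deviations of an auxiliary variable.

```python
def neyman_allocation(N_h, S_h, n_total):
    """Neyman allocation: n_h proportional to N_h * S_h.
    Rounding may make the allocations sum to slightly more or less than n_total."""
    weights = [N * S for N, S in zip(N_h, S_h)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical strata: population counts and within-stratum SDs of haemoglobin.
N_h = [1200, 800, 500]
S_h = [1.1, 0.8, 1.6]
print(neyman_allocation(N_h, S_h, n_total=300))
```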
Directory of Open Access Journals (Sweden)
Elsa Tavernier
Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal level), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
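The retro-fitting idea for a continuous outcome can be sketched as follows: compute the sample size from an assumed standard deviation, then evaluate the power actually attained when the true standard deviation differs by a relative error. The effect size and standard deviation below are illustrative, not values from the study.

```python
import math
from scipy.stats import norm

def n_two_sample(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample comparison of means (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sd * z / delta) ** 2)

def real_power(n, delta, true_sd, alpha=0.05):
    """Power actually attained with n per arm when the true SD is true_sd."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / (true_sd * math.sqrt(2 / n)) - z_a)

n = n_two_sample(delta=5.0, sd=10.0)           # planned with the assumed SD
for rel_err in (-0.2, 0.0, 0.2, 0.5):          # relative error in the SD
    print(rel_err, round(real_power(n, 5.0, 10.0 * (1 + rel_err)), 2))
```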
Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S
2016-01-01
In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.
Directory of Open Access Journals (Sweden)
William Peterman
2016-03-01
Full Text Available In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.
Lewis, Hannah B; Ahern, Amy L; Solis-Trapala, Ivonne; Walker, Celia G; Reimann, Frank; Gribble, Fiona M; Jebb, Susan A
2015-07-01
Larger portion sizes (PS) are associated with greater energy intake (EI), but little evidence exists on the appetitive effects of PS reduction. This study investigated the impact of reducing breakfast PS on subsequent EI, postprandial gastrointestinal hormone responses, and appetite ratings. In a randomized crossover design (n = 33 adults; mean BMI 29 kg/m²), a compulsory breakfast was based on 25% of gender-specific estimated daily energy requirements; PS was reduced by 20% and 40%. EI was measured at an ad libitum lunch (240 min) and snack (360 min) and by weighed diet diaries until bed. Blood was sampled until lunch in 20 participants. Appetite ratings were measured using visual analogue scales. EI at lunch (control: 2,930 ± 203; 20% reduction: 2,853 ± 198; 40% reduction: 2,911 ± 179 kJ) and over the whole day except breakfast (control: 7,374 ± 361; 20% reduction: 7,566 ± 468; 40% reduction: 7,413 ± 417 kJ) did not differ. Postprandial PYY, GLP-1, GIP, insulin, and fullness profiles were lower and hunger, desire to eat, and prospective consumption higher following the 40% reduction compared to control. Appetite ratings profiles, but not hormone concentrations, were associated with subsequent EI. Smaller portions at breakfast led to reductions in gastrointestinal hormone secretion but did not affect subsequent energy intake, suggesting small reductions in portion size may be a useful strategy to constrain EI. © 2015 The Obesity Society.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-01-01
Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
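Estimators of this kind are widely used in meta-analysis when only the median, extremes or quartiles are reported. The sketch below shows one commonly cited Wan-type estimator for the case where the minimum, median, maximum and sample size are available; it is a sketch of the general approach, and the exact formulas proposed in the paper may differ.

```python
from scipy.stats import norm

def mean_sd_from_min_med_max(a, m, b, n):
    """Estimate the sample mean and SD from the minimum a, median m, maximum b
    and sample size n (Wan-type estimator; the paper's exact formulas may differ)."""
    mean = (a + 2 * m + b) / 4
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

# Hypothetical trial summary: min 10, median 25, max 52, n = 40.
print(mean_sd_from_min_med_max(a=10, m=25, b=52, n=40))
```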
DEFF Research Database (Denmark)
Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin
2017-01-01
Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM...... but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical...... opportunities for elucidating the origins and biogeochemical properties of FDOM...
Alternaria and Fusarium in Norwegian grains of reduced quality - a matched pair sample study
DEFF Research Database (Denmark)
Kosiak, B.; Torp, M.; Skjerve, E.
2004-01-01
The occurrence and geographic distribution of species belonging to the genera Alternaria and Fusarium in grains of reduced and of acceptable quality were studied post-harvest in 1997 and 1998. A total of 260 grain samples of wheat, barley and oats was analysed. The distribution of Alternaria and ...
Career satisfaction and retention of a sample of women physicians who work reduced hours.
Barnett, Rosalind C; Gareis, Karen C; Carr, Phyllis L
2005-03-01
To better understand the career satisfaction and factors related to retention of women physicians who work reduced hours and are in dual-earner couples in comparison to their full-time counterparts. Survey of a random sample of female physicians between 25 and 50 years of age working within 25 miles of Boston, whose names were obtained from the Board of Registration in Medicine in Massachusetts. Interviewers conducted a 60-minute face-to-face closed-ended interview after interviewees completed a 20-minute mailed questionnaire. Fifty-one full-time physicians and 47 reduced hours physicians completed the study; the completion rate was 49.5%. The two groups were similar in age, years as a physician, mean household income, number of children, and presence of an infant in the home. Reduced hours physicians in this sample had a different relationship to experiences in the family than full-time physicians. (1) When reduced hours physicians had low marital role quality, there was an associated lower career satisfaction; full-time physicians report high career satisfaction regardless of their marital role quality. (2) When reduced hours physicians had low marital role or parental role quality, there was an associated higher intention to leave their jobs than for full-time physicians; when marital role or parental role quality was high, there was an associated lower intention to leave their jobs than for full-time physicians. (3) When reduced hours physicians perceived that work interfering with family was high, there was an associated greater intention to leave their jobs that was not apparent for full-time physicians. Women physicians in this sample who worked reduced hours had stronger relationships between family experiences (marital and parental role quality and work interference with family) and professional outcomes than had their full-time counterparts. Both career satisfaction and intention to leave their employment are correlated with the quality of home life for
Moustakas, Aristides; Evans, Matthew R
2015-02-28
Plant survival is a key factor in forest dynamics and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points we use capture-mark-recapture methods both to allow us to account for missing individuals, and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years and size-dependent at early life stages and size independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time replicated datasets with small sample sizes and missing individuals without any loss of sample size, and including explanatory covariates.
Tang, Yongqiang
2015-01-01
A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates, and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
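The bounds derived in the paper are not reproduced here. As a rough orientation, the sketch below implements a textbook-style approximation for the per-group size in a negative binomial comparison of event rates with equal follow-up; the rates, dispersion and follow-up time are hypothetical.

```python
import math
from scipy.stats import norm

def nb_n_per_group(rate0, rate1, dispersion, follow_up, alpha=0.05, power=0.80):
    """Approximate n per group for comparing event rates under a negative binomial
    model with equal follow-up; a generic approximation, not the paper's bounds."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = (1 / (rate0 * follow_up) + dispersion) + (1 / (rate1 * follow_up) + dispersion)
    return math.ceil(z ** 2 * var / math.log(rate1 / rate0) ** 2)

# Hypothetical relapse rates per year, dispersion k = 0.5, two years of follow-up.
print(nb_n_per_group(rate0=1.0, rate1=0.7, dispersion=0.5, follow_up=2.0))
```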
Directory of Open Access Journals (Sweden)
Christopher Ryan Penton
2016-06-01
Full Text Available We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.
IN SITU NON-INVASIVE SOIL CARBON ANALYSIS: SAMPLE SIZE AND GEOSTATISTICAL CONSIDERATIONS.
Energy Technology Data Exchange (ETDEWEB)
WIELOPOLSKI, L.
2005-04-01
be sampled. It is highly desirable to assess properly the sampled volume for reporting the absolute value of the measured carbon. At the same time, increasing the number of detectors surrounding the NG can reduce error propagation. In the present work, only the volume irradiated by the neutrons was estimated. It should be pointed out that the carbon yield is also affected by the neutron energy spectrum, which changes with depth. Thus, all these factors must be considered carefully when evaluating the detectors' configuration and the resulting counting efficiency. In summary, the INS system is a novel approach for non-destructive carbon analysis in soil with very unique features. It should contribute to assessing soil carbon inventories and assist in understanding belowground carbon processes. The complexity of carbon distribution in soil requires special attention when calibrating the INS system, and a consensus should be developed on the most favorable way to report carbon abundance. Clearly, this will affect the calibration procedures.
Suarez Diez, M.; Saccenti, E.
2015-01-01
We investigated the effect of sample size and dimensionality on the performance of four algorithms (ARACNE, CLR, CORR, and PCLRC) when they are used for the inference of metabolite association networks. We report that as many as 100-400 samples may be necessary to obtain stable network estimations,
Walzer, Andreas; Schausberger, Peter
2013-01-01
Food limitation early in life may be compensated for by developmental plasticity resulting in accelerated development enhancing survival at the expense of small adult body size. However and especially for females in non-matching maternal and offspring environments, being smaller than the standard may incur considerable intra- and trans-generational costs. Here, we evaluated the costs of small female body size induced by food limitation early in life in the sexually size-dimorphic predatory mite Phytoseiulus persimilis. Females are larger than males. These predators are adapted to exploit ephemeral spider mite prey patches. The intra- and trans-generational effects of small maternal body size manifested in lower maternal survival probabilities, decreased attractiveness for males, and a reduced number and size of eggs compared to standard-sized females. The trans-generational effects of small maternal body size were sex-specific with small mothers producing small daughters but standard-sized sons. Small female body size apparently intensified the well-known costs of sexual activity because mortality of small but not standard-sized females mainly occurred shortly after mating. The disadvantages of small females in mating and egg production may be generally explained by size-associated morphological and physiological constraints. Additionally, size-assortative mate preferences of standard-sized mates may have rendered small females disproportionally unattractive mating partners. We argue that the sex-specific trans-generational effects were due to sexual size dimorphism - females are the larger sex and thus more strongly affected by maternal stress than the smaller males - and to sexually selected lower plasticity of male body size.
Walzer, Andreas; Schausberger, Peter
2013-01-01
Background Food limitation early in life may be compensated for by developmental plasticity resulting in accelerated development enhancing survival at the expense of small adult body size. However and especially for females in non-matching maternal and offspring environments, being smaller than the standard may incur considerable intra- and trans-generational costs. Methodology/Principal Findings Here, we evaluated the costs of small female body size induced by food limitation early in life in the sexually size-dimorphic predatory mite Phytoseiulus persimilis. Females are larger than males. These predators are adapted to exploit ephemeral spider mite prey patches. The intra- and trans-generational effects of small maternal body size manifested in lower maternal survival probabilities, decreased attractiveness for males, and a reduced number and size of eggs compared to standard-sized females. The trans-generational effects of small maternal body size were sex-specific with small mothers producing small daughters but standard-sized sons. Conclusions/Significance Small female body size apparently intensified the well-known costs of sexual activity because mortality of small but not standard-sized females mainly occurred shortly after mating. The disadvantages of small females in mating and egg production may be generally explained by size-associated morphological and physiological constraints. Additionally, size-assortative mate preferences of standard-sized mates may have rendered small females disproportionally unattractive mating partners. We argue that the sex-specific trans-generational effects were due to sexual size dimorphism – females are the larger sex and thus more strongly affected by maternal stress than the smaller males – and to sexually selected lower plasticity of male body size. PMID:24265745
Directory of Open Access Journals (Sweden)
Andreas Walzer
Full Text Available Food limitation early in life may be compensated for by developmental plasticity resulting in accelerated development enhancing survival at the expense of small adult body size. However and especially for females in non-matching maternal and offspring environments, being smaller than the standard may incur considerable intra- and trans-generational costs. Here, we evaluated the costs of small female body size induced by food limitation early in life in the sexually size-dimorphic predatory mite Phytoseiulus persimilis. Females are larger than males. These predators are adapted to exploit ephemeral spider mite prey patches. The intra- and trans-generational effects of small maternal body size manifested in lower maternal survival probabilities, decreased attractiveness for males, and a reduced number and size of eggs compared to standard-sized females. The trans-generational effects of small maternal body size were sex-specific with small mothers producing small daughters but standard-sized sons. Small female body size apparently intensified the well-known costs of sexual activity because mortality of small but not standard-sized females mainly occurred shortly after mating. The disadvantages of small females in mating and egg production may be generally explained by size-associated morphological and physiological constraints. Additionally, size-assortative mate preferences of standard-sized mates may have rendered small females disproportionally unattractive mating partners. We argue that the sex-specific trans-generational effects were due to sexual size dimorphism - females are the larger sex and thus more strongly affected by maternal stress than the smaller males - and to sexually selected lower plasticity of male body size.
Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws
Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.
2009-04-01
Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow and with each bump of the order of 10^2 to 10^3 m in length and 10^1 m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r^2 = 0.48) and that is approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W
On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.
Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui
2011-03-01
As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
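When no closed-form procedure is at hand, power for the Kruskal-Wallis test is often approximated by resampling the pilot groups; the sketch below illustrates that simulation approach under the pilot-based setting described above. It is not necessarily the formula-based method proposed in the article, and the pilot data are simulated placeholders.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)

def kw_power(pilot_groups, n_per_group, alpha=0.05, n_sim=2000):
    """Estimate power by resampling each pilot group (with replacement)
    to size n_per_group and applying the Kruskal-Wallis test."""
    hits = 0
    for _ in range(n_sim):
        sims = [rng.choice(g, size=n_per_group, replace=True) for g in pilot_groups]
        if kruskal(*sims).pvalue < alpha:
            hits += 1
    return hits / n_sim

# Hypothetical pilot data (e.g. cell counts from three treatment groups).
pilot = [rng.normal(10, 3, 8), rng.normal(12, 3, 8), rng.normal(15, 3, 8)]
print(kw_power(pilot, n_per_group=15))
```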
Evaluation of 1H NMR relaxometry for the assessment of pore size distribution in soil samples
Jaeger, F.; Bowe, S.; As, van H.; Schaumann, G.E.
2009-01-01
1H NMR relaxometry is used in earth science as a non-destructive and time-saving method to determine pore size distributions (PSD) in porous media with pore sizes ranging from nm to mm. This is a broader range than generally reported for results from X-ray computed tomography (X-ray CT) scanning,
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
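The kind of type I error check discussed above can be approximated by simulation. The compact sketch below implements only the standard, pre-planned blinded reassessment based on the pooled variance of the primary endpoint (the well-established case the authors do not question), not the worst-case rule involving a secondary endpoint; all design numbers are illustrative.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

def type1_error(n1=20, delta=0.5, n_sim=5000, alpha=0.05):
    """Two-arm trial with true effect 0 in the primary endpoint. After n1 per arm,
    the pooled (blinded) SD is used to recompute the final per-arm sample size."""
    rejections = 0
    for _ in range(n_sim):
        a1, b1 = rng.normal(0, 1.2, n1), rng.normal(0, 1.2, n1)
        blinded_sd = np.std(np.concatenate([a1, b1]), ddof=1)
        n_final = max(n1, int(np.ceil(2 * (blinded_sd * (1.96 + 0.84) / delta) ** 2)))
        a = np.concatenate([a1, rng.normal(0, 1.2, n_final - n1)])
        b = np.concatenate([b1, rng.normal(0, 1.2, n_final - n1)])
        if ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sim

print(type1_error())   # should stay close to the nominal 0.05
```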
Reduced aliasing artifacts using shaking projection k-space sampling trajectory
Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian
2014-03-01
Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources which degrade radial imaging quality. For a given fixed number of k-space projections, data distributions along radial and angular directions will influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory was proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts the projection alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. These two trajectories were also compared at different sampling frequencies. An SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons with different white noise levels show that these two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a way for undersampling reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.
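To make the geometry concrete, the sketch below generates conventional radial spokes and a variant in which alternate spokes are shifted along their own direction through the k-space centre. This is only a loose visual aid under my reading of "shifts the projection alternately along the k-space center"; the exact shaking-projection definition and shift amount in the paper may differ.

```python
import numpy as np

def radial_spokes(n_spokes=64, n_readout=128, shake=0.0):
    """Return (x, y) k-space sample coordinates for radial spokes. With shake > 0,
    alternate spokes are shifted along their own direction through the centre
    (a loose sketch of the 'shaking projection' idea; details may differ)."""
    kr = np.linspace(-0.5, 0.5, n_readout)
    pts = []
    for i in range(n_spokes):
        theta = np.pi * i / n_spokes
        offset = shake * (0.5 / n_readout) * (-1) ** i   # alternate +/- shift
        r = kr + offset
        pts.append(np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1))
    return np.concatenate(pts)

conventional = radial_spokes(shake=0.0)
shaking      = radial_spokes(shake=1.0)
print(conventional.shape, shaking.shape)
```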
Reduced aliasing artifacts using shaking projection k-space sampling trajectory
International Nuclear Information System (INIS)
Zhu Yan-Chun; Yang Wen-Chao; Wang Hao-Yu; Gao Song; Bao Shang-Lian; Du Jiang; Duan Chai-Jie
2014-01-01
Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources which degrade radial imaging quality. For a given fixed number of k-space projections, data distributions along radial and angular directions will influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory was proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts the projection alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. These two trajectories were also compared at different sampling frequencies. An SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons with different white noise levels show that these two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a way for undersampling reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts
Overall, John E; Tonidandel, Scott; Starbuck, Robert R
2006-01-01
Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
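The adjustment described in the conclusion reduces to a one-line calculation: inflate the dropout-free sample size by the number of subjects expected to drop out of a sample of that original size. A minimal sketch, with an illustrative dropout rate, follows.

```python
import math

def adjust_for_dropouts(n_no_dropout, dropout_rate):
    """Add to the dropout-free sample size the number of subjects expected
    to drop out of a sample of that original size."""
    return n_no_dropout + math.ceil(dropout_rate * n_no_dropout)

# Hypothetical design: 120 subjects needed without dropouts, 25% expected dropout.
print(adjust_for_dropouts(n_no_dropout=120, dropout_rate=0.25))  # -> 150
```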
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or more independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in binary probit regression between the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method is higher than with Firth's approach for small sample sizes; for larger sample sizes, the probability decreases and is nearly identical for the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes, whereas for larger sample sizes the RMSEs differ little. This means that Firth's estimators outperformed the MLE estimators.
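To make the separation problem concrete, the sketch below (my own illustration, not the study's simulation code; the probit coefficients and the single-covariate, range-overlap check for complete separation are assumptions) estimates how often separation arises at different sample sizes.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def separation_occurs(y, x):
        """Crude check for complete separation on a single covariate: the
        covariate ranges of the two response groups do not overlap."""
        if y.min() == y.max():
            return True  # only one response class observed
        return x[y == 1].min() > x[y == 0].max() or x[y == 1].max() < x[y == 0].min()

    def separation_chance(n, beta0=0.0, beta1=1.5, reps=2000):
        """Monte Carlo estimate of how often a simulated probit data set of
        size n with one standard-normal covariate is completely separated."""
        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            y = rng.binomial(1, norm.cdf(beta0 + beta1 * x))
            hits += separation_occurs(y, x)
        return hits / reps

    for n in (10, 20, 50, 100):
        print(n, separation_chance(n))  # the chance of separation shrinks with n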
Use of intradermal botulinum toxin to reduce sebum production and facial pore size.
Shah, Anil R
2008-09-01
To review the safety profile and subjective efficacy of intradermal botulinum toxin type A for facial pore size and sebum production. Retrospective analysis of 20 patients. Twenty consecutive patients who received a single application of intradermal botulinum toxin type A were examined: 17/20 patients noted an improvement in sebum production and a decrease in pore size at 1 month after injection. No complications were observed, and 17/20 patients were satisfied with the procedure. Preliminary data suggest that intradermal botulinum toxin may play a role in decreasing sebum production. Further quantitative study may be necessary to determine the effects of intradermal botulinum toxin on pore size.
Tang, Chia-Yu; Lai, Chang-Chi; Chiang, Shu-Chiung; Tseng, Kuo-Wei; Huang, Cheng-Hsiung
2015-09-01
We have previously reported that brief pressure overload of the left ventricle reduced myocardial infarct (MI) size. However, the role of protein kinase C (PKC) remains uncertain. In this study, we investigated whether pressure overload reduces MI size by activating PKC. MI was induced by a 40-minute occlusion of the left anterior descending coronary artery and a 3-hour reperfusion in anesthetized Sprague-Dawley rats. MI size was determined using triphenyl tetrazolium chloride staining. Brief pressure overload was achieved by two 10-minute partial snarings of the ascending aorta, raising the systolic left ventricular pressure 50% above the baseline value. Ischemic preconditioning was elicited by two 10-minute coronary artery occlusions and 10-minute reperfusions. Dimethyl sulfoxide (vehicle) or calphostin C (0.1 mg/kg, a specific inhibitor of PKC) was administered intravenously as pretreatment. The MI size, expressed as the percentage of the area at risk, was significantly reduced in the pressure overload group and the ischemic preconditioning group (19.0 ± 2.9% and 18.7 ± 3.0% vs. 26.1 ± 2.6% in the control group, p < 0.05). Pretreatment with calphostin C abolished the infarct-limiting effects of both pressure overload and ischemic preconditioning (25.2 ± 2.4% and 25.0 ± 2.3%). Thus, brief pressure overload of the left ventricle reduced MI size. Since calphostin C significantly limited the decrease of MI size, our results suggested that brief pressure overload reduces MI size via activation of PKC. Copyright © 2015. Published by Elsevier Taiwan.
Lee, Paul H
2016-08-01
This study aims to show that under several assumptions, in randomized controlled trials (RCTs), unadjusted, crude analysis will underestimate the Cohen's d effect size of the treatment, and an unbiased estimate of effect size can be obtained only by adjusting for all predictors of the outcome. Four simulations were performed to examine the effects of adjustment on the estimated effect size of the treatment and power of the analysis. In addition, we analyzed data from the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) study (older adults aged 65-94), an RCT with three treatment arms and one control arm. We showed that (1) the number of unadjusted covariates was associated with the effect size of the treatment; (2) the biasedness of effect size estimation was minimized if all covariates were adjusted for; (3) the power of the statistical analysis slightly decreased with the number of adjusted noise variables; and (4) exhaustively searching the covariates and noise variables adjusted for can lead to exaggeration of the true effect size. Analysis of the ACTIVE study data showed that the effect sizes adjusting for covariates of all three treatments were 7.39-24.70% larger than their unadjusted counterparts, whereas the effect size would be elevated by at most 57.92% by exhaustively searching the variables adjusted for. All covariates of the outcome in RCTs should be adjusted for, and if the effect of a particular variable on the outcome is unknown, adjustment will do more good than harm. Copyright © 2016 Elsevier Inc. All rights reserved.
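A minimal simulation in the spirit of this result (my own sketch; the treatment effect, covariate weights, and sample size are arbitrary assumptions) shows how unadjusted covariate variance dilutes the crude Cohen's d relative to the covariate-adjusted estimate.

    import numpy as np

    rng = np.random.default_rng(1)

    n = 400
    treat = rng.binomial(1, 0.5, n)                  # randomized treatment arm
    covs = rng.normal(size=(n, 3))                   # prognostic covariates
    outcome = 0.4 * treat + covs @ np.array([0.8, 0.5, 0.3]) + rng.normal(size=n)

    # Unadjusted ("crude") Cohen's d: raw group difference over the pooled SD.
    diff = outcome[treat == 1].mean() - outcome[treat == 0].mean()
    pooled_sd = np.sqrt((outcome[treat == 1].var(ddof=1) +
                         outcome[treat == 0].var(ddof=1)) / 2)
    d_crude = diff / pooled_sd

    # Adjusted d: treatment coefficient from OLS with covariates over the residual SD.
    X = np.column_stack([np.ones(n), treat, covs])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid_sd = np.std(outcome - X @ beta, ddof=X.shape[1])
    d_adjusted = beta[1] / resid_sd

    print(d_crude, d_adjusted)  # the crude estimate is attenuated by covariate noise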
International Nuclear Information System (INIS)
Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.
2010-01-01
Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2013-04-15
Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav
2018-04-01
Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) fractions were examined after suspension in water and in cell culture media. PM suspension of lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. Ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum as measured during 24 h.
DEFF Research Database (Denmark)
Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.
2008-01-01
OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...
Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M
2010-10-01
Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel) the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the coefficient of variation tended to decrease as the sample size increased, while in the homogeneous gelatin-gel it always remained constant at around 2.3% to 2.4%. The RVEs were cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area/image area and maximum air-cell height/image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
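The RVE criterion used above is operationally simple; the toy sketch below (my own stand-in for the image-based measurements, with an assumed lognormal cell stiffness and a ~2.3% measurement-noise floor) computes the coefficient of variation of a simulated apparent modulus over 25 replicates and looks for the size at which it levels off.

    import numpy as np

    rng = np.random.default_rng(2)

    def apparent_modulus(side_mm):
        """Toy apparent Young's modulus of a cellular specimen: averaging over
        more cells as the specimen grows reduces scatter, down to a small
        residual measurement/material noise floor (~2.3%)."""
        n_cells = max(int(side_mm) ** 3 // 100, 1)
        return rng.lognormal(0.0, 0.6, size=n_cells).mean() * rng.normal(1.0, 0.023)

    def coefficient_of_variation(side_mm, replicates=25):
        values = np.array([apparent_modulus(side_mm) for _ in range(replicates)])
        return 100 * values.std(ddof=1) / values.mean()

    for side_mm in (5, 10, 20, 45, 60):
        print(f"side {side_mm:>2} mm  CV = {coefficient_of_variation(side_mm):5.2f} %")
    # The RVE is the smallest size at which the CV converges to a constant value.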
Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.
2015-12-01
Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data lead to high-fidelity results, its processing can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when rapid response is needed in crisis management and disaster planning scenarios resulting from release of toxic material in the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest to produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from them by percentile. Using a 30-node SOM, with sampling every 20th percentile, we have been able to reduce 30 years of the Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap
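The selection of "typical days" can be sketched with ordinary clustering tools. Below, scikit-learn's k-means stands in for the 30-node SOM (a deliberate substitution, since the map topology is not needed to illustrate the percentile-based sampling), and the data dimensions are made-up placeholders.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)

    # Placeholder for ~30 years of daily predictor fields (days x flattened grid).
    days = rng.normal(size=(900, 500))

    # Group days into 30 representative weather patterns (k-means instead of a SOM).
    km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(days)

    typical_days = []
    for node in range(km.n_clusters):
        members = np.where(km.labels_ == node)[0]
        if len(members) == 0:
            continue
        dist = np.linalg.norm(days[members] - km.cluster_centers_[node], axis=1)
        order = members[np.argsort(dist)]            # rank member days by distance
        # Sample member days at every 20th percentile of the distance ranking,
        # mirroring the percentile sampling described above.
        idx = np.percentile(np.arange(len(order)), np.arange(0, 100, 20)).astype(int)
        typical_days.extend(order[np.unique(idx)].tolist())

    print(len(typical_days), "typical days selected out of", len(days))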
Directory of Open Access Journals (Sweden)
Peng-Cheng Yao
Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcode of globally distributed species should be increased to 11-15.
Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.
2017-10-01
Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first one is based on powder metallurgy using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable to generate significant grain size contrast and to control this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.
Bi, Ran; Liu, Peng
2016-03-31
RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils
International Nuclear Information System (INIS)
Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.
1977-01-01
Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentration of the main elements in sample 75081 does not change with the grain-size. Exceptions are Fe and Ti which decrease slightly and Al which increases slightly with the decrease in the grain-size. These changes in the composition in main elements suggest a decrease in Ilmenite and an increase in Anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes less than a factor of 2. Samples 72501 and 72461 are not yet analyzed for the main elements. (Auth.)
DEFF Research Database (Denmark)
Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.
2013-01-01
and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...
Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy
2016-01-01
Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that, magnetic resonance images (MRIs) with sparsity representation in a transformed domain, e.g. spatial finite-differences (FD), or discrete cosine transform (DCT), can be restored from undersampled k-space via applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full-system-response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT for denoising show reduced sampling errors compared to the direct MRI restoration methods via spatial FD, or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity that is implicit in MRIs is to explore the solution to MRI reconstruction after transformation from significantly undersampled k-space. The challenge, however, is that, since some incoherent artifacts result from the random undersampling, noise-like interference is added to the image with sparse representation. These recovery algorithms in the literature are not capable of fully removing the artifacts. It is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank via selecting smaller number of dominant singular values. The singular value threshold algorithm is performed
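The singular value thresholding step itself is compact. The sketch below is a generic soft-thresholding of singular values on a toy low-rank "image", not the paper's full compressed-sensing reconstruction; the threshold value is an assumption chosen to sit above the noise singular values.

    import numpy as np

    def singular_value_threshold(image, tau):
        """Soft-threshold the singular values of a 2D array, keeping only the
        dominant components and shrinking the rest to zero."""
        U, s, Vt = np.linalg.svd(image, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    # Toy example: a rank-1 "image" corrupted by noise-like artifacts.
    rng = np.random.default_rng(4)
    clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 5, 128)))
    noisy = clean + 0.1 * rng.normal(size=clean.shape)

    denoised = singular_value_threshold(noisy, tau=3.0)
    print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))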
Algorithm of Data Reduce in Determination of Aerosol Particle Size Distribution at Damps/C
International Nuclear Information System (INIS)
Muhammad-Priyatna; Otto-Pribadi-Ruslanto
2001-01-01
An analysis was performed of the data-reduction algorithm of the Damps/C (Differential Mobility Particle Sizer with Condensation Particle Counter) system, which determines the aerosol particle size distribution over the diameter range 0.01 μm to 1 μm. The Damps/C system consists of hardware and software: the hardware measures the electrical mobilities of the aerosol particles, and the software converts these mobilities into a particle size distribution by diameter. Particle mobility and particle diameter are linked through the electric field, and this relationship is the basis of the data-reduction program that converts particle mobility into particle diameter. The analysis gives a transfer function value, Ω, of 0.5. The data-reduction program converts the mobility basis into the diameter basis with corrections for counting efficiency, the transfer function value, and multiply charged particles. (author)
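The mobility-to-diameter conversion at the core of the data reduction can be sketched with the standard textbook relation (not the system's actual code); the air viscosity, mean free path, and Cunningham slip-correction constants below are assumed values for near-standard conditions, and multiple charging enters only through the n_charges argument.

    import math

    E_CHARGE = 1.602e-19   # C, elementary charge
    MU_AIR = 1.81e-5       # Pa*s, dynamic viscosity of air (assumed ~20 degC)
    MFP_AIR = 68e-9        # m, mean free path of air (assumed ~1 atm, 20 degC)

    def cunningham(d):
        """Cunningham slip correction for particle diameter d (m)."""
        kn = 2 * MFP_AIR / d
        return 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

    def mobility(d, n_charges=1):
        """Electrical mobility (m^2 V^-1 s^-1) of a particle of diameter d (m)."""
        return n_charges * E_CHARGE * cunningham(d) / (3 * math.pi * MU_AIR * d)

    def diameter_from_mobility(z, n_charges=1, d0=50e-9, tol=1e-13):
        """Invert the mobility-diameter relation by fixed-point iteration."""
        d = d0
        for _ in range(500):
            d_new = n_charges * E_CHARGE * cunningham(d) / (3 * math.pi * MU_AIR * z)
            if abs(d_new - d) < tol:
                break
            d = d_new
        return d

    z = mobility(100e-9)                    # mobility of a 100 nm singly charged particle
    print(diameter_from_mobility(z) * 1e9)  # recovers ~100 (nm)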
Fencl, Martin; Jörg, Rieckermann; Vojtěch, Bareš
2015-04-01
Commercial microwave links (MWL) are point-to-point radio systems which are used in backhaul networks of cellular operators. For several years, they have been suggested as rainfall sensors complementary to rain gauges and weather radars, because, first, they operate at frequencies where rain drops represent significant source of attenuation and, second, cellular networks almost completely cover urban and rural areas. Usually, path-average rain rates along a MWL are retrieved from the rain-induced attenuation of received MWL signals with a simple model based on a power law relationship. The model is often parameterized based on the characteristics of a particular MWL, such as frequency, polarization and the drop size distribution (DSD) along the MWL. As information on the DSD is usually not available in operational conditions, the model parameters are usually considered constant. Unfortunately, this introduces bias into rainfall estimates from MWL. In this investigation, we propose a generic method to eliminate this bias in MWL rainfall estimates. Specifically, we search for attenuation statistics which makes it possible to classify rain events into distinct groups for which same power-law parameters can be used. The theoretical attenuation used in the analysis is calculated from DSD data using T-Matrix method. We test the validity of our approach on observations from a dedicated field experiment in Dübendorf (CH) with a 1.85-km long commercial dual-polarized microwave link transmitting at a frequency of 38 GHz, an autonomous network of 5 optical distrometers and 3 rain gauges distributed along the path of the MWL. The data is recorded at a high temporal resolution of up to 30s. It is further tested on data from an experimental catchment in Prague (CZ), where 14 MWLs, operating at 26, 32 and 38 GHz frequencies, and reference rainfall from three RGs is recorded every minute. Our results suggest that, for our purpose, rain events can be nicely characterized based on
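The retrieval model referred to above is a one-line inversion of the k-R power law; the sketch below uses placeholder coefficients for ~38 GHz (assumed, not taken from the study) purely to show the calculation whose parameterization the event classification is meant to improve.

    def rain_rate_from_attenuation(path_attenuation_db, path_length_km, a, b):
        """Invert the k-R power law: specific attenuation k = a * R**b (dB/km),
        so the path-average rain rate is R = (A / (a * L)) ** (1 / b) in mm/h."""
        specific_attenuation = path_attenuation_db / path_length_km
        return (specific_attenuation / a) ** (1.0 / b)

    # Illustrative parameters only; in practice a and b depend on frequency,
    # polarization and the drop size distribution, which is the source of the
    # bias discussed above.
    a_38ghz, b_38ghz = 0.39, 0.93
    print(rain_rate_from_attenuation(8.0, 1.85, a_38ghz, b_38ghz))  # ~13 mm/h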
Increasing age and tear size reduce rotator cuff repair healing rate at 1 year.
Rashid, Mustafa S; Cooper, Cushla; Cook, Jonathan; Cooper, David; Dakin, Stephanie G; Snelling, Sarah; Carr, Andrew J
2017-12-01
Background and purpose - There is a need to understand the reasons why a high proportion of rotator cuff repairs fail to heal. Using data from a large randomized clinical trial, we evaluated age and tear size as risk factors for failure of rotator cuff repair. Patients and methods - Between 2007 and 2014, 65 surgeons from 47 hospitals in the National Health Service (NHS) recruited 447 patients with atraumatic rotator cuff tendon tears to the United Kingdom Rotator Cuff Trial (UKUFF) and 256 underwent rotator cuff repair. Cuff integrity was assessed by imaging in 217 patients, at 12 months post-operation. Logistic regression analysis was used to determine the influence of age and intra-operative tear size on healing. Hand dominance, sex, and previous steroid injections were controlled for. Results - The overall healing rate was 122/217 (56%) at 12 months. Healing rate decreased with increasing tear size (small tears 66%, medium tears 68%, large tears 47%, and massive tears 27% healed). The mean age of patients with a healed repair was 61 years compared with 64 years for those with a non-healed repair. Mean age increased with larger tear sizes (small tears 59 years, medium tears 62 years, large tears 64 years, and massive tears 66 years). Increasing age was an independent factor that negatively influenced healing, even after controlling for tear size. Only massive tears were an independent predictor of non-healing, after controlling for age. Interpretation - Although increasing age and larger tear size are both risks for failure of rotator cuff repair healing, age is the dominant risk factor.
Lewis, HB; Ahern, AL; Solis-Trapala, I; Walker, CG; Reimann, F; Gribble, FM; Jebb, SA
2015-01-01
OBJECTIVE: Larger portion sizes (PS) are associated with greater energy intake (EI), but little evidence exists on the appetitive effects of PS reduction. This study investigated the impact of reducing breakfast PS on subsequent EI, postprandial gastrointestinal hormone responses, and appetite ratings. METHODS: In a randomized crossover design (n = 33 adults; mean BMI 29 kg/m²), a compulsory breakfast was based on 25% of gender-specific estimated daily energy requirements; PS was reduced b...
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
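One of the two summary-statistic classes used by PopSizeABC, the folded allele frequency spectrum, is easy to compute from unphased, unpolarized genotypes. The sketch below is a minimal version of my own on toy data, not the package's code.

    import numpy as np

    def folded_afs(genotypes):
        """Folded allele frequency spectrum from a 0/1/2 genotype matrix
        (rows = unphased diploid individuals, columns = SNPs). Entry i counts
        SNPs whose minor allele occurs i times among the 2n chromosomes."""
        n_chrom = 2 * genotypes.shape[0]
        alt_counts = genotypes.sum(axis=0)
        minor_counts = np.minimum(alt_counts, n_chrom - alt_counts)  # fold
        return np.bincount(minor_counts, minlength=n_chrom // 2 + 1)

    # Toy data: 25 diploid genomes, 1000 SNPs with random allele frequencies.
    rng = np.random.default_rng(5)
    freqs = rng.uniform(0.01, 0.99, size=1000)
    geno = rng.binomial(2, freqs, size=(25, 1000))
    print(folded_afs(geno))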
Proton pump inhibitors reduce the size and acidity of the acid pocket in the stomach
Rohof, Wout O.; Bennink, Roelof J.; Boeckxstaens, Guy E.
2014-01-01
The gastric acid pocket is believed to be the reservoir from which acid reflux events originate. Little is known about how changes in position, size, and acidity of the acid pocket contribute to the therapeutic effect of proton pump inhibitors (PPIs) in patients with gastroesophageal reflux disease
Motor-Reducer Sizing through a MATLAB-Based Graphical Technique
Giberti, H.; Cinquemani, S.
2012-01-01
The design of the drive system for an automatic machine and its correct sizing is a very important competence for an electrical or mechatronic engineer. This requires knowledge that crosses the fields of electrical engineering, electronics and mechanics, as well as the skill to choose commercial components based upon their technical documentation.…
van Koningsbruggen, G.M.; Veling, H.P.; Stroebe, Wolfgang; Aarts, Henk
2014-01-01
Objective: Palatable food, such as sweets, contains properties that automatically trigger the impulse to consume it even when people have goals or intentions to refrain from consuming such food. We compared the effectiveness of two interventions in reducing the portion size of palatable food that
Jalbert, Sarah Kuck; Rhodes, William; Flygare, Christopher; Kane, Michael
2010-01-01
Probation and parole professionals argue that supervision outcomes would improve if caseloads were reduced below commonly achieved standards. Criminal justice researchers are skeptical because random assignment and strong observation studies have failed to show that criminal recidivism falls with reductions in caseload sizes. One explanation is…
DEFF Research Database (Denmark)
Knoblauch, C.; Jørgensen, BB; Harder, J.
1999-01-01
The numbers of sulfate reducers in two Arctic sediments within situ temperatures of 2.6 and -1.7 degrees C were determined. Most-probable-number counts were higher at 10 degrees C than at 20 degrees C, indicating the predominance of a psychrophilic community. Mean specific sulfate reduction rates...... of 19 isolated psychrophiles were compared to corresponding rates of 9 marine, mesophilic sulfate-reducing bacteria. The results indicate that, as a physiological adaptation to the permanently cold Arctic environment, psychrophilic sulfate reducers have considerably higher specific metabolic rates than...... their mesophilic counterparts at similarly low temperatures....
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
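The "random chance" scenario described in this simulation study can be reproduced in a few lines. The sketch below is my own simplification with made-up code counts and observation probabilities, not the author's simulation code: sources are drawn at random until every code has been observed at least once.

    import numpy as np

    rng = np.random.default_rng(6)

    def sources_until_saturation(n_codes, mean_code_prob, max_sources=5000):
        """Draw information sources at random and count how many are needed
        before every code in the population has been observed once."""
        code_probs = np.clip(rng.normal(mean_code_prob, 0.05, n_codes), 0.01, 0.99)
        seen = np.zeros(n_codes, dtype=bool)
        for n_sources in range(1, max_sources + 1):
            seen |= rng.random(n_codes) < code_probs  # codes held by this source
            if seen.all():
                return n_sources
        return max_sources

    runs = [sources_until_saturation(n_codes=30, mean_code_prob=0.2) for _ in range(200)]
    print(np.mean(runs), np.percentile(runs, 95))  # typical and worst-case sample sizes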
Note: A new method for directly reducing the sampling jitter noise of the digital phasemeter
Liang, Yu-Rong
2018-03-01
The sampling jitter noise is one non-negligible noise source of the digital phasemeter used for space gravitational wave detection missions. This note provides a new method for directly reducing the sampling jitter noise of the digital phasemeter, by adding a dedicated signal of which the frequency, amplitude, and initial phase should be pre-set. In contrast to the phase correction using the pilot-tone in the work of Burnett, Gerberding et al., Liang et al., Ales et al., Gerberding et al., and Ware et al. [M.Sc. thesis, Luleå University of Technology, 2010; Classical Quantum Gravity 30, 235029 (2013); Rev. Sci. Instrum. 86, 016106 (2015); Rev. Sci. Instrum. 86, 084502 (2015); Rev. Sci. Instrum. 86, 074501 (2015); and Proceedings of the Earth Science Technology Conference (NASA, USA, 2006)], the new method is intrinsically an additive-noise-suppression approach. The experimental results validate that the new method directly reduces the sampling jitter noise without data post-processing and provides the same phase measurement noise level (10^-6 rad/Hz^(1/2) at 0.1 Hz) as the pilot-tone correction.
Directory of Open Access Journals (Sweden)
Khewal Bhupendra Kesur
2013-01-01
Full Text Available This paper examines the application of Latin Hypercube Sampling (LHS and Antithetic Variables (AVs to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AV allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined, one where stratification is applied to headways and routing decisions of individual vehicles and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AV, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators as fewer simulations are required to achieve a fixed level of precision with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
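Latin Hypercube Sampling itself takes only a few lines; the sketch below is a generic implementation plus an illustrative inverse-CDF mapping to vehicle headways (the mean headway and the two-dimensional setup are assumptions, not the paper's configuration).

    import numpy as np

    def latin_hypercube(n_samples, n_dims, rng=None):
        """Plain Latin Hypercube Sample on the unit hypercube: each dimension is
        split into n_samples equal strata, one point is drawn per stratum, and
        the strata are randomly paired across dimensions."""
        rng = rng or np.random.default_rng()
        within = rng.random((n_samples, n_dims))     # position inside each stratum
        strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
        return (strata + within) / n_samples

    # Example: stratified exponential headways and uniform routing draws.
    lhs = latin_hypercube(100, 2, np.random.default_rng(7))
    headways = -4.0 * np.log(1 - lhs[:, 0])          # inverse CDF, mean 4 s
    route_draws = lhs[:, 1]
    print(headways.mean(), headways.std())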
van Rijnsoever, F.J.
2015-01-01
This paper explores the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
van Rijnsoever, Frank J.
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in
Heymann, D.; Lakatos, S.; Walton, J. R.
1973-01-01
Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.
Uijlenhoet, R.; Porrà, J.M.; Sempere Torres, D.; Creutin, J.D.
2006-01-01
A stochastic model of the microstructure of rainfall is used to derive explicit expressions for the magnitude of the sampling fluctuations in rainfall properties estimated from raindrop size measurements in stationary rainfall. The model is a marked point process, in which the points represent the
Tang, Yongqiang
2017-05-25
We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
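The flavour of such a calculation can be illustrated with a standard normal-approximation formula for the log rate ratio. This is a generic sketch, not the authors' exact formulae, which additionally handle between-subject variation in follow-up and back-calculated dispersions; all numerical inputs below are made up.

    from math import log, ceil
    from scipy.stats import norm

    def nb_rate_ni_sample_size(rate_c, rate_t, disp_c, disp_t, follow_up,
                               ni_margin_ratio, alpha=0.025, power=0.8):
        """Per-group sample size for a noninferiority comparison of two negative
        binomial event rates on the log rate-ratio scale, using the common
        approximation Var(log rate) ~ (1/(t*mu) + dispersion) / n per group."""
        z = norm.ppf(1 - alpha) + norm.ppf(power)
        var_unit = (1 / (follow_up * rate_t) + disp_t) + (1 / (follow_up * rate_c) + disp_c)
        effect = log(rate_t / rate_c) - log(ni_margin_ratio)
        return ceil(z ** 2 * var_unit / effect ** 2)

    # Illustrative inputs: equal true rates of 0.8 events/year, dispersion 0.4,
    # mean follow-up of 1.5 years, noninferiority margin of a 1.25 rate ratio.
    print(nb_rate_ni_sample_size(0.8, 0.8, 0.4, 0.4, 1.5, 1.25))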
Enhancement of coercivity with reduced grain size in CoCrPt film grown by pulsed laser deposition
International Nuclear Information System (INIS)
Liang, Q.; Hu, X.F.; Li, H.Q.; He, X.X.; Wang, Xiaoru; Zhang, W.
2006-01-01
We report pulsed laser deposition (PLD) growth of a VMn/CoCrPt bilayer with a magnetic coercivity (Hc) of 2.2 kOe and a grain size of 12 nm. The effects of the VMn underlayer on the magnetic properties of the CoCrPt layer were studied. The coercivity, Hc, and squareness, S, of the VMn/CoCrPt bilayer depend on the thickness of the VMn. The grain size of the CoCrPt film can also be modified by the laser parameters: a high laser fluence used for CoCrPt deposition produces a smaller grain size. The enhanced Hc and reduced grain size in VMn/CoCrPt are explained by more pronounced surface phase segregation during deposition at high laser fluence.
Basic distribution free identification tests for small size samples of environmental data
International Nuclear Information System (INIS)
Federico, A.G.; Musmeci, F.
1998-01-01
Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain few data points, and the assumption of normal distributions is often unrealistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on a massive use of CPU resources. The paper reviews the problem and introduces the feasibility of two non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-...
DEFF Research Database (Denmark)
Vestergaard, Peter; Rønn, Regin; Christensen, Søren
2001-01-01
in soils amended with the large pieces on nine out of 10 occasions. Microbial biomass measured as SIR was significantly higher in soils with maize than in those amended with barley, but no effect of particle size was observed (three-way ANOVA, P... material, but significantly higher numbers were found in soil with finely-ground maize than in soil with large pieces (two-way ANOVA, P... barley (three-way ANOVA, P
Flaw-size measurement in a weld samples by ultrasonic frequency analysis
International Nuclear Information System (INIS)
Adler, L.; Cook, K.V.; Whaley, H.L. Jr.; McClung, R.W.
1975-01-01
An ultrasonic frequency-analysis technique was developed and applied to characterize flaws in an 8-in. (203-mm) thick heavy-section steel weld specimen. The technique uses a multitransducer system. The spectrum of the received broad-band signal is frequency analyzed at two different receivers for each of the flaws. From the two spectra, the size and orientation of the flaw are determined using an analytic model proposed earlier. (auth)
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% survival over four years, then sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated because of the difficulty in attaining reliable estimates. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
Influence sample sizing of citrus hystrix essential oil from hydrodistillation extraction
Yahya, A.; Amadi, I.; Hashib, S. A.; Mustapha, F. A.
2018-03-01
Essential oil was extracted from kaffir lime leaves through hydrodistillation. The objective of this study was to quantify the oil production rate by identifying the influence of the particle size of the kaffir lime leaves. Kaffir lime leaves were ground and separated with a siever into 90, 150 and 300 μm fractions, plus the remaining unsieved leaf material. Mean essential oil yields of 0.87, 0.52, 0.41 and 0.3% were obtained, with the 90 μm grind giving the highest yield. It can therefore be concluded that, in quantifying the oil production rate, particle size clearly affects the amount of oil yield. GC-MS analysis of the kaffir lime essential oil identified 38 compounds. Some of the major compounds of kaffir lime leaf oil were detected while others were not, possibly because the oil underwent thermal degradation and consequently lost some significant compounds at the controlled temperature.
Mishra, S; Chawla, D; Agarwal, R; Deorari, A K; Paul, V K; Bhutani, V K
2009-12-01
We determined the usefulness of transcutaneous bilirubinometry to decrease the need for blood sampling to assay serum total bilirubin (STB) in the management of jaundiced healthy Indian neonates. Newborns of ≥35 weeks' gestation with clinical evidence of jaundice were enrolled in an institutionally approved randomized clinical trial. The severity of hyperbilirubinaemia was determined by two non-invasive methods: i) protocol-based visual assessment of bilirubin (VaB) and ii) transcutaneous bilirubin (TcB) determination (BiliCheck). By random allocation, either method was used to decide the need for blood sampling, which was defined to be present if the STB assessed by the allocated method exceeded 80% of hour-specific threshold values for phototherapy (2004 AAP Guidelines). A total of 617 neonates were randomized to either TcB (n = 314) or VaB (n = 303) groups with comparable gestation, birth weight and postnatal age. Need for blood sampling to assay STB was 34% lower (95% CI: 10% to 51%) in the TcB group compared with the VaB group (17.5% vs 26.4% assessments; risk difference: -8.9%, 95% CI: -2.4% to -15.4%; p = 0.008). Routine use of transcutaneous bilirubinometry compared with systematic visual assessment of bilirubin significantly reduced the need for blood sampling to assay STB in jaundiced term and late-preterm neonates. (ClinicalTrials.gov number, NCT00653874).
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
International Nuclear Information System (INIS)
John L. Bowen; Rowena Gonzalez; David S. Shafer
2001-01-01
As part of the preliminary site characterization conducted for Project 57, soils samples were collected for separation into several size-fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it would represent anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculation and corrective action-level determinations for future land-use scenarios at the site
Directory of Open Access Journals (Sweden)
Aidan G. O’Keeffe
2017-12-01
Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated where the change of interest is specified as difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test) in a variety of scenarios and the method is applied to a real example in neurosurgery. Results The method attained a nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
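The key step of the approach, recovering the log-scale variance from the median and the untransformed variance and then applying a standard two-group calculation on the log scale, can be sketched as follows. This is a simplified normal-approximation version of my own, assuming a common log-scale variance derived from the first group's median; the paper's method may differ in its details.

    from math import log, sqrt, exp, ceil
    from scipy.stats import norm

    def lognormal_sigma2(median, variance):
        """Log-scale variance sigma^2 of a log-normal variable with median m and
        untransformed variance v: v = (exp(sigma^2) - 1) * exp(2*mu + sigma^2),
        mu = log(m); solve the quadratic in s = exp(sigma^2)."""
        mu = log(median)
        s = (1 + sqrt(1 + 4 * variance * exp(-2 * mu))) / 2
        return log(s)

    def n_per_group(median1, median2, variance, alpha=0.05, power=0.9):
        """Per-group n for a two-sample comparison of log-transformed outcomes,
        with the difference of interest specified as a difference in medians."""
        sigma2 = lognormal_sigma2(median1, variance)   # assumed common variance
        delta = log(median2) - log(median1)            # log-scale mean difference
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * sigma2 * z ** 2 / delta ** 2)

    # Illustrative: medians of 10 vs 7 (say, days), untransformed variance 36.
    print(n_per_group(10, 7, 36))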
International Nuclear Information System (INIS)
Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.
2004-01-01
Complete text of publication follows. Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August 2003 in Hungary. Sampling was performed at two sites simultaneously: in Budapest (urban site) and at K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling with 24-hour duration. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of the elements S, Si, Ca, W, Zn, Pb and Fe were investigated at K-puszta and in Budapest. Average rates (shown in Table 1) of the elemental concentrations were calculated for each stage (in %) from the obtained distributions. The elements can be grouped into two parts on the basis of these data. The majority of the particles containing Fe, Si, Ca (and Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentrations in Budapest than at K-puszta (Fig. 1). The second group consists of S, Pb and W. The majority of these elements were found in the 0.25-1 μm size range, at much higher concentrations in Budapest than at K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to the above-mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)
Deletion of Irs2 causes reduced kidney size in mice: role for inhibition of GSK3beta?
LENUS (Irish Health Repository)
Carew, Rosemarie M.
2010-07-06
Abstract Background Male Irs2-/- mice develop fatal type 2 diabetes at 13-14 weeks. Defects in neuronal proliferation, pituitary development and photoreceptor cell survival manifest in Irs2-/- mice. We identify retarded renal growth in male and female Irs2-/- mice, independent of diabetes. Results Kidney size and kidney:body weight ratio were reduced by approximately 20% in Irs2-/- mice at postnatal day 5, and this reduction was maintained into maturity. Reduced glomerular number but similar glomerular density was detected in Irs2-/- kidney compared to wild-type, suggesting intact global kidney structure. Analysis of insulin signalling revealed renal-specific upregulation of PKBβ/Akt2, hyperphosphorylation of GSK3β and concomitant accumulation of β-catenin in Irs2-/- kidney. Despite this, no significant upregulation of β-catenin targets was detected. Kidney-specific increases in Yes-associated protein (YAP), a key driver of organ size, were also detected in the absence of Irs2. YAP phosphorylation on its inhibitory site Ser127 was also increased, with no change in the levels of YAP-regulated genes, suggesting that overall YAP activity was not increased in Irs2-/- kidney. Conclusions In summary, deletion of Irs2 causes reduced kidney size early in mouse development. Compensatory mechanisms such as increased β-catenin and YAP levels failed to overcome this developmental defect. These data point to Irs2 as an important novel mediator of kidney size.
Thiemermann, C.; Thomas, G. R.; Vane, J. R.
1989-01-01
1. Defibrotide, a single-stranded polydeoxyribonucleotide obtained from bovine lungs, has significant anti-thrombotic, pro-fibrinolytic and prostacyclin-stimulating properties. 2. The present study was designed to evaluate the effects of defibrotide on infarct size and regional myocardial blood flow in a rabbit model of myocardial ischaemia and reperfusion. 3. Defibrotide (32 mg kg-1 bolus + 32 mg kg-1 h-1, i.v.) either with or without co-administration of indomethacin (5 mg kg-1 x 2, i.v.) was administered 5 min after occlusion of the left anterior-lateral coronary artery and continued during the 60 min occlusion and subsequent 3 h reperfusion periods. 4. Defibrotide significantly attenuated the ischaemia-induced ST-segment elevation and abolished the reperfusion-related changes (R-wave reduction and Q-wave development) in the electrocardiogram. In addition, defibrotide significantly improved myocardial blood flow in normal and in ischaemic, but not in infarcted sections of the heart. The improvement in blood flow in normal perfused myocardium, but not in the ischaemic area was prevented by indomethacin. 5. Although the area at risk was similar in all animal groups studied, defibrotide treatment resulted in a 51% reduction of infarct size. Indomethacin treatment abolished the reduction of infarct size seen with defibrotide alone. 6. The data demonstrate a considerable cardioprotective effect of defibrotide in the reperfused ischaemic rabbit myocardium. This effect may be related, at least in part, to a stimulation of endogenous prostaglandin formation. Other possible mechanisms are discussed. PMID:2758223
Sample size and saturation in PhD studies using qualitative interviews
Mason, Mark
2010-01-01
Sample sizes in qualitative research depend on a range of factors. The guiding principle, however, should always be saturation with respect to the research topic at hand. This question, which many authors have addressed, continues to be hotly debated and, according to some, is still poorly understood. For my own investigation I drew a sample of PhD studies that used qualitative interviews as the data collection method from theses.com and ...
DEFF Research Database (Denmark)
Jensen, J S; Borch-Johnsen, K; Deckert, T
1995-01-01
The pathophysiologic mechanism behind microalbuminuria, a potential atherosclerotic risk factor, was explored by measuring fractional clearances of four endogenous plasma proteins of different size and electric charge (albumin, beta 2-microglobulin, immunoglobulin G, and immunoglobulin G4). Twenty-eight clinically healthy individuals with microalbuminuria, defined as a urinary albumin excretion of 6.6-150 micrograms min-1, and 60 matched control subjects were studied. Fractional immunoglobulin G clearance was higher (geometric means (95% confidence intervals)) 3.0 (2.3-3.9) x 10(-6), n = 28, vs. 2.1 (1...
High-intensity training reduces intermittent hypoxia-induced ER stress and myocardial infarct size.
Bourdier, Guillaume; Flore, Patrice; Sanchez, Hervé; Pepin, Jean-Louis; Belaidi, Elise; Arnaud, Claire
2016-01-15
Chronic intermittent hypoxia (IH) is described as the major detrimental factor leading to cardiovascular morbimortality in obstructive sleep apnea (OSA) patients. OSA patients exhibit increased infarct size after a myocardial event, and previous animal studies have shown that chronic IH could be the main mechanism. Endoplasmic reticulum (ER) stress plays a major role in the pathophysiology of cardiovascular disease. High-intensity training (HIT) exerts beneficial effects on the cardiovascular system. Thus, we hypothesized that HIT could prevent IH-induced ER stress and the increase in infarct size. Male Wistar rats were exposed to 21 days of IH (21-5% fraction of inspired O2, 60-s cycle, 8 h/day) or normoxia. After 1 wk of IH alone, rats were submitted daily to both IH and HIT (2 × 24 min, 15-30m/min). Rat hearts were either rapidly frozen to evaluate ER stress by Western blot analysis or submitted to an ischemia-reperfusion protocol ex vivo (30 min of global ischemia/120 min of reperfusion). IH induced cardiac proapoptotic ER stress, characterized by increased expression of glucose-regulated protein kinase 78, phosphorylated protein kinase-like ER kinase, activating transcription factor 4, and C/EBP homologous protein. IH-induced myocardial apoptosis was confirmed by increased expression of cleaved caspase-3. These IH-associated proapoptotic alterations were associated with a significant increase in infarct size (35.4 ± 3.2% vs. 22.7 ± 1.7% of ventricles in IH + sedentary and normoxia + sedentary groups, respectively, P < 0.05). HIT prevented both the IH-induced proapoptotic ER stress and increased myocardial infarct size (28.8 ± 3.9% and 21.0 ± 5.1% in IH + HIT and normoxia + HIT groups, respectively, P = 0.28). In conclusion, these findings suggest that HIT could represent a preventive strategy to limit IH-induced myocardial ischemia-reperfusion damages in OSA patients. Copyright © 2016 the American Physiological Society.
Energy Technology Data Exchange (ETDEWEB)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)
2015-07-07
Recent research has shown that the yield strength of metals increases steeply as sample size decreases. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
Selection for number of live piglets at five days of age increased litter size and reduced mortality
DEFF Research Database (Denmark)
Nielsen, Bjarne; Madsen, Per; Henryon, Mark
2012-01-01
The heritabilities of maternal effect on litter size were 0.079 and 0.095 in Landrace and Yorkshire. The heritabilities of maternal effect on piglet-mortality rates were 0.069 and 0.082 in Landrace and Yorkshire. The genetic correlation between litter size and mortality rate was unfavourable; and the estimates...... genetic gain has reduced the piglet mortality rate by 4 %-points in Landrace and Yorkshire from 2004 to 2010. The genetic gain was confirmed by decreased phenotypic annual mortality rates in the breeding and multiplier herds.
International Nuclear Information System (INIS)
Valalaki, K; Nassiopoulou, A G; Vouroutzis, N
2016-01-01
The thermoelectric properties of p-type polycrystalline silicon thin films deposited by low pressure chemical vapour deposition (LPCVD) were accurately determined at room temperature and the thermoelectric figure of merit was deduced as a function of film thickness, ranging from 100 to 500 nm. The effect of film thickness on their thermoelectric performance is discussed. A more than threefold increase in the thermoelectric figure of merit of the 100 nm thick polysilicon film was observed compared to the 500 nm thick film, reaching a value as high as 0.033. This enhancement is mainly the result of the smaller grain size in the thinner films. With the decrease in grain size the resistivity of the films is increased twofold and the electrical conductivity decreased; however, the Seebeck coefficient is increased by 30% and the thermal conductivity is decreased eightfold, the latter being mainly at the origin of the increased figure of merit of the 100 nm film. Our experimental results were compared to known theoretical models and the possible mechanisms involved are presented and discussed. (paper)
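The figure of merit discussed above follows the standard definition ZT = S²σT/κ, which is easy to sanity-check numerically. The input values in the snippet below are illustrative round numbers, not the film measurements reported in the paper; only the order of magnitude of the resulting ZT (a few times 10⁻²) matches the abstract.

```python
def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K=300.0):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_mK

# Illustrative values only (not the paper's measurements): a Seebeck coefficient of
# ~300 uV/K, electrical conductivity ~2.5e3 S/m and thermal conductivity ~2 W/(m K)
# at room temperature give ZT on the order of a few times 10^-2.
print(figure_of_merit(300e-6, 2.5e3, 2.0))   # ~0.034
```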
Determination of TSP-size samples at PLTU Pacitan, Jawa Timur
International Nuclear Information System (INIS)
Rusmanto, Tri; Mulyono; Irianto, Bambang
2013-01-01
Sampling was done using a High Volume Air Sampler (HVAS) and the analysis was carried out with a gamma spectrometer. Sampling was performed at 3 locations, for 24 hours at each location; the air samples on the filters were conditioned at room temperature, weighed to constant weight, and counted for 24 hours with the gamma spectrometer. The qualitative and quantitative analysis of the TSP filters gave, for location I: Ra-226 = 0,000888 Bq/m3, Pb-212 = 0,000356 Bq/m3, Pb-214 = 0,000859 Bq/m3, Bi-214 = 0,000712 Bq/m3, Ac-228 = 0,004447 Bq/m3, K-40 = 0,035454 Bq/m3; location II: Ra-226 = 0,00113 Bq/m3, Pb-212 = 0,00079 Bq/m3, Pb-214 = 0,001351 Bq/m3, Bi-214 = 0,000433 Bq/m3, Ac-228 = 0,007138 Bq/m3, K-40 = 0,018532 Bq/m3; location III: Ra-226 = 0,001424 Bq/m3, Pb-212 = 0,000208 Bq/m3, Pb-214 = 000052 Bq/m3, Bi-214 = 0,001408 Bq/m3, Ac-228 = 0,008362 Bq/m3, K-40 = 0,020536 Bq/m3. All radionuclide activities were still below the air quality limits set by BAPETEN. Hence, the ambient air of the PLTU area remains safe enough for a settlement area. (author)
In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size
Directory of Open Access Journals (Sweden)
Stefano Schiavon
2010-01-01
Full Text Available In vitro degradability with DaisyII (D) equipment is commonly performed with 0.5g of feed sample into each filter bag. Literature reported that a reduction of the ratio of sample size to bag surface could facilitate the release of soluble or fine particulate. A reduction of sample size to 0.25 g could improve the correlation between the measurements provided by D and the conventional batch culture (BC). This hypothesis was screened by analysing the results of 2 trials. In trial 1, 7 feeds were incubated for 48h with rumen fluid (3 runs x 4 replications) both with D (0.5g/bag) and BC; the regressions between the mean values provided for the various feeds in each run by the 2 methods, either for NDF (NDFd) and in vitro true DM (IVTDMD) degradability, had R2 of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs x 8 replications) with D (0.25 g/bag) and BC; the corresponding regressions for NDFd and IVTDMD showed R2 of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.
Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay
2011-01-01
Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately
Directory of Open Access Journals (Sweden)
John M Lachin
Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to
DEFF Research Database (Denmark)
Bune, Laurids Touborg; Larsen, Jens Kjærgaard Rolighed; Thaning, Pia
2013-01-01
Acute myocardial infarction continues to be a major cause of morbidity and mortality. Timely reperfusion can substantially improve outcomes and the administration of cardioprotective substances during reperfusion is therefore highly attractive. Adenosine diphosphate (ADP) and uridine-5-triphosphate... infusion during reperfusion reduces IS by ~20% independently from systemic release of t-PA. ADP-induced reduction in both preload and afterload could account for the beneficial myocardial effect.
Face inversion and acquired prosopagnosia reduce the size of the perceptual field of view.
Van Belle, Goedele; Lefèvre, Philippe; Rossion, Bruno
2015-03-01
Using a gaze-contingent morphing approach, we asked human observers to choose one of two faces that best matched the identity of a target face: one face corresponded to the reference face's fixated part only (e.g., one eye), the other corresponded to the unfixated area of the reference face. The face corresponding to the fixated part was selected significantly more frequently in the inverted than in the upright orientation. This observation provides evidence that face inversion reduces an observer's perceptual field of view, even when both upright and inverted faces are displayed at full view and there is no performance difference between these conditions. It rules out an account of the drop of performance for inverted faces--one of the most robust effects in experimental psychology--in terms of a mere difference in local processing efficiency. A brain-damaged patient with pure prosopagnosia, viewing only upright faces, systematically selected the face corresponding to the fixated part, as if her perceptual field was reduced relative to normal observers. Altogether, these observations indicate that the absence of visual knowledge reduces the perceptual field of view, supporting an indirect view of visual perception. Copyright © 2014 Elsevier B.V. All rights reserved.
A case of gastric endocrine cell carcinoma which was significantly reduced in size by radiotherapy
International Nuclear Information System (INIS)
Azakami, Kiyoshi; Nishida, Kouji; Tanikawa, Ken
2016-01-01
In 2010, the World Health Organization classified gastric neuroendocrine tumors (NETs) into three types: NET grade (G) 1, NET G2 and neuroendocrine carcinoma (NEC). NECs are associated with a very poor prognosis. The patient was an 84-year-old female who was initially diagnosed by gastrointestinal endoscopy with type 3 advanced gastric cancer with stenosis of the gastric cardia. Her overall status and performance status did not allow for surgery or intensive chemotherapy. Palliative radiotherapy was performed and resulted in a significant reduction in the size of the tumor as well as improvement of the obstructive symptoms. She died 9 months after radiotherapy. An autopsy provided a definitive diagnosis of gastric endocrine cell carcinoma, and the effectiveness of radiotherapy was confirmed pathologically. Palliative radiotherapy may be a useful treatment option for providing symptom relief, especially for elderly patients with unresectable advanced gastric neuroendocrine carcinoma. (author)
Effect of model choice and sample size on statistical tolerance limits
International Nuclear Information System (INIS)
Duran, B.S.; Campbell, K.
1980-03-01
Statistical tolerance limits are estimates of large (or small) quantiles of a distribution, quantities which are very sensitive to the shape of the tail of the distribution. The exact nature of this tail behavior cannot be ascertained from small samples, so statistical tolerance limits are frequently computed using a statistical model chosen on the basis of theoretical considerations or prior experience with similar populations. This report illustrates the effects of such choices on the computations.
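To make the sensitivity to model choice concrete, the sketch below computes a one-sided upper tolerance limit under a normal model, using the standard noncentral-t tolerance factor, and shows how the answer changes when the same data are instead modelled as log-normal. The coverage and confidence levels, the synthetic data, and the comparison itself are illustrative assumptions, not the report's own computation.

```python
import numpy as np
from scipy import stats

def upper_tolerance_limit(sample, coverage=0.95, confidence=0.95):
    """One-sided upper tolerance limit under a normal model:
    xbar + k * s, with k derived from the noncentral t distribution."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    delta = stats.norm.ppf(coverage) * np.sqrt(n)          # noncentrality parameter
    k = stats.nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.5, size=20)
# The same small sample gives quite different limits under different model choices,
# which is exactly the sensitivity the report is concerned with.
print(upper_tolerance_limit(data))                  # limit under a normal model
print(np.exp(upper_tolerance_limit(np.log(data))))  # limit under a log-normal model
```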
Ultrasonic detection and sizing of cracks in cast stainless steel samples
International Nuclear Information System (INIS)
Allidi, F.; Edelmann, X.; Phister, O.; Hoegberg, K.; Pers-Anderson, E.B.
1986-01-01
The test consisted of 15 samples of cast stainless steel, each with a weld. Some of the specimens were provided with artificially made thermal fatigue cracks. The inspection was performed with the P-scan method. The investigations showed an improvement in recognizability relative to earlier investigations. One probe, the dual type, longitudinal wave, 45 degrees, low frequency 0.5-1 MHz, gives the best results. (G.B.)
Second generation laser-heated microfurnace for the preparation of microgram-sized graphite samples
Energy Technology Data Exchange (ETDEWEB)
Yang, Bin; Smith, A.M.; Long, S.
2015-10-15
We present construction details and test results for two second-generation laser-heated microfurnaces (LHF-II) used to prepare graphite samples for Accelerator Mass Spectrometry (AMS) at ANSTO. Based on systematic studies aimed at optimising the performance of our prototype laser-heated microfurnace (LHF-I) (Smith et al., 2007 [1]; Smith et al., 2010 [2,3]; Yang et al., 2014 [4]), we have designed the LHF-II to have the following features: (i) it has a small reactor volume of 0.25 mL allowing us to completely graphitise carbon dioxide samples containing as little as 2 μg of C, (ii) it can operate over a large pressure range (0–3 bar) and so has the capacity to graphitise CO{sub 2} samples containing up to 100 μg of C; (iii) it is compact, with three valves integrated into the microfurnace body, (iv) it is compatible with our new miniaturised conventional graphitisation furnaces (MCF), also designed for small samples, and shares a common vacuum system. Early tests have shown that the extraneous carbon added during graphitisation in each LHF-II is of the order of 0.05 μg, assuming 100 pMC activity, similar to that of the prototype unit. We use a ‘budget’ fibre packaged array for the diode laser with custom built focusing optics. The use of a new infrared (IR) thermometer with a short focal length has allowed us to decrease the height of the light-proof safety enclosure. These innovations have produced a cheaper and more compact device. As with the LHF-I, feedback control of the catalyst temperature and logging of the reaction parameters is managed by a LabVIEW interface.
Basic distribution free identification tests for small size samples of environmental data
Energy Technology Data Exchange (ETDEWEB)
Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente
1998-01-01
Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain only a small number of data points, and the assumption of normal distributions is often not realistic. On the other hand, the spread of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem and introduces two feasible non-parametric approaches based on the intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given based on the Chernobyl children contamination data. [Italian] In the analysis of environmental data, one often needs to test the hypothesis that two or more data sets come from the same population. Typically few data are available and the assumption of normal distributions is often untenable. On the other hand, the present-day diffusion of personal computers offers new possible solutions based on intensive use of CPU resources. The report analyses the problem and presents two non-parametric tests based on the intrinsic equiprobability properties of the samples. The first is based on an exhaustive resampling technique, while the second follows a bootstrap approach. An easy-to-use program is presented, together with a case study based on contamination data for children from Chernobyl.
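A minimal sketch of the first ("full resampling") idea is an exact two-sample permutation test, shown below on small synthetic data; the bootstrap variant replaces the exhaustive enumeration with random resampling. This is an illustrative stand-in under assumed data, not the program described in the report.

```python
from itertools import combinations
import numpy as np

def exact_permutation_test(a, b):
    """Exact two-sample permutation test on the difference of means.
    Enumerates every regrouping of the pooled data, which is feasible
    only for the small sample sizes typical of environmental data."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.concatenate([a, b])
    observed = abs(a.mean() - b.mean())
    n, count, total = len(a), 0, 0
    for idx in combinations(range(len(pooled)), n):
        mask = np.zeros(len(pooled), dtype=bool)
        mask[list(idx)] = True
        diff = abs(pooled[mask].mean() - pooled[~mask].mean())
        count += diff >= observed - 1e-12   # count regroupings at least as extreme
        total += 1
    return count / total                    # two-sided p-value

# Hypothetical contamination measurements at two sites (illustrative values)
site_a = [0.42, 0.51, 0.39, 0.47, 0.55]
site_b = [0.61, 0.58, 0.66, 0.52, 0.70]
print(exact_permutation_test(site_a, site_b))
```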
Sampling, testing and modeling particle size distribution in urban catch basins.
Garofalo, G; Carbone, M; Piro, P
2014-01-01
The study analyzed the particle size distribution of particulate matter (PM) retained in two catch basins located, respectively, near a parking lot and a traffic intersection with common high levels of traffic activity. Also, the treatment performance of a filter medium was evaluated by laboratory testing. The experimental treatment results and the field data were then used as inputs to a numerical model which described on a qualitative basis the hydrological response of the two catchments draining into each catch basin, respectively, and the quality of treatment provided by the filter during the measured rainfall. The results show that PM concentrations were on average around 300 mg/L (parking lot site) and 400 mg/L (road site) for the 10 rainfall-runoff events observed. PM with a particle diameter of [...]. The model showed that a catch basin with a filter unit can remove 30 to 40% of the PM load depending on the storm characteristics.
DEFF Research Database (Denmark)
Picchini, Umberto; Forman, Julie Lyng
2016-01-01
In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm ... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general ...
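The core ABC idea, accepting parameter draws whose simulated data lie close to the observed data, can be sketched with plain rejection ABC on a toy Ornstein-Uhlenbeck diffusion; the authors' ABC-MCMC scheme for correlated measurement errors and protein-folding data is considerably more elaborate. The model, summary statistics, prior and tolerance below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(theta, x0=0.0, dt=0.01, n=400):
    """Euler-Maruyama simulation of the toy SDE dX = -theta * X dt + 0.5 dW."""
    x = np.empty(n); x[0] = x0
    for t in range(1, n):
        x[t] = x[t-1] - theta * x[t-1] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
    return x

def summary(x):
    # crude summaries: marginal variance and lag-1 autocorrelation
    return np.array([x.var(), np.corrcoef(x[:-1], x[1:])[0, 1]])

obs = summary(simulate_ou(theta=2.0))      # "observed" data from a known theta
accepted = []
for _ in range(3000):
    theta = rng.uniform(0.1, 5.0)          # prior draw
    if np.linalg.norm(summary(simulate_ou(theta)) - obs) < 0.05:   # tolerance
        accepted.append(theta)
print(len(accepted), np.mean(accepted))    # approximate posterior sample and its mean
```

ABC-MCMC replaces the independent prior draws with a Markov chain that proposes local moves in theta, which greatly improves acceptance rates when the tolerance is small.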
A "Scientific Diversity" Intervention to Reduce Gender Bias in a Sample of Life Scientists.
Moss-Racusin, Corinne A; van der Toorn, Jojanneke; Dovidio, John F; Brescoll, Victoria L; Graham, Mark J; Handelsman, Jo
2016-01-01
Mounting experimental evidence suggests that subtle gender biases favoring men contribute to the underrepresentation of women in science, technology, engineering, and mathematics (STEM), including many subfields of the life sciences. However, there are relatively few evaluations of diversity interventions designed to reduce gender biases within the STEM community. Because gender biases distort the meritocratic evaluation and advancement of students, interventions targeting instructors' biases are particularly needed. We evaluated one such intervention, a workshop called "Scientific Diversity" that was consistent with an established framework guiding the development of diversity interventions designed to reduce biases and was administered to a sample of life science instructors (N = 126) at several sessions of the National Academies Summer Institute for Undergraduate Education held nationwide. Evidence emerged indicating the efficacy of the "Scientific Diversity" workshop, such that participants were more aware of gender bias, expressed less gender bias, and were more willing to engage in actions to reduce gender bias 2 weeks after participating in the intervention compared with 2 weeks before the intervention. Implications for diversity interventions aimed at reducing gender bias and broadening the participation of women in the life sciences are discussed. © 2016 C. A. Moss-Racusin et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Eichhorn, Tanja; Rauscher, Sabine; Hammer, Caroline; Gröger, Marion; Fischer, Michael B; Weber, Viktoria
2016-10-01
Endothelial activation with excessive recruitment and adhesion of immune cells plays a central role in the progression of sepsis. We established a microfluidic system to study the activation of human umbilical vein endothelial cells by conditioned medium containing plasma from lipopolysaccharide-stimulated whole blood or from septic blood and to investigate the effect of adsorption of inflammatory mediators on endothelial activation. Treatment of stimulated whole blood with polystyrene-divinylbenzene-based cytokine adsorbents (average pore sizes 15 or 30 nm) prior to passage over the endothelial layer resulted in significantly reduced endothelial cytokine and chemokine release, plasminogen activator inhibitor-1 secretion, adhesion molecule expression, and in diminished monocyte adhesion. Plasma samples from sepsis patients differed substantially in their potential to induce endothelial activation and monocyte adhesion despite their almost identical interleukin-6 and tumor necrosis factor-alpha levels. Pre-incubation of the plasma samples with a polystyrene-divinylbenzene-based adsorbent (30 nm average pore size) reduced endothelial intercellular adhesion molecule-1 expression to baseline levels, resulting in significantly diminished monocyte adhesion. Our data support the potential of porous polystyrene-divinylbenzene-based adsorbents to reduce endothelial activation under septic conditions by depletion of a broad range of inflammatory mediators.
International Nuclear Information System (INIS)
Berger, J.; Doubek, N.; Jammet, G.; Aigner, H.; Bagliano, G.; Donohue, D.; Kuhn, E.
1994-02-01
Specialized procedures have been implemented for the sampling of Pu-containing materials such as Pu nitrate, oxide or mixed oxide in States which have not yet approved type B(U) shipment containers for the air-shipment of gram-sized quantities of Pu. In such cases, it is necessary to prepare samples for shipment which contain only milligram quantities of Pu dried from solution in penicillin vials. Potential problems due to flaking-off during shipment could affect the recovery of Pu at the analytical laboratory. Therefore, a series of tests was performed with synthetic Pu nitrate and mixed U/Pu nitrate samples to test the effectiveness of the evaporation and recovery procedures. Results of these tests as well as experience with actual inspection samples are presented, showing conclusively that the existing procedures are satisfactory. (author). 11 refs, 6 figs, 8 tabs
Sex determination by tooth size in a sample of Greek population.
Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C
2014-08-01
Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had a clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations and this might explain the lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that there was a considerably higher percentage of females correctly classified than males. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as reliable a method as one would have expected for determining the sex of human remains in a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.
Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi
2017-12-01
Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
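The conventional (1:1) mixing/splitting model mentioned above can be illustrated with the usual bit-wise dilution argument: a target concentration specified to d bits of precision is reached by d successive 1:1 mixes of the current droplet with either raw sample or buffer. The sketch below is this generic textbook scheme, not the MEDA-specific method proposed in the paper, which exploits finer-grained droplet sizes and real-time sensing precisely to avoid the overhead shown here.

```python
def one_to_one_mixes(target, bits=6):
    """Sequence of (1:1) mixes approximating `target` (in [0, 1)) to `bits` bits.
    Start from pure buffer and repeatedly mix the droplet 1:1 with either raw
    sample (1) or buffer (0); each new mix halves the weight of all earlier
    inputs, so `bits` mixes give 2**-bits precision."""
    scaled = round(target * (1 << bits))
    inputs = [(scaled >> i) & 1 for i in range(bits)]   # LSB first = first mix
    conc, trace = 0.0, []
    for b in inputs:
        conc = (conc + b) / 2.0
        trace.append((b, conc))
    return conc, trace

final, steps = one_to_one_mixes(0.3, bits=6)
print(final)   # 0.296875, i.e. 0.3 to within 1/64, after 6 unit-sized mixes
for b, c in steps:
    print("mix with", "sample" if b else "buffer", "->", c)
```

Each mix also produces a waste droplet, which is one reason fine-grained droplet-size control on MEDA can reduce both the operation count and reagent consumption.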
Energy Technology Data Exchange (ETDEWEB)
Faye, C.B.; Amodeo, T.; Fréjafon, E. [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France); Delepine-Gilon, N. [Institut des Sciences Analytiques, 5 rue de la Doua, 69100 Villeurbanne (France); Dutouquet, C., E-mail: christophe.dutouquet@ineris.fr [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France)
2014-01-01
Pollution of water is a matter of concern all over the earth. Particles are known to play an important role in the transportation of pollutants in this medium. In addition, the emergence of new materials such as NOAA (Nano-Objects, their Aggregates and their Agglomerates) emphasizes the need to develop adapted instruments for their detection. Surveillance of pollutants in particulate form in waste waters in industries involved in nanoparticle manufacturing and processing is a telling example of possible applications of such instrumental development. The LIBS (laser-induced breakdown spectroscopy) technique coupled with the liquid jet as sampling mode for suspensions was deemed as a potential candidate for on-line and real time monitoring. With the final aim in view to obtain the best detection limits, the interaction of nanosecond laser pulses with the liquid jet was examined. The evolution of the volume sampled by laser pulses was estimated as a function of the laser energy applying conditional analysis when analyzing a suspension of micrometric-sized particles of borosilicate glass. An estimation of the sampled depth was made. Along with the estimation of the sampled volume, the evolution of the SNR (signal to noise ratio) as a function of the laser energy was investigated as well. Eventually, the laser energy and the corresponding fluence optimizing both the sampling volume and the SNR were determined. The obtained results highlight intrinsic limitations of the liquid jet sampling mode when using 532 nm nanosecond laser pulses with suspensions. - Highlights: • Micrometric-sized particles in suspensions are analyzed using LIBS and a liquid jet. • The evolution of the sampling volume is estimated as a function of laser energy. • The sampling volume happens to saturate beyond a certain laser fluence. • Its value was found much lower than the beam diameter times the jet thickness. • Particles proved not to be entirely vaporized.
Fraley, R. Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely (a) to provide accurate estimates of effects, (b) to produce literatures with low false positive rates, and (c) to lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in sample sizes and power of the studies they publish, with some journals consistently publishing higher power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
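The quoted figures (an average sample size of 104 and roughly 50% power) can be reproduced approximately with a Fisher-z power calculation, assuming a typical effect size of about r = 0.21; that assumed value is illustrative and is not stated in the abstract above.

```python
from math import atanh, sqrt
from scipy.stats import norm

def power_correlation(r, n, alpha=0.05):
    """Approximate two-sided power to detect a correlation r with sample size n,
    based on the Fisher z transformation of the sample correlation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    noncentrality = atanh(r) * sqrt(n - 3)
    return norm.cdf(noncentrality - z_alpha) + norm.cdf(-noncentrality - z_alpha)

# Assumed typical effect size r ~ 0.21 (illustrative, not taken from the article)
print(power_correlation(0.21, 104))   # roughly 0.57, i.e. close to the ~50% quoted
```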
Energy Technology Data Exchange (ETDEWEB)
Wang, H.; Zhang, G.; Hui, G.; Li, Y.; Hu, Y.; Zhao, Z.
2016-07-01
Aim of study: Neighborhood-based stand spatial structure parameters can quantify and characterize forest spatial structure effectively. How these neighborhood-based structure parameters are influenced by the selection of different numbers of nearest-neighbor trees is unclear, and there is some disagreement in the literature regarding the appropriate number of nearest-neighbor trees to sample around reference trees. Understanding how to efficiently characterize forest structure is critical for forest management. Area of study: Multi-species uneven-aged forests of Northern China. Material and methods: We simulated stands with different spatial structural characteristics and systematically compared their structure parameters when two to eight neighboring trees were selected. Main results: Results showed that values of uniform angle index calculated in the same stand were different with different sizes of structure unit. When tree species and sizes were completely randomly interspersed, different numbers of neighbors had little influence on mingling and dominance indices. Changes of mingling or dominance indices caused by different numbers of neighbors occurred when the tree species or size classes were not randomly interspersed and their changing characteristics can be detected according to the spatial arrangement patterns of tree species and sizes. Research highlights: The number of neighboring trees selected for analyzing stand spatial structure parameters should be fixed. We proposed that the four-tree structure unit is the best compromise between sampling accuracy and costs for practical forest management. (Author)
A reduced estimate of the number of kilometre-sized near-Earth asteroids.
Rabinowitz, D; Helin, E; Lawrence, K; Pravdo, S
2000-01-13
Near-Earth asteroids are small bodies whose orbits approach that of the Earth (they come within 1.3 AU of the Sun). Most have a chance of approximately 0.5% of colliding with the Earth in the next million years. The total number of such bodies with diameters > 1 km has been estimated to be in the range 1,000-2,000, which translates to an approximately 1% chance of a catastrophic collision with the Earth in the next millennium. These numbers are, however, poorly constrained because of the limitations of previous searches using photographic plates. (One kilometre is below the size of a body whose impact on the Earth would produce global effects.) Here we report an analysis of our survey for near-Earth asteroids that uses improved detection technologies. We find that the total number of asteroids with diameters > 1 km is about half the earlier estimates. At the current rate of discovery of near-Earth asteroids, 90% will probably have been detected within the next 20 years.
International Nuclear Information System (INIS)
Tai, Bee-Choo; Grundy, Richard; Machin, David
2011-01-01
Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
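A common building block behind both hazard formulations is the Schoenfeld-type approximation relating the detectable hazard ratio to the number of events, which is then inflated by the probability of observing the event of interest (with competing risks, its cumulative incidence). The sketch below shows only that generic calculation; the hazard ratio 0.76 is the cause-specific estimate quoted in the abstract, while the event probability is an assumed illustrative value, and the full cause-specific versus subdistribution calculations in the paper differ from this simplification.

```python
from math import ceil, log
from scipy.stats import norm

def events_required(hazard_ratio, alpha=0.05, power=0.80, allocation=0.5):
    """Schoenfeld approximation: events needed for a two-arm log-rank/Cox comparison."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z**2 / (allocation * (1 - allocation) * log(hazard_ratio) ** 2))

def sample_size(hazard_ratio, event_probability, **kw):
    """Translate the required events into patients, given the probability that a
    patient experiences the event of interest during follow-up."""
    return ceil(events_required(hazard_ratio, **kw) / event_probability)

# Cause-specific HR of 0.76 (from the abstract); event probability 0.4 is assumed.
print(events_required(0.76), sample_size(0.76, event_probability=0.4))
```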
Directory of Open Access Journals (Sweden)
Finch Stephen J
2005-04-01
Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor Series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
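The "misclassification cost" idea can be illustrated with the power function of the Pearson chi-square test, which depends on the noncentrality parameter n·w²: phenotype error attenuates the effect size w, so holding power fixed inflates the required sample size. The effect sizes in the snippet are assumed illustrative values, not quantities derived in the paper, and the attenuation itself would in practice be computed from the misclassification probabilities as the authors describe.

```python
from scipy.stats import chi2, ncx2

def chisq_power(effect_w, n, df=1, alpha=0.05):
    """Power of a Pearson chi-square test with Cohen's effect size w and n subjects."""
    crit = chi2.ppf(1 - alpha, df)
    return 1.0 - ncx2.cdf(crit, df, n * effect_w**2)

def n_for_power(effect_w, df=1, alpha=0.05, power=0.80):
    """Smallest n giving at least the requested power (simple linear search)."""
    n = 10
    while chisq_power(effect_w, n, df, alpha) < power:
        n += 1
    return n

w_true = 0.10            # effect size with perfect phenotyping (assumed)
w_misclassified = 0.08   # attenuated effect size after phenotype errors (assumed)
n0, n1 = n_for_power(w_true), n_for_power(w_misclassified)
print(n0, n1, f"cost = {100 * (n1 - n0) / n0:.0f}% more subjects")
```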
Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing
2017-01-01
Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < ...). For Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.
RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes
Directory of Open Access Journals (Sweden)
Danny J. Kelly
2005-01-01
Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of low RNA derived gene signals to gene signals obtained from standard RNA was poor for less to moderately abundant genes. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference sample based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai
2015-09-16
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.
2015-01-01
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
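A toy version of a shrinkage-based diagonal Hotelling statistic is sketched below: only the diagonal of the covariance matrix is used, so the statistic remains defined when p far exceeds n, and each gene-wise variance is shrunk toward the median variance. The simple convex-combination shrinkage, the fixed weight, and the synthetic data are illustrative stand-ins for the optimal shrinkage estimator and the null approximations developed in the paper.

```python
import numpy as np

def diagonal_hotelling_shrink(x, y, lam=0.5):
    """Two-sample diagonal Hotelling-type statistic for 'large p, small n' data.
    Uses only the diagonal of the pooled covariance (avoiding the singularity of
    the classical T^2) and shrinks each pooled variance toward the median variance
    with weight `lam` (a crude stand-in for the paper's shrinkage estimator)."""
    n1, n2 = x.shape[0], y.shape[0]
    diff = x.mean(axis=0) - y.mean(axis=0)
    pooled = ((n1 - 1) * x.var(axis=0, ddof=1) +
              (n2 - 1) * y.var(axis=0, ddof=1)) / (n1 + n2 - 2)
    shrunk = lam * np.median(pooled) + (1 - lam) * pooled
    return np.sum(diff**2 / (shrunk * (1.0 / n1 + 1.0 / n2)))

rng = np.random.default_rng(3)
p, n = 500, 5                                   # many genes, few samples per group
group_a = rng.normal(size=(n, p))
group_b = rng.normal(size=(n, p)) + np.r_[np.full(20, 0.8), np.zeros(p - 20)]
print(diagonal_hotelling_shrink(group_a, group_b))
```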
Ding, W K; Shah, N P
2009-08-01
This study investigated 2 different homogenization techniques for reducing the size of calcium alginate beads during the microencapsulation process of 8 probiotic bacteria strains, namely, Lactobacillus rhamnosus, L. salivarius, L. plantarum, L. acidophilus, L. paracasei, Bifidobacterium longum, B. lactis type Bi-04, and B. lactis type Bi-07. Two different homogenization techniques were used, namely, ultra-turrax benchtop homogenizer and Microfluidics microfluidizer. Various settings on the homogenization equipment were studied such as the number of passes, speed (rpm), duration (min), and pressure (psi). The traditional mixing method using a magnetic stirrer was used as a control. The size of microcapsules resulting from the homogenization technique, and the various settings were measured using a light microscope and a stage micrometer. The smallest capsules measuring (31.2 microm) were created with the microfluidizer using 26 passes at 1200 psi for 40 min. The greatest loss in viability of 3.21 log CFU/mL was observed when using the ultra-turrax benchtop homogenizer with a speed of 1300 rpm for 5 min. Overall, both homogenization techniques reduced capsule sizes; however, homogenization settings at high rpm also greatly reduced the viability of probiotic organisms.
Kikuchi, Takashi; Gittins, John
2009-08-15
The calculation of sample size must strike the best balance between the cost of a clinical trial and the possible benefits from a new treatment. Gittins and Pezeshk developed an innovative (behavioral Bayes) approach, which assumes that the number of users is an increasing function of the difference in performance between the new treatment and the standard treatment. The better a new treatment, the more the number of patients who want to switch to it. The optimal sample size is calculated in this framework. This BeBay approach takes account of three decision-makers, a pharmaceutical company, the health authority and medical advisers. Kikuchi, Pezeshk and Gittins generalized this approach by introducing a logistic benefit function, and by extending to the more usual unpaired case, and with unknown variance. The expected net benefit in this model is based on the efficacy of the new drug but does not take account of the incidence of adverse reactions. The present paper extends the model to include the costs of treating adverse reactions and focuses on societal cost-effectiveness as the criterion for determining sample size. The main application is likely to be to phase III clinical trials, for which the primary outcome is to compare the costs and benefits of a new drug with a standard drug in relation to national health-care. Copyright 2009 John Wiley & Sons, Ltd.
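A toy rendering of the behavioural-Bayes idea is sketched below: the number of eventual users depends on the observed treatment effect through a logistic function, each treated user contributes a benefit net of an expected adverse-reaction cost, and the optimal sample size maximises expected net benefit minus trial cost. Every number and functional form in the sketch is an assumption made for illustration; it is not the authors' model or their notation.

```python
import numpy as np
from scipy.stats import norm  # imported for completeness; priors here are normal draws

rng = np.random.default_rng(0)

def expected_net_benefit(n, prior_mean=0.3, prior_sd=0.2, sigma=1.0,
                         market=1e6, unit_value=100.0, adverse_cost=20.0,
                         cost_per_patient=5e3, n_sims=4000):
    """Monte-Carlo expected net benefit of a two-arm trial with n patients per group.
    Users adopt the new drug according to a logistic function of the observed effect
    estimate (the 'behavioural' element); each user yields a fixed benefit per unit
    of true effect minus an expected adverse-reaction cost. All values are toy numbers."""
    delta = rng.normal(prior_mean, prior_sd, n_sims)             # true effect (prior draw)
    estimate = delta + rng.normal(0, sigma * np.sqrt(2.0 / n), n_sims)
    uptake = market / (1.0 + np.exp(-8.0 * (estimate - 0.2)))    # logistic demand
    societal_value = uptake * (unit_value * delta - adverse_cost)
    return societal_value.mean() - cost_per_patient * 2 * n

candidates = np.arange(50, 2001, 50)
best = max(candidates, key=expected_net_benefit)
print("optimal n per group ~", best)
```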
Directory of Open Access Journals (Sweden)
Jamshid Jamali
2017-01-01
Full Text Available Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
The impact of using reduced capacity baskets on cask fleet size and cask fleet mix
International Nuclear Information System (INIS)
Joy, D.S.; Johnson, P.E.; Andress, D.A.
1993-01-01
The Civilian Radioactive Waste Management System transportation system will encounter a wide range of spent fuel characteristics. Since the Initiative I casks are being designed to transport 10-year-old fuel with a burnup of 35,000 MWd/MTU, there is a good likelihood that a number of the cask shipments will need to be derated in order to meet the Nuclear Regulatory Commission radiation guidelines. This report discusses the impact of cask derating by using reduced-capacity baskets. Cask derating, while enhancing the ability to move spent fuel with a wider range of age and burnup characteristics, increases the number of shipments; the amount of equipment (cask bodies, baskets, etc.); and the number of visits to both shipping and receiving sites required to transport a specific amount of spent fuel
The impact of using reduced-capacity baskets on cask fleet size and cask fleet mix
International Nuclear Information System (INIS)
Joy, D.S.; Johnson, P.E.; Andress, D.A.
1993-01-01
The Civilian Radioactive Waste Management System transportation system will encounter a wide range of spent fuel characteristics. Since the Initiative I casks are being designed to transport 10-year-old fuel with a burnup of 35,000 MWd/MTU, there is a good likelihood that a number of the cask shipments will need to be derated in order to meet the Nuclear Regulatory Commission radiation guidelines. This report discusses the impact of cask derating by using reduced-capacity baskets. Cask derating, while enhancing the ability to move spent fuel with a wider range of age and burnup characteristics, increases the number of shipments; the amount of equipment (cask bodies, baskets, etc.); and the number of visits to both shipping and receiving sites required to transport a specific amount of spent fuel
International Nuclear Information System (INIS)
Lotey, Gurmeet Singh; Verma, N. K.
2012-01-01
Pure and Gd-doped BiFeO3 nanoparticles have been synthesized by the sol–gel method. The significant effects of size and Gd-doping on structural, electrical, and magnetic properties have been investigated. X-ray diffraction study reveals that the pure BiFeO3 nanoparticles possess rhombohedral structure, but with 10% Gd-doping a complete structural transformation from rhombohedral to orthorhombic has been observed. The particle size of pure and Gd-doped BiFeO3 nanoparticles, determined using transmission electron microscopy, has been found to be in the range 25–15 nm. Pure and Gd-doped BiFeO3 nanoparticles show ferromagnetic character, and the magnetization increases with decrease in particle size and increase in doping concentration. Scanning electron microscopy study reveals that grain size decreases with increase in Gd concentration. A well-saturated polarization versus electric field loop is observed for the doped samples. The leakage current density decreases by four orders of magnitude on doping Gd into BiFeO3. The incorporation of Gd in BiFeO3 enhances spin as well as electric polarization at room temperature. The possible origin of the enhancement in these properties has been explained on the basis of the dopant and its concentration, phase purity, and the small particle and grain sizes.
International Nuclear Information System (INIS)
Zamorani, E.; Blanchard, H.
1987-01-01
Important parameters for the characterization of cement specimens are mechanical properties and porosity. This work is carried out at the Ispra Establishment of the Joint Research Centre in the scope of the Radioactive Waste Management programme. A commercial Mercury Intrusion Porosimeter was modified in an attempt to improve the performance of the instrument and to provide fast processing of the recorded values: pressure-volume of pores. The dead volume of the instrument was reduced and the possibility of leakage from the moving parts eliminated. In addition, the modification allows an improvement of data acquisition thus increasing data accuracy and reproducibility. In order to test the improved performance of the modified instrument, physical characterizations of cement forms were carried out. Experimental procedures and results are reported
International Nuclear Information System (INIS)
Xing, Ling-Bao; Zhang, Jing-Li; Zhang, Juan; Hou, Shu-Fen; Zhou, Jin; Si, Weijiang; Cui, Hongyou; Zhuo, Shuping
2015-01-01
Graphical abstract: Three-dimensional porous reduced graphene hydrogels with tunable pore size distribution are prepared by using thiourea dioxide in GO suspension with ammonia. - Highlights: • Three-dimensional reduced graphene hydrogels (RGHs) were prepared. • Thiourea dioxide was used as the reducing agent with ammonia. • RGHs showed tunable pore size distribution controlled by thiourea dioxide. • RGHs exhibited relatively good electrochemical properties in supercapacitors. - Abstract: In the present work, we demonstrate a rapid and easy approach to fabricate three-dimensional (3D) reduced graphene hydrogels (RGHs) by using thiourea dioxide as the reducing agent in an aqueous solution of graphene oxide (GO) with ammonia. The transformation of the GO suspension into the hydrogels was confirmed by X-ray powder diffraction, Raman spectroscopy, and Fourier transform infrared spectroscopy. The hierarchical porosity, structure and surface chemical properties were demonstrated by N2 sorption experiments, scanning electron microscopy and X-ray photoelectron spectroscopy. With different amounts of thiourea dioxide added, the obtained RGHs exhibit different degrees of reduction, controlled specific surface areas and pore size distributions, and different performances in supercapacitors. Benefiting from well-defined and cross-linked 3D porous network architectures, supercapacitors based on the RGHs in KOH electrolyte exhibited high specific capacitances of 258.6, 167.3 and 198.3 F g−1 at 0.1 A g−1 for RGHs-1, RGHs-2 and RGHs-5, respectively. Furthermore, this capacitance also showed good electrochemical stability and a high degree of reversibility in the repetitive charge/discharge cycling test.
Growth regulators in reducing the size of orchid Fire-of-Star for commercialization in vase
Directory of Open Access Journals (Sweden)
Patricia Reiners Carvalho
2016-05-01
Full Text Available Fire-of-star (Epidendrum radicans Pav. ex Lindl.) is a terrestrial orchid native to Brazil, forming tussocks with leafy stems and many adventitious roots, and releasing a long inflorescence of about 1.0 m from the apex of the stem; it shows great potential in floriculture, but the long flowering stem complicates its marketing in vases. The objective of this study was to evaluate the effect of paclobutrazol (PBZ) and mepiquat chloride (CLM) on the reduction of the size of the orchid E. radicans. Plants with an average height of 15 cm were cultivated in a greenhouse with 50% shading. The growth regulators used were PBZ at doses of 0, 5, 10, 15 and 20 mg L-1, and CLM at doses of 0, 1, 2, 3, 4 and 5 mg L-1. Applications were made fortnightly, totalling ten applications. The experiment was arranged in randomized complete blocks, one block for PBZ with 5 treatments and 10 replications and another block for CLM with 6 treatments and 10 replications. Data were subjected to analysis of variance at 5% probability and, when significant, regression analysis was performed. The variables evaluated were number of shoots, plant height (cm), number of flower stems and leaf area. The results indicated that E. radicans plants treated with 5 mg L-1 PBZ were 50% shorter than the control plants. When treated with CLM at a dose of 1 mg L-1, plants were 25% shorter than the control plants, maintaining aesthetic characteristics suitable for marketing in vases. The growth regulators at the applied doses did not affect the number of shoots or flower stems. PBZ-treated plants had 50% of the leaf area of the control, while those treated with CLM retained the same average leaf area as the control.
Directory of Open Access Journals (Sweden)
V. Indira
2015-03-01
Full Text Available The hydraulic brake is considered one of the important components in automobile engineering. Condition monitoring and fault diagnosis of such a component are essential for the safety of passengers and vehicles and to minimize unexpected maintenance time. Vibration-based machine learning approaches for condition monitoring of hydraulic brake systems are gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to obtain good classification accuracy. Descriptive statistical features have been used, and the most contributing features have been selected using the C4.5 decision tree algorithm. The results of the power analysis have also been verified using a decision tree algorithm, namely C4.5.
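A hedged sketch of a power-analysis step of the kind described: the minimum number of training samples per class needed to detect a standardised difference in a vibration feature with a two-sample t-test. The effect sizes and settings below are assumptions, not values from the study.

```python
# Minimum samples per class for a two-sample t-test at given effect size, alpha, power.
from scipy import stats

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Smallest n per group giving the requested power for a two-sample t-test."""
    n = 2
    while True:
        df = 2 * n - 2
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        nc = effect_size * (n / 2) ** 0.5          # noncentrality parameter
        achieved = stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)
        if achieved >= power:
            return n
        n += 1

print(n_per_group(0.8))   # 'large' assumed effect: about 26 samples per class
print(n_per_group(0.5))   # 'medium' assumed effect: about 64 samples per class
```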
Fung, Tak; Keenan, Kevin
2014-01-01
The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
Directory of Open Access Journals (Sweden)
Tak Fung
Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
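The paper derives exact intervals for sampling from a finite diploid population; as a simpler illustration of the same sampling-uncertainty idea, the sketch below computes a Wilson score interval for an allele frequency under an infinite-population approximation. The counts are made up and the method is not the authors' finite-population construction.

```python
# Wilson score interval for an allele frequency estimated from diploid individuals.
from math import sqrt

def wilson_interval(count, n_alleles, z=1.96):
    """Approximate 95% Wilson score interval for an allele frequency."""
    p_hat = count / n_alleles
    denom = 1 + z**2 / n_alleles
    centre = (p_hat + z**2 / (2 * n_alleles)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n_alleles + z**2 / (4 * n_alleles**2)) / denom
    return centre - half, centre + half

# 30 diploid individuals = 60 sampled alleles, 21 copies of the allele of interest.
print(wilson_interval(21, 60))
```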
TAMURA, Tetsuo; NAKAMURA, Hiroshi; SATO, Say; SEKI, Makoto; NISHIKI, Hideto
2014-01-01
This study proposed a modified procedure, using a small balloon catheter (SB catheter, 45 ml), for reducing bladder damage in cows. Holstein cows and the following catheters were prepared: a smaller balloon catheter (XSB catheter; 30 ml), the SB catheter and a standard balloon catheter (NB catheter; 70 ml, the commonly used, standard size). In experiment 1, each cow was catheterized. The occurrence of catheter-associated hematuria (greater than 50 RBC/HPF) was lower in the SB catheter group (0.0%, n=7) than in the NB catheter group (71.4%, n=7; P<0.05). In experiment 2, general veterinary parameters, urine pH, body temperature and blood values in cows were not affected before or after insertion of SB catheters (n=6). The incidence of urinary tract infection (UTI) was 3.0% per catheterized day (n=22). In experiment 3, feeding profiles, daily excretion of urinary nitrogen (P<0.05) and its rate relative to nitrogen intake (P<0.01) were higher with use of the SB catheter (n=13) than with use of the vulva urine cup (n=18), indicating that using the SB catheter can provide accurate nutritional data. From this study, we concluded that use of an SB catheter reduces bladder damage without any veterinary risks and yields accurate feeding parameters, suggesting that this modified procedure using an SB catheter is a useful means of daily urine collection. PMID:24561376
The design of high-temperature thermal conductivity measurements apparatus for thin sample size
Directory of Open Access Journals (Sweden)
Hadi Syamsul
2017-01-01
Full Text Available This study presents the design, construction and validation of a thermal conductivity apparatus using steady-state heat-transfer techniques with the capability of testing a material at high temperatures. The design is an improvement on the ASTM D5470 standard, in which meter-bars with equal cross-sectional areas are used to extrapolate surface temperatures and measure the heat transfer across a sample. The apparatus has two meter-bars, each fitted with three thermocouples. It uses a 1,000-watt heater, with cooling water to reach a stable condition. The applied pressure was 3.4 MPa on a meter-bar cross-sectional area of 113.09 mm2, and thermal grease was used to minimize interfacial thermal contact resistance. To determine the performance, validation was carried out by comparing the results with the thermal conductivity obtained with a THB 500 instrument made by LINSEIS. The tests gave thermal conductivities for the stainless steel and bronze of 15.28 Wm-1K-1 and 38.01 Wm-1K-1, with differences from the THB 500 of −2.55% and 2.49%. Furthermore, the apparatus can measure the thermal conductivity of a material up to a temperature of 400°C, where the result for the thermal conductivity of stainless steel was 19.21 Wm-1K-1 and the difference was 7.93%.
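A hedged sketch of the ASTM D5470-style data reduction implied by the abstract: thermocouple readings along each meter-bar are fitted linearly, extrapolated to the sample faces, and Fourier's law gives the sample conductivity. The meter-bar conductivity, geometry, and temperature readings below are invented placeholders, not measurements from the paper.

```python
# Steady-state meter-bar reduction: extrapolate face temperatures, apply Fourier's law.
import numpy as np

k_bar = 15.0                 # meter-bar conductivity, W m^-1 K^-1 (assumed)
area = 113.09e-6             # cross-sectional area, m^2 (from the abstract)
t_sample = 2.0e-3            # sample thickness, m (assumed)

z_hot = np.array([0.010, 0.025, 0.040])     # positions from the heater end, m
T_hot = np.array([190.0, 160.0, 130.0])     # hot meter-bar temperatures, degC
z_cold = np.array([0.010, 0.025, 0.040])    # positions from the sample face, m
T_cold = np.array([96.0, 66.0, 36.0])       # cold meter-bar temperatures, degC

grad_hot, T0_hot = np.polyfit(z_hot, T_hot, 1)      # dT/dz and intercept
grad_cold, T0_cold = np.polyfit(z_cold, T_cold, 1)

q = -k_bar * area * 0.5 * (grad_hot + grad_cold)    # mean heat flow through stack, W
T_face_hot = T0_hot + grad_hot * 0.045              # extrapolate to hot face (z = 45 mm)
T_face_cold = T0_cold                               # cold-bar face adjacent to sample (z = 0)

k_sample = q * t_sample / (area * (T_face_hot - T_face_cold))
print(f"estimated sample conductivity: {k_sample:.1f} W/m/K")
```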
Optimal sample size of signs for classification of radiational and oily soils
International Nuclear Information System (INIS)
Babayev, M.P.; Iskenderov, S.M.; Aghayev, R.A.
2012-01-01
Full text: This article addresses the classification of radiational and oily soils, which should in essence be a compact intelligence system containing maximum information on the classes of soil objects in the accepted feature space. Accumulated experience shows that the set of the most informative soil signs comprises at most 7-8 indexes. In our opinion, a more correct approach for selecting the most informative (most important) indexes is the method of trial and error, that is, the experimental method, which draws on the wide experience and intuition of the researcher, or group of researchers, engaged for many years in the field of soil science. At this operational stage of the formal apparatus of soil classification, and more concretely in the section assessing the informativeness of soil signs, the formal apparatus is, in our opinion, purely mathematized and in some cases does not reflect the true picture. In this case, 21 pairwise correlation coefficients between the selected soil signs are calculated as a measure of linear association. The size of the correlation row is set equal to 6, since increasing it can sharply increase the volume of calculation. It is pertinent to note that this is the first attempt to create correlation matrices of the most important signs of radiational and oily soils.
Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.
2012-01-01
The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.
Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.
2015-01-01
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
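A hedged sketch of the core numerical idea (regularised regression through the singular value decomposition of the LiDAR predictor matrix), not the authors' full Bayesian formulation; the synthetic data stand in for plot-level LiDAR metrics and a field-measured response such as stand volume.

```python
# Ridge-type regression via SVD: stable with few field plots and many collinear metrics.
import numpy as np

rng = np.random.default_rng(42)
n_plots, n_metrics = 50, 120                       # few plots, many LiDAR metrics
X = rng.normal(size=(n_plots, n_metrics))
beta_true = np.zeros(n_metrics)
beta_true[:5] = [3, -2, 1.5, 1, -1]                # only a few metrics truly matter
y = X @ beta_true + rng.normal(scale=2.0, size=n_plots)

def ridge_svd(X, y, lam):
    """Ridge solution via SVD: beta = V diag(s / (s^2 + lam)) U^T y."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

beta_hat = ridge_svd(X - X.mean(0), y - y.mean(), lam=10.0)
print("largest recovered coefficients:", np.round(np.sort(np.abs(beta_hat))[-5:], 2))
```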
International Nuclear Information System (INIS)
Akram, M.; Aftab, F.
2016-01-01
In the present study, fruits (drupes) were collected from Changa Manga Forest Plus Trees (CMF-PT), Changa Manga Forest Teak Stand (CMF-TS) and Punjab University Botanical Gardens (PUBG) and categorized into very large (≥17 mm dia.), large (12-16 mm dia.), medium (9-11 mm dia.) or small (6-8 mm dia.) fruit size grades. Fresh water as well as mechanical scarification and stratification were tested for breaking seed dormancy. The viability status of seeds was estimated by cutting test, X-rays and in vitro seed germination. Out of 2595 fruits from CMF-PT, 500 fruits were of the very large grade. This fruit category also had the highest individual fruit weight (0.58 g), with a greater number of 4-seeded fruits (5.29 percent) and fair germination potential (35.32 percent). Generally, most of the fruits were 1-seeded irrespective of size grade and sampling site. Fresh water scarification had a strong effect on germination (44.30 percent) as compared to mechanical scarification and cold stratification after 40 days of sowing. Similarly, sampling sites and fruit size grades also had a significant influence on germination. The highest germination (82.33 percent) was obtained on MS (Murashige and Skoog) agar-solidified medium as compared to Woody Plant Medium (WPM) (69.22 percent). Seedlings from all the media were transferred to ex vitro conditions in the greenhouse, with the highest survival (28.6 percent) achieved by seedlings previously raised on MS agar-solidified medium, after 40 days. There was an association between the studied parameters of teak seeds and the sampling sites and fruit size. (author)
Reduced population size does not affect the mating strategy of a vulnerable and endemic seabird
Nava, Cristina; Neves, Verónica C.; Andris, Malvina; Dubois, Marie-Pierre; Jarne, Philippe; Bolton, Mark; Bried, Joël
2017-12-01
Bottleneck episodes may occur in small and isolated animal populations, which may result in decreased genetic diversity and increased inbreeding, but also in mating strategy adjustment. This was evaluated in the vulnerable and socially monogamous Monteiro's Storm-petrel Hydrobates monteiroi, a seabird endemic to the Azores archipelago which has suffered a dramatic population decline since the XVth century. To do this, we conducted a genetic study (18 microsatellite markers) in the population from Praia islet, which has been monitored over 16 years. We found no evidence that a genetic bottleneck was associated with this demographic decline. Monteiro's Storm-petrels paired randomly with respect to genetic relatedness and body measurements. Pair fecundity was unrelated to genetic relatedness between partners. We detected only two cases of extra-pair parentage associated with an extra-pair copulation (out of 71 offspring). Unsuccessful pairs were most likely to divorce the next year, but genetic relatedness between pair mates and pair breeding experience did not influence divorce. Divorce enabled individuals to improve their reproductive performances after re-mating only when the new partner was experienced. Re-pairing with an experienced partner occurred more frequently when divorcees changed nest than when they retained their nest. This study shows that even in strongly reduced populations, genetic diversity can be maintained, inbreeding does not necessarily occur, and random pairing is not risky in terms of pair lifetime reproductive success. Given, however, that we found no clear phenotypic mate choice criteria, the part played by non-morphological traits should be assessed more accurately in order to better understand seabird mating strategies.
Neeson, Thomas M; Van Rijn, Itai; Mandelik, Yael
2013-07-01
Ecologists and paleontologists often rely on higher taxon surrogates instead of complete inventories of biological diversity. Despite their intrinsic appeal, the performance of these surrogates has been markedly inconsistent across empirical studies, to the extent that there is no consensus on appropriate taxonomic resolution (i.e., whether genus- or family-level categories are more appropriate) or their overall usefulness. A framework linking the reliability of higher taxon surrogates to biogeographic setting would allow for the interpretation of previously published work and provide some needed guidance regarding the actual application of these surrogates in biodiversity assessments, conservation planning, and the interpretation of the fossil record. We developed a mathematical model to show how taxonomic diversity, community structure, and sampling effort together affect three measures of higher taxon performance: the correlation between species and higher taxon richness, the relative shapes and asymptotes of species and higher taxon accumulation curves, and the efficiency of higher taxa in a complementarity-based reserve-selection algorithm. In our model, higher taxon surrogates performed well in communities in which a few common species were most abundant, and less well in communities with many equally abundant species. Furthermore, higher taxon surrogates performed well when there was a small mean and variance in the number of species per higher taxa. We also show that empirically measured species-higher-taxon correlations can be partly spurious (i.e., a mathematical artifact), except when the species accumulation curve has reached an asymptote. This particular result is of considerable practical interest given the widespread use of rapid survey methods in biodiversity assessment and the application of higher taxon methods to taxa in which species accumulation curves rarely reach an asymptote, e.g., insects.
International Nuclear Information System (INIS)
Keyser, R.M.; Twomey, T.R.; Sangsingkeow, P.
1998-01-01
For 25 yr, coaxial germanium detector performance has been specified using the methods and values specified in Ref. 1. These specifications are the full-width at half-maximum (FWHM), FW.1M, FW.02M, peak-to-Compton ratio, and relative efficiency. All of these measurements are made with a 60Co source 25 cm from the cryostat endcap and centered on the axis of the detector. These measurements are easy to reproduce, both because they are simple to set up and use a common source. These standard tests have been useful in guiding the user to an appropriate detector choice for the intended measurement. Most users of germanium gamma-ray detectors do not make measurements in this simple geometry. Germanium detector manufacturers have worked over the years to make detectors with better resolution, better peak-to-Compton ratios, and higher efficiency, but all based on measurements using the IEEE standard. Advances in germanium crystal growth techniques have made it relatively easy to provide detector elements of different shapes and sizes. Many of these different shapes and sizes can give better results for a specific application than other shapes and sizes. But, the detector specifications must be changed to correspond to the actual application. Both the expected values and the actual parameters to be specified should be changed. In many cases, detection efficiency, peak shape, and minimum detectable limit for a particular detector/sample combination are valuable specifications of detector performance. For other situations, other parameters are important, such as peak shape as a function of count rate. In this work, different sample geometries were considered. The results show the variation in efficiency with energy for all of these sample and detector geometries. The point source at 25 cm from the endcap measurement allows the results to be compared with the currently given IEEE criteria. The best sample/detector configuration for a specific measurement requires more and
Directory of Open Access Journals (Sweden)
J. Rodríguez-Castelán
2017-01-01
Full Text Available Ovarian failure is related to dyslipidemias and inflammation, as well as to hypertrophy and dysfunction of the visceral adipose tissue (VAT). Although hypothyroidism has been associated with obesity, dyslipidemias, and inflammation in humans and animals, its influence on the characteristics of ovarian follicles in adulthood is scarcely known. Control and hypothyroid rabbits were used to analyze the ovarian follicles, expression of aromatase in the ovary, serum concentration of lipids, leptin, and uric acid, size of adipocytes, and infiltration of macrophages in the periovarian VAT. Hypothyroidism did not affect the percentage of functional or atretic follicles. However, it reduced the size of primary, secondary, and tertiary follicles considered as large and the expression of aromatase in the ovary. This effect was associated with high serum concentrations of total cholesterol and low-density lipoprotein cholesterol (LDL-C). In addition, hypothyroidism induced hypertrophy of adipocytes and a major infiltration of CD68+ macrophages into the periovarian VAT. Our results suggest that the reduced size of ovarian follicles promoted by hypothyroidism could be associated with dyslipidemias, hypertrophy, and inflammation of the periovarian VAT. Present findings may be useful to understand the influence of hypothyroidism in the ovary function in adulthood.
Rodríguez-Castelán, J; Méndez-Tepepa, M; Carrillo-Portillo, Y; Anaya-Hernández, A; Rodríguez-Antolín, J; Zambrano, E; Castelán, F; Cuevas-Romero, E
2017-01-01
Ovarian failure is related to dyslipidemias and inflammation, as well as to hypertrophy and dysfunction of the visceral adipose tissue (VAT). Although hypothyroidism has been associated with obesity, dyslipidemias, and inflammation in humans and animals, its influence on the characteristics of ovarian follicles in adulthood is scarcely known. Control and hypothyroid rabbits were used to analyze the ovarian follicles, expression of aromatase in the ovary, serum concentration of lipids, leptin, and uric acid, size of adipocytes, and infiltration of macrophages in the periovarian VAT. Hypothyroidism did not affect the percentage of functional or atretic follicles. However, it reduced the size of primary, secondary, and tertiary follicles considered as large and the expression of aromatase in the ovary. This effect was associated with high serum concentrations of total cholesterol and low-density lipoprotein cholesterol (LDL-C). In addition, hypothyroidism induced hypertrophy of adipocytes and a major infiltration of CD68+ macrophages into the periovarian VAT. Our results suggest that the reduced size of ovarian follicles promoted by hypothyroidism could be associated with dyslipidemias, hypertrophy, and inflammation of the periovarian VAT. Present findings may be useful to understand the influence of hypothyroidism in the ovary function in adulthood.
Energy Technology Data Exchange (ETDEWEB)
Cahill, T.A.; Wilkinson, K. [Univ. of California, Davis, CA (United States); Schnell, R. [National Center for Atmospheric Research, Boulder, CO (United States)
1992-09-20
Analyses are reported for eight aerosol samples taken from the National Center for Atmospheric Research Electra typically 200 to 250 km downwind of Kuwait between May 19 and June 1, 1991. Aerosols were separated into fine (Dp < 2.5 µm) and coarse (2.5 µm < Dp < 10 µm) particles for optical, gravimetric, X-ray and nuclear analyses, yielding information on the morphology, mass, and composition of aerosols downwind of Kuwait. The mass of coarse aerosols ranged between 60 and 1971 µg/m³ and, while dominated by soil derived aerosols, contained considerable content of sulfates and salt (NaCl) and soot in the form of fluffy agglomerates. The mass of fine aerosols varied between 70 and 785 µg/m³, of which about 70% was accounted for via compositional analyses performed in vacuum. While most components varied greatly from flight to flight, organic matter and fine soils each accounted for about 1/4 of the fine mass, while salt and sulfates contributed about 10% and 7%, respectively. The Cl/S ratios were remarkably constant, 2.4 ± 1.2 for coarse particles and 2.0 ± 0.2 for fine particles, with one flight deleted in each case. Vanadium, when observed, ranged from 9 to 27 ng/m³, while nickel ranged from 5 to 25 ng/m³. In fact, fine sulfates, vanadium, and nickel occurred in levels typical of Los Angeles, California, during summer 1986. The V/Ni ratio, 1.7 ± 0.4, was very similar to the ratios measured in fine particles from combusted Kuwaiti oil, 1.4 ± 0.9. Bromine, copper, zinc, and arsenic/lead were also observed at levels between 2 and 190 ng/m³. The presence of massive amounts of fine, typically alkaline soils in the Kuwaiti smoke plumes significantly modified their behavior and probably mitigated their impacts, locally and globally. 16 refs., 1 fig., 3 tabs.
Voordouw, Gerrit; Menon, Priyesh; Pinnock, Tijan; Sharma, Mohita; Shen, Yin; Venturelli, Amanda; Voordouw, Johanna; Sexton, Aoife
2016-01-01
Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for eight produced waters with high numbers (10⁵/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for five central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment and 0.036 mm/yr for 2 CPF tank bottom sludges, which had high numbers of APB (10⁶/ml) and SRB (10⁸/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads, and injected with 10 ml/day of SRB-containing medium for 256 days gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion including determination of the distribution of corrosion rates, which allows
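For reference, a small sketch of the standard ASTM G1-style weight-loss conversion that underlies corrosion rates such as those quoted above, applied to one spherical bead; the example weight loss is made up, while the steel density and the unit constant are textbook values.

```python
# Weight-loss corrosion rate for one carbon steel bead (ASTM G1-style conversion).
import math

def corrosion_rate_mm_per_yr(weight_loss_g, area_cm2, hours, density_g_cm3=7.85):
    """CR = K * W / (A * T * D), with K = 8.76e4 giving mm/yr for these units."""
    return 8.76e4 * weight_loss_g / (area_cm2 * hours * density_g_cm3)

d_cm = 0.238
area = math.pi * d_cm**2                 # surface area of one spherical bead, cm^2
# Assumed 0.8 mg lost over a 90-day incubation:
print(corrosion_rate_mm_per_yr(0.0008, area, 24 * 90))   # roughly 0.02 mm/yr
```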
Directory of Open Access Journals (Sweden)
Sebastian Wilhelm
2015-12-01
Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Hereby, two open key aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30% in comparison to measured values of 20% for natural syneresis.
Directory of Open Access Journals (Sweden)
Shinjiro Kobayashi
2014-11-01
Full Text Available Interval appendectomy (IA) for appendiceal abscesses is useful for avoiding extended surgery and preventing postoperative complications. However, IA has problems in that it takes time before an abscess is reduced in size in some cases and in that elective surgery may result in a delay in treatment in patients with a malignant tumor of the appendix. In order to rule out malignancy, we performed colonoscopy on three patients with an appendiceal abscess that did not decrease in size 5 or more days after IA. After malignancy had been ruled out by examination of the area of the appendiceal orifice, the appendiceal orifice was compressed with a colonoscope, and a catheter was inserted through the orifice. Then, drainage of pus was observed from the appendiceal orifice into the cecal lumen. Computed tomography performed 3 days after colonoscopy revealed a marked reduction in abscess size in all patients. No endoscopy-related complication was noted. Colonoscopy in patients with an appendiceal abscess may not only differentiate malignant tumors, but also accelerate reduction in abscess size.
UTP reduces infarct size and improves mice heart function after myocardial infarct via P2Y2 receptor
DEFF Research Database (Denmark)
Cohen, A; Shainberg, Asher; Hochhauser, E
2011-01-01
Pyrimidine nucleotides are signaling molecules, which activate G protein-coupled membrane receptors of the P2Y family. P2Y(2) and P2Y(4) receptors are part of the P2Y family, which is composed of 8 subtypes that have been cloned and functionally defined. We have previously found that uridine-5'-triphosphate (UTP) reduces infarct size and improves cardiac function following myocardial infarct (MI). The aim of the present study was to determine the role of P2Y(2) receptor in cardiac protection following MI using knockout (KO) mice, in vivo and wild type (WT) for controls. In both experimental groups used (WT and P2Y(2)(-/-) receptor KO mice) there were 3 subgroups: sham, MI, and MI+UTP. 24h post MI we performed echocardiography and measured infarct size using triphenyl tetrazolium chloride (TTC) staining on all mice. Fractional shortening (FS) was higher in WT UTP-treated mice than the MI group...
International Nuclear Information System (INIS)
Liu Zhang; Xu Weicheng; Fang Jianzhang; Xu Xiaoxin; Wu Shuxing; Zhu Ximiao; Chen Zehua
2012-01-01
Highlights: ► RGO/BiOI nanocomposites were synthesized by a reverse microemulsion method. ► Quantum sized BiOI nanoparticles can be obtained by this approach. ► Ascorbic acid was used as a reducing agent to reduce GO and seemed to be effective. ► RGO/BiOI presented outstanding visible-light-induced photocatalytic performance. ► Possible photocatalytic mechanism was proposed based on the experimental studies. - Abstract: Herein, a reverse microemulsion route was developed to synthesize bismuth oxyiodide (BiOI) nanocrystals and reduced graphene oxide (RGO) nanocomposites as a highly efficient photocatalyst, and both the formation of BiOI and the reduction of RGO were achieved in situ in microemulsions simultaneously at low temperature (60 °C). The uniform nanocrystal size and structure were indicated by XRD, TEM, and the reduction of GO by ascorbic acid was evidenced by FTIR, XPS, and Raman spectra techniques. The enhanced photoactivity of RGO/BiOI nanocomposites under visible light was attributed to improved light absorption and efficient charge separation and transportation.
Energy Technology Data Exchange (ETDEWEB)
Liu Zhang, E-mail: liuzhang0126@126.com [School of Chemistry and Environment, South China Normal University, Guangzhou 510006 (China); Xu Weicheng [School of Chemistry and Environment, South China Normal University, Guangzhou 510006 (China); Fang Jianzhang, E-mail: fangjzh@scnu.edu.cn [School of Chemistry and Environment, South China Normal University, Guangzhou 510006 (China); Xu Xiaoxin; Wu Shuxing; Zhu Ximiao; Chen Zehua [School of Chemistry and Environment, South China Normal University, Guangzhou 510006 (China)
2012-10-15
Highlights: ► RGO/BiOI nanocomposites were synthesized by a reverse microemulsion method. ► Quantum sized BiOI nanoparticles can be obtained by this approach. ► Ascorbic acid was used as a reducing agent to reduce GO and seemed to be effective. ► RGO/BiOI presented outstanding visible-light-induced photocatalytic performance. ► Possible photocatalytic mechanism was proposed based on the experimental studies. - Abstract: Herein, a reverse microemulsion route was developed to synthesize bismuth oxyiodide (BiOI) nanocrystals and reduced graphene oxide (RGO) nanocomposites as a highly efficient photocatalyst, and both the formation of BiOI and the reduction of RGO were achieved in situ in microemulsions simultaneously at low temperature (60 °C). The uniform nanocrystal size and structure were indicated by XRD, TEM, and the reduction of GO by ascorbic acid was evidenced by FTIR, XPS, and Raman spectra techniques. The enhanced photoactivity of RGO/BiOI nanocomposites under visible light was attributed to improved light absorption and efficient charge separation and transportation.
Major- and trace elements in grain size fractions of the Apollo-17 core of the drilled sample 74001
International Nuclear Information System (INIS)
Kraehenbuehl, U.; Gunten, H.R. von; Jost, D.; Meyer, G.; Wegmueller, F.
1980-01-01
Two layers of a drill sample were examined, one from a depth of 38 cm and the other from a depth of 58 cm. Neutron activation analysis was used for one group of elements, and radiochemical analysis for another. Over a range of grain size from 36 to 450 μm, the trace elements U, Co, and La were found to be uniformly distributed, as was iron. The top layer consistently showed a 5-8% higher content. The volatile trace elements Ge and Cd were found to be enriched in the smaller grain sizes. This contradicts previous assumptions of an enrichment of the more volatile elements in top layers owing to more rapid cooling of volcanic eruptions. (R.S.)
Directory of Open Access Journals (Sweden)
Carmen Methner
Full Text Available Stimulation of the nitric oxide (NO)-soluble guanylate cyclase (sGC)-protein kinase G (PKG) pathway confers protection against acute ischaemia/reperfusion injury, but more chronic effects in reducing post-myocardial infarction (MI) heart failure are less defined. The aim of this study was to determine not only whether the sGC stimulator riociguat reduces infarct size but also whether it protects against the development of post-MI heart failure. Mice were subjected to 30 min ischaemia via ligation of the left main coronary artery to induce MI, and either placebo or riociguat (1.2 µmol/l) was given as a bolus 5 min before and 5 min after the onset of reperfusion. After 24 hours, both late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) and 18F-FDG positron emission tomography (PET) were performed to determine infarct size. In the riociguat-treated mice, the resulting infarct size was smaller (8.5 ± 2.5% of total LV mass vs. 21.8% ± 1.7% in controls, p = 0.005) and LV systolic function analysed by MRI was better preserved (60.1% ± 3.4% of preischaemic vs. 44.2% ± 3.1% in controls, p = 0.005). After 28 days, LV systolic function by echocardiography in the treated group was still better preserved (63.5% ± 3.2% vs. 48.2% ± 2.2% in controls, p = 0.004). Taken together, mice treated acutely at the onset of reperfusion with the sGC stimulator riociguat have smaller infarct size and better long-term preservation of LV systolic function. These findings suggest that sGC stimulation during reperfusion therapy may be a powerful therapeutic strategy for preventing post-MI heart failure.
Directory of Open Access Journals (Sweden)
Michael B.C. Khoo
2013-11-01
Full Text Available The double sampling (DS) X-bar chart, one of the most widely used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X-bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X-bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X-bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X-bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.
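A hedged illustration of why the MRL is preferred to the ARL for right-skewed run-length distributions: the simulation below uses a plain Shewhart X-bar chart with known parameters, not the double sampling scheme of the paper, and the shift size and subgroup size are assumptions.

```python
# Simulated run-length distribution of a Shewhart X-bar chart under a small mean shift.
import numpy as np

rng = np.random.default_rng(7)

def run_length(shift=0.5, n=5, L=3.0):
    """Subgroups sampled until the mean exceeds the +/- L*sigma/sqrt(n) limits."""
    limit = L / np.sqrt(n)
    t = 0
    while True:
        t += 1
        xbar = rng.normal(loc=shift, scale=1.0 / np.sqrt(n))
        if abs(xbar) > limit:
            return t

rls = np.array([run_length() for _ in range(5000)])
print("ARL:", rls.mean().round(1), " MRL:", int(np.median(rls)))  # mean exceeds median
```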
Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.
2017-10-01
The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, elongated, as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedos for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation. Otherwise we derive a geometric cross-section from a measured diameter. When radar imaging is available, the diameter was measured from the apparent range depth. However, when radar imaging was not available, we used the continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections.We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we
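A hedged sketch of the two relations used in such work: the limb-to-limb Doppler bandwidth of a rotating body constrains its diameter (an equatorial view gives a lower limit), and the radar albedo is the radar cross-section divided by the geometric cross-section. The wavelength corresponds to the Arecibo S-band system; the bandwidth, spin period, and cross-section values are purely illustrative.

```python
# Diameter from CW bandwidth and OC radar albedo from the geometric cross-section.
import math

WAVELENGTH = 0.126            # Arecibo S-band (2380 MHz), m

def diameter_from_bandwidth(bandwidth_hz, period_s, sub_radar_lat_deg=0.0):
    """D = B * lambda * P / (4 * pi * cos(delta)); delta = 0 gives a lower limit."""
    return bandwidth_hz * WAVELENGTH * period_s / (
        4 * math.pi * math.cos(math.radians(sub_radar_lat_deg)))

def radar_albedo(oc_cross_section_km2, diameter_km):
    """OC radar albedo = sigma_OC / (pi * D^2 / 4)."""
    return oc_cross_section_km2 / (math.pi * diameter_km**2 / 4)

D = diameter_from_bandwidth(bandwidth_hz=5.0, period_s=3 * 3600)   # ~5 Hz, 3 h spin
print(f"diameter: {D:.0f} m, albedo: {radar_albedo(0.02, D / 1000):.2f}")
```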
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
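A hedged sketch of the two safeguards discussed: Bonferroni correction of the item-fit alpha level and an algebraic adjustment that rescales a chi-square fit statistic to a nominal sample size. The simple n_target/n scaling used here is an assumption for illustration; the exact RUMM implementation may differ.

```python
# Item-fit p-value with an assumed algebraic sample size adjustment plus Bonferroni alpha.
from scipy import stats

def adjusted_item_fit_p(chi2, df, n, n_target=500):
    """Rescale chi-square to a nominal sample size (assumed n_target/n scaling)."""
    chi2_adj = chi2 * (n_target / n) if n > n_target else chi2
    return stats.chi2.sf(chi2_adj, df)

n_items, alpha = 25, 0.05
bonferroni_alpha = alpha / n_items            # per-item alpha after Bonferroni correction

p = adjusted_item_fit_p(chi2=18.4, df=8, n=2500)
print(f"adjusted p = {p:.3f}, flag misfit: {p < bonferroni_alpha}")
```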
Directory of Open Access Journals (Sweden)
Daniel Vasiliu
Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
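The PED penalty itself is specific to the paper; as a hedged stand-in for the ranking-plus-simulation idea, the sketch below ranks genes by coefficient magnitude from an L2-penalised logistic classifier fitted to an ultra-low-n expression matrix and uses a single label permutation to gauge a chance-level threshold. All data and parameter choices are synthetic assumptions.

```python
# Rank genes by penalised-classifier coefficients and compare against a permutation null.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_samples, n_genes = 8, 2000                    # ultra-low n, high dimensionality
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = rng.normal(size=(n_samples, n_genes))
X[y == 1, :20] += 1.5                           # 20 truly perturbed genes

clf = LogisticRegression(penalty="l2", C=0.1, max_iter=5000).fit(X, y)
rank = np.argsort(-np.abs(clf.coef_[0]))        # genes ordered by importance

# Crude null reference: refit once with permuted labels to gauge chance-level weights.
null = LogisticRegression(penalty="l2", C=0.1, max_iter=5000).fit(X, rng.permutation(y))
threshold = np.abs(null.coef_[0]).max()
n_selected = int((np.abs(clf.coef_[0]) > threshold).sum())
print("genes ranked; selected above permutation threshold:", n_selected)
```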
Directory of Open Access Journals (Sweden)
Sunil Kumar C
2014-01-01
Full Text Available With the number of students growing each year, there is a strong need for automated systems capable of evaluating descriptive answers. Unfortunately, there are not many systems capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish automatic evaluation and scoring of descriptive answers. Our experiments are designed around our primary goal of identifying the optimum training sample size so as to obtain optimum automatic scoring. Besides a technical overview and the design of the experiments, the paper also covers the challenges and benefits of the system. We also discuss interdisciplinary areas for future research on this topic.
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.
Energy Technology Data Exchange (ETDEWEB)
Aguado, Andrea; Galán, María; Zhenyukh, Olha; Wiggers, Giulia A.; Roque, Fernanda R. [Departamento de Farmacología, Facultad de Medicina, Universidad Autónoma de Madrid, Instituto de Investigación Hospital Universitario La Paz (IdiPAZ), 28029, Madrid (Spain); Redondo, Santiago [Departamento de Farmacología, Facultad de Medicina, Universidad Complutense, 28040, Madrid (Spain); Peçanha, Franck [Departamento de Farmacología, Facultad de Medicina, Universidad Autónoma de Madrid, Instituto de Investigación Hospital Universitario La Paz (IdiPAZ), 28029, Madrid (Spain); Martín, Angela [Departamento de Bioquímica, Fisiología y Genética Molecular, Universidad Rey Juan Carlos, 28922, Alcorcón (Spain); Fortuño, Ana [Área de Ciencias Cardiovasculares, Centro de Investigación Médica Aplicada, Universidad de Navarra, 31008, Pamplona (Spain); Cachofeiro, Victoria [Departamento de Fisiología, Facultad de Medicina, Universidad Complutense, 28040, Madrid (Spain); Tejerina, Teresa [Departamento de Farmacología, Facultad de Medicina, Universidad Complutense, 28040, Madrid (Spain); Salaices, Mercedes, E-mail: mercedes.salaices@uam.es [Departamento de Farmacología, Facultad de Medicina, Universidad Autónoma de Madrid, Instituto de Investigación Hospital Universitario La Paz (IdiPAZ), 28029, Madrid (Spain); and others
2013-04-15
MAPK activation, oxidative stress and COX-2 expression. ► Inhibition of MAPK reduces HgCl2-induced oxidative stress and COX-2 expression. ► Inhibition of MAPK, oxidative stress and COX-2 restores the altered cell proliferation and size.
Bekele, Yonas; Graham, Rebecka Lantto; Soeria-Atmadja, Sandra; Nasi, Aikaterini; Zazzi, Maurizio; Vicenti, Ilaria; Naver, Lars; Nilsson, Anna; Chiodi, Francesca
2017-01-01
During anti-retroviral therapy (ART) HIV-1 persists in cellular reservoirs, mostly represented by CD4+ memory T cells. Several approaches are currently being undertaken to develop a cure for HIV-1 infection through elimination (or reduction) of these reservoirs. Few studies have so far been conducted to assess the possibility of reducing the size of HIV-1 reservoirs through vaccination in virologically controlled HIV-1-infected children. We recently conducted a vaccination study with a combined hepatitis A virus (HAV) and hepatitis B virus (HBV) vaccine in 22 HIV-1-infected children. We assessed the size of the virus reservoir, measured as total HIV-1 DNA copies in blood cells, pre- and postvaccination. In addition, we investigated by immunostaining whether the frequencies of CD4+ and CD8+ T cells and parameters of immune activation and proliferation on these cells were modulated by vaccination. At 1 month from the last vaccination dose, we found that 20 out of 22 children mounted a serological response to HBV; a majority of children had antibodies against HAV at baseline. The number of HIV-1 DNA copies in blood at 1 month postvaccination was reduced in comparison to baseline although this reduction was not statistically significant. A significant reduction of HIV-1 DNA copies in blood following vaccination was found in 12 children. The frequencies of CD4+ (naïve, effector memory) and CD8+ (central memory) T-cell subpopulations changed following vaccinations and a reduction in the activation and proliferation pattern of these cells was also noticed. Multivariate linear regression analysis revealed that the frequency of CD8+ effector memory T cells prior to vaccination was strongly predictive of the reduction of HIV-1 DNA copies in blood following vaccination of the 22 HIV-1-infected children. The results of this study suggest a beneficial effect of vaccination to reduce the size of virus reservoir in HIV-1-infected children receiving ART. A reduced frequency of
Directory of Open Access Journals (Sweden)
Yonas Bekele
2018-01-01
Full Text Available During anti-retroviral therapy (ART) HIV-1 persists in cellular reservoirs, mostly represented by CD4+ memory T cells. Several approaches are currently being undertaken to develop a cure for HIV-1 infection through elimination (or reduction) of these reservoirs. Few studies have so far been conducted to assess the possibility of reducing the size of HIV-1 reservoirs through vaccination in virologically controlled HIV-1-infected children. We recently conducted a vaccination study with a combined hepatitis A virus (HAV) and hepatitis B virus (HBV) vaccine in 22 HIV-1-infected children. We assessed the size of the virus reservoir, measured as total HIV-1 DNA copies in blood cells, pre- and postvaccination. In addition, we investigated by immunostaining whether the frequencies of CD4+ and CD8+ T cells and parameters of immune activation and proliferation on these cells were modulated by vaccination. At 1 month from the last vaccination dose, we found that 20 out of 22 children mounted a serological response to HBV; a majority of children had antibodies against HAV at baseline. The number of HIV-1 DNA copies in blood at 1 month postvaccination was reduced in comparison to baseline although this reduction was not statistically significant. A significant reduction of HIV-1 DNA copies in blood following vaccination was found in 12 children. The frequencies of CD4+ (naïve, effector memory) and CD8+ (central memory) T-cell subpopulations changed following vaccinations and a reduction in the activation and proliferation pattern of these cells was also noticed. Multivariate linear regression analysis revealed that the frequency of CD8+ effector memory T cells prior to vaccination was strongly predictive of the reduction of HIV-1 DNA copies in blood following vaccination of the 22 HIV-1-infected children. The results of this study suggest a beneficial effect of vaccination to reduce the size of virus reservoir in HIV-1-infected children receiving ART. A reduced
Bekele, Yonas; Graham, Rebecka Lantto; Soeria-Atmadja, Sandra; Nasi, Aikaterini; Zazzi, Maurizio; Vicenti, Ilaria; Naver, Lars; Nilsson, Anna; Chiodi, Francesca
2018-01-01
During anti-retroviral therapy (ART) HIV-1 persists in cellular reservoirs, mostly represented by CD4+ memory T cells. Several approaches are currently being undertaken to develop a cure for HIV-1 infection through elimination (or reduction) of these reservoirs. Few studies have so far been conducted to assess the possibility of reducing the size of HIV-1 reservoirs through vaccination in virologically controlled HIV-1-infected children. We recently conducted a vaccination study with a combined hepatitis A virus (HAV) and hepatitis B virus (HBV) vaccine in 22 HIV-1-infected children. We assessed the size of the virus reservoir, measured as total HIV-1 DNA copies in blood cells, pre- and postvaccination. In addition, we investigated by immunostaining whether the frequencies of CD4+ and CD8+ T cells and parameters of immune activation and proliferation on these cells were modulated by vaccination. At 1 month from the last vaccination dose, we found that 20 out of 22 children mounted a serological response to HBV; a majority of children had antibodies against HAV at baseline. The number of HIV-1 DNA copies in blood at 1 month postvaccination was reduced in comparison to baseline although this reduction was not statistically significant. A significant reduction of HIV-1 DNA copies in blood following vaccination was found in 12 children. The frequencies of CD4+ (naïve, effector memory) and CD8+ (central memory) T-cell subpopulations changed following vaccinations and a reduction in the activation and proliferation pattern of these cells was also noticed. Multivariate linear regression analysis revealed that the frequency of CD8+ effector memory T cells prior to vaccination was strongly predictive of the reduction of HIV-1 DNA copies in blood following vaccination of the 22 HIV-1-infected children. The results of this study suggest a beneficial effect of vaccination to reduce the size of virus reservoir in HIV-1-infected children receiving ART. A reduced frequency of
Energy Technology Data Exchange (ETDEWEB)
Water, Tara A. van de, E-mail: t.a.van.de.water@rt.umcg.nl [Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen (Netherlands); Lomax, Antony J. [Centre for Proton Therapy, Paul Scherrer Institute, Villigen-PSI (Switzerland); Bijl, Hendrik P.; Schilstra, Cornelis [Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen (Netherlands); Hug, Eugen B. [Centre for Proton Therapy, Paul Scherrer Institute, Villigen-PSI (Switzerland); Langendijk, Johannes A. [Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen (Netherlands)
2012-02-01
Purpose: To investigate whether intensity-modulated proton therapy with a reduced spot size (rsIMPT) could further reduce the parotid and submandibular gland dose compared with previously calculated IMPT plans with a larger spot size. In addition, it was investigated whether the obtained dose reductions would theoretically translate into a reduction of normal tissue complication probabilities (NTCPs). Methods: Ten patients with N0 oropharyngeal cancer were included in a comparative treatment planning study. Both types of plan simultaneously delivered 70 Gy to the boost planning target volume (PTV) and 54 Gy to the elective nodal PTV. IMPT and rsIMPT used identical three-field beam arrangements. In the IMPT plans, the parotid and submandibular salivary glands were spared as much as possible. rsIMPT plans used the same dose-volume objectives for the parotid glands as the IMPT plans, whereas the objectives for the submandibular glands were tightened further. NTCPs were calculated for salivary dysfunction and xerostomia. Results: Target coverage was similar for both IMPT techniques, whereas rsIMPT clearly improved target conformity. The mean doses in the parotid glands and submandibular glands were significantly lower for three-field rsIMPT (14.7 Gy and 46.9 Gy, respectively) than for three-field IMPT (16.8 Gy and 54.6 Gy, respectively). Hence, rsIMPT significantly reduced the NTCP of patient-rated xerostomia and parotid and contralateral submandibular salivary flow dysfunction (27%, 17%, and 43%, respectively) compared with IMPT (39%, 20%, and 79%, respectively). In addition, mean dose values in the sublingual glands, the soft palate and oral cavity were also decreased. The dose and NTCP reductions obtained varied per patient. Conclusions: rsIMPT improved sparing of the salivary glands and reduced NTCP for xerostomia and parotid and submandibular salivary dysfunction, while maintaining similar target coverage. It is expected that rsIMPT improves quality
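The abstract does not state which NTCP model was used. Purely as an illustration of the kind of calculation involved, the sketch below evaluates a generic Lyman-type NTCP curve from a mean organ dose; the TD50 and m parameters are illustrative placeholders, not values from the study.

```python
# Illustrative Lyman-type NTCP curve: NTCP = Phi((D - TD50) / (m * TD50)),
# evaluated with a mean-dose surrogate and made-up parameters.
from math import erf, sqrt

def ntcp_lyman(mean_dose_gy: float, td50_gy: float, m: float) -> float:
    """Normal-tissue complication probability for a given mean dose."""
    t = (mean_dose_gy - td50_gy) / (m * td50_gy)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))   # standard normal CDF

# Hypothetical parotid-gland parameters (illustrative only)
td50, m = 39.9, 0.40
for dose in (14.7, 16.8):   # mean parotid doses reported for rsIMPT vs. IMPT
    print(f"mean dose {dose:4.1f} Gy -> NTCP {ntcp_lyman(dose, td50, m):.2%}")
```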
Energy Technology Data Exchange (ETDEWEB)
Stretz, L.A.; Bautista, R.G.
1976-01-01
The high-temperature heat content of liquid praseodymium was measured experimentally by the levitation calorimetry technique. The samples, ranging in size from 0.5 to 1.5 g, were simultaneously levitated and heated by a radiofrequency generator in an argon-helium mixture prior to being dropped into a conventional copper-block drop calorimeter. Corrections were made for the convection and radiation losses during the fall of the sample from the levitation chamber into the calorimeter. The praseodymium data, from 1460 to 2289 K, were fitted by the following equation, where the indicated errors represent the average deviation of the experimental values from the values predicted by the equation: H_T - H_298.15 = (41.57 ± 0.29)(T - 1208) + (41733 ± 197) J/mol.
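The fitted equation can be evaluated directly. The short function below is a convenience sketch (not from the paper) that returns the enthalpy increment, using the central fit coefficients, for a temperature inside the reported 1460-2289 K range.

```python
# Enthalpy increment of liquid praseodymium from the fitted equation
# H_T - H_298.15 = 41.57*(T - 1208) + 41733  [J/mol], valid 1460-2289 K.

def enthalpy_increment_pr(temperature_k: float) -> float:
    """Return H_T - H_298.15 in J/mol for liquid Pr at temperature_k (kelvin)."""
    if not 1460.0 <= temperature_k <= 2289.0:
        raise ValueError("fit is only reported for 1460-2289 K")
    return 41.57 * (temperature_k - 1208.0) + 41733.0

print(enthalpy_increment_pr(1800.0))   # about 66.3 kJ/mol
```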
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
... the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation being positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noise in possibly realistic ranges. In all cases examined, the proportionator...
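A minimal sketch of the sampling-and-estimation idea (not the authors' implementation): fields are drawn with probability proportional to an automatically measured weight, structure is counted only in the sampled fields, and the counts are inverse-probability weighted (here a simple with-replacement Hansen-Hurwitz estimator) to give an unbiased estimate of the total. All numbers below are simulated.

```python
# Toy proportionator-style estimate: sample fields with probability
# proportional to a weight (e.g. an automatically measured colour signal),
# count structure only in the sampled fields, and correct by 1/probability.
import numpy as np

rng = np.random.default_rng(1)

n_fields = 400
weight = rng.gamma(shape=2.0, scale=1.0, size=n_fields)   # per-field weight
true_count = rng.poisson(0.5 * weight)                     # structure roughly follows weight
n_sampled = 20

prob = weight / weight.sum()                               # sampling probabilities
sampled = rng.choice(n_fields, size=n_sampled, replace=True, p=prob)

# Hansen-Hurwitz estimator of the total count: mean of y_i / p_i over draws
estimate = np.mean(true_count[sampled] / prob[sampled])
print("true total:", true_count.sum(), " estimate:", round(estimate, 1))
```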
Honda, Tsuyoshi; Fujimoto, Kazuteru; Miyao, Yuji; Koga, Hidenobu; Hirata, Yoshihiro
2012-09-01
The aim of this study was to investigate the risk factors for access site-related complications after transradial coronary angiography (CAG) or percutaneous coronary intervention (PCI). Transradial PCI has been shown to reduce access site-related bleeding complications compared with procedures performed through a femoral approach. Although previous studies focused on risk factors for access site-related complications after a transfemoral approach, or after transfemoral and transradial approaches combined, it is uncertain which factors affect vascular complications after transradial catheterization. We enrolled 500 consecutive patients who underwent transradial CAG or PCI and determined the incidence of and risk factors for access site-related complications such as radial artery occlusion and bleeding complications. Age, sheath size, heparin dose and the frequency of PCI (vs. CAG) were significantly greater in patients with bleeding complications than in those without, whereas body mass index (BMI) was significantly lower. Sheath size was significantly larger, and the frequency of statin use significantly lower, in patients with radial artery occlusion than in those without. Multiple logistic regression analysis revealed that sheath size [odds ratio (OR) 5.5; P ...] ... strategy that could prevent radial artery occlusion after transradial procedures.
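As a generic illustration of this analysis type (the data, coefficients and base rates below are synthetic, not the study's dataset), the sketch fits a multiple logistic regression for a binary complication outcome and converts the fitted coefficients to odds ratios.

```python
# Synthetic example of estimating odds ratios for a binary complication
# outcome with multiple logistic regression (statsmodels).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500

sheath_size_fr = rng.choice([5, 6, 7], size=n)   # sheath size in French, made up
bmi = rng.normal(24, 3, n)

# Simulate outcomes with an assumed effect of sheath size and BMI
logit = -3.0 + 1.0 * (sheath_size_fr - 5) - 0.05 * (bmi - 24)
complication = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([sheath_size_fr, bmi]))
fit = sm.Logit(complication, X).fit(disp=False)

odds_ratios = np.exp(fit.params)
print(odds_ratios)   # per-unit odds ratios (intercept, sheath size, BMI)
```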
Guillot, Jacques; Bensignor, Emmanuel; Jankowski, François; Seewald, Wolfgang; Chermette, René; Steffan, Jean
2003-06-01
The objective of this study was to compare the efficacy of oral ketoconazole and terbinafine for reducing population sizes of Malassezia yeasts on canine skin. Twenty-one Basset Hounds were randomised into three groups of seven according to their Malassezia populations. Dogs in the first group were treated with oral ketoconazole (Ketofungol 200 mg, Janssen-Cilag) at 10 mg/kg every 24 h, with food, for 3 weeks. Dogs in the second group were treated with oral terbinafine (Lamisil 250 mg, Novartis) at 30 mg/kg every 24 h, with food, for 3 weeks. The seven remaining dogs were used as controls.
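An allocation of this kind (21 animals split into three groups of seven, balanced on baseline Malassezia counts) can be mimicked with a simple ranked-block randomization. The sketch below is a generic illustration with made-up baseline counts, not the authors' procedure.

```python
# Ranked-block randomization: sort subjects by baseline yeast count, then
# randomly permute the three treatment labels within consecutive blocks of three.
import random

random.seed(0)
treatments = ["ketoconazole", "terbinafine", "control"]

# Hypothetical baseline Malassezia counts (CFU per sample) for 21 dogs
dogs = {f"dog{i:02d}": random.randint(10, 500) for i in range(1, 22)}

ranked = sorted(dogs, key=dogs.get, reverse=True)       # highest counts first
allocation = {}
for block_start in range(0, len(ranked), 3):
    block = ranked[block_start:block_start + 3]
    labels = random.sample(treatments, k=len(block))    # shuffle within the block
    allocation.update(zip(block, labels))

for dog in sorted(allocation):
    print(dog, "->", allocation[dog])
```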