WorldWideScience

Sample records for stratification sample size

  1. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  2. Effect of sample stratification on dairy GWAS results

    Directory of Open Access Journals (Sweden)

    Ma Li

    2012-10-01

    Full Text Available Abstract Background Artificial insemination and genetic selection are major factors contributing to population stratification in dairy cattle. In this study, we analyzed the effect of sample stratification and the effect of stratification correction on results of a dairy genome-wide association study (GWAS). Three methods for stratification correction were used: the efficient mixed-model association expedited (EMMAX) method accounting for correlation among all individuals, a generalized least squares (GLS) method based on half-sib intraclass correlation, and a principal component analysis (PCA) approach. Results Historical pedigree data revealed that the 1,654 contemporary cows in the GWAS were all related when traced through approximately 10–15 generations of ancestors. Genome and phenotype stratifications had a striking overlap with the half-sib structure. A large elite half-sib family of cows contributed to the detection of favorable alleles that had low frequencies in the general population and high frequencies in the elite cows and contributed to the detection of X chromosome effects. All three methods for stratification correction reduced the number of significant effects. The EMMAX method had the most severe reduction in the number of significant effects, and the PCA method using 20 principal components and GLS had similar significance levels. Removal of the elite cows from the analysis without using stratification correction removed many effects that were also removed by the three methods for stratification correction, indicating that stratification correction could have removed some true effects due to the elite cows. SNP effects with good consensus between different methods and effect size distributions from USDA’s Holstein genomic evaluation included the DGAT1-NIBP region of BTA14 for production traits, a SNP 45 kb upstream from PIGY on BTA6 and two SNPs in NIBP on BTA14 for protein percentage. However, most of these consensus effects had
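
    A minimal sketch of a PCA-style stratification adjustment of the kind mentioned above, using synthetic genotype and phenotype data; this illustrates the general principal-component approach only, not the authors' pipeline, and all data and the 20-component choice are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic data: 200 individuals x 500 SNPs coded 0/1/2, plus a phenotype.
      n_ind, n_snp = 200, 500
      genotypes = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
      phenotype = rng.normal(size=n_ind)

      # Standardize each SNP column and take the leading principal components.
      G = (genotypes - genotypes.mean(axis=0)) / (genotypes.std(axis=0) + 1e-12)
      U, S, _ = np.linalg.svd(G, full_matrices=False)
      pcs = U[:, :20] * S[:20]                    # individual-level PC scores

      # Test each SNP with the PCs as covariates (ordinary least squares).
      covariates = np.column_stack([np.ones(n_ind), pcs])
      for j in range(3):                          # first few SNPs, for illustration
          X = np.column_stack([covariates, G[:, j]])
          beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
          resid = phenotype - X @ beta
          sigma2 = resid @ resid / (n_ind - X.shape[1])
          se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[-1, -1])
          print(f"SNP {j}: beta = {beta[-1]:+.3f}, t = {beta[-1] / se:+.2f}")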

  3. Ionic Size Effects: Generalized Boltzmann Distributions, Counterion Stratification, and Modified Debye Length.

    Science.gov (United States)

    Liu, Bo; Liu, Pei; Xu, Zhenli; Zhou, Shenggao

    2013-10-01

    Near a charged surface, counterions of different valences and sizes cluster; and their concentration profiles stratify. At a distance from such a surface larger than the Debye length, the electric field is screened by counterions. Recent studies by a variational mean-field approach that includes ionic size effects and by Monte Carlo simulations both suggest that the counterion stratification is determined by the ionic valence-to-volume ratios. Central in the mean-field approach is a free-energy functional of ionic concentrations in which the ionic size effects are included through the entropic effect of solvent molecules. The corresponding equilibrium conditions define the generalized Boltzmann distributions relating the ionic concentrations to the electrostatic potential. This paper presents a detailed analysis and numerical calculations of such a free-energy functional to understand the dependence of the ionic charge density on the electrostatic potential through the generalized Boltzmann distributions, the role of ionic valence-to-volume ratios in the counterion stratification, and the modification of Debye length due to the effect of ionic sizes.

  4. Principal Stratification in sample selection problems with non normal error terms

    DEFF Research Database (Denmark)

    Rocci, Roberto; Mellace, Giovanni

    The aim of the paper is to relax distributional assumptions on the error terms, often imposed in parametric sample selection models to estimate causal effects, when plausible exclusion restrictions are not available. Within the principal stratification framework, we approximate the true distribut...... an application to the Job Corps training program....

  5. TIMSS 2011 User Guide for the International Database. Supplement 4: TIMSS 2011 Sampling Stratification Information

    Science.gov (United States)

    Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed.

    2013-01-01

    This supplement contains documentation on the explicit and implicit stratification variables included in the TIMSS 2011 data files. The explicit strata are smaller sampling frames, created from the national sampling frames, from which national samples of schools were drawn. The implicit strata are nested within the explicit strata, and were used…

  6. Effect of optimum stratification on sampling with varying probabilities under proportional allocation

    Directory of Open Access Journals (Sweden)

    Syed Ejaz Husain Rizvi

    2007-10-01

    Full Text Available The problem of optimum stratification on an auxiliary variable when the units from different strata are selected with probability proportional to the value of the auxiliary variable (PPSWR) was considered by Singh (1975) for the univariate case. In this paper we have extended the same problem, for proportional allocation, when two variates are under study. A cum. ∛R3(x) rule for obtaining approximately optimum strata boundaries has been provided. It has been shown theoretically as well as empirically that the use of stratification has an inverse effect on the relative efficiency of PPSWR as compared to the unstratified PPSWR method when proportional allocation is envisaged. Further comparison showed that with an increase in the number of strata, stratified simple random sampling becomes as efficient as PPSWR.

  7. Correction of population stratification in large multi-ethnic association studies.

    Directory of Open Access Journals (Sweden)

    David Serre

    2008-01-01

    Full Text Available The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Our results highlight the importance of carefully addressing population stratification and of carefully "cleaning" the sample prior to analyses to obtain stronger signals of association and to avoid spurious results.

  8. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

  9. Fruit size and sampling sites affect on dormancy, viability and germination of teak (Tectona grandis L.) seeds

    International Nuclear Information System (INIS)

    Akram, M.; Aftab, F.

    2016-01-01

    In the present study, fruits (drupes) were collected from Changa Manga Forest Plus Trees (CMF-PT), Changa Manga Forest Teak Stand (CMF-TS) and Punjab University Botanical Gardens (PUBG) and categorized into very large (≥17 mm dia.), large (12-16 mm dia.), medium (9-11 mm dia.) or small (6-8 mm dia.) fruit size grades. Fresh water as well as mechanical scarification and stratification were tested for breaking seed dormancy. Viability status of seeds was estimated by cutting test, X-rays and in vitro seed germination. Out of 2595 fruits from CMF-PT, 500 fruits were of very large grade. This fruit category also had the highest individual fruit weight (0.58 g) with a greater number of 4-seeded fruits (5.29 percent) and fair germination potential (35.32 percent). Generally, most of the fruits were 1-seeded irrespective of size grades and sampling sites. Fresh water scarification had a strong effect on germination (44.30 percent) as compared to mechanical scarification and cold stratification after 40 days of sowing. Similarly, sampling sites and fruit size grades also had a significant influence on germination. The highest germination (82.33 percent) was obtained on MS (Murashige and Skoog) agar-solidified medium as compared to Woody Plant Medium (WPM) (69.22 percent). Seedlings from all the media were transferred to ex vitro conditions in the greenhouse, and the highest survival (28.6 percent) after 40 days was achieved by seedlings previously raised on MS agar-solidified medium. There was an association between the studied parameters of teak seeds and the sampling sites and fruit size. (author)

  10. Stratification in Business and Agriculture Surveys with R

    Directory of Open Access Journals (Sweden)

    Marco Ballin

    2016-06-01

    Full Text Available Usually sample surveys on enterprises and farms adopt a one-stage stratified sampling design. In practice the sampling frame is divided into non-overlapping strata and simple random sampling is carried out independently in each stratum. Stratification allows for reduction of the sampling error and makes it possible to derive accurate estimates. Stratified sampling requires a number of strictly related decisions: (i) how to stratify the population and how many strata to consider; (ii) the size of the whole sample and its partitioning among the strata (the so-called allocation). This paper will deal mainly with problem (i) and will show how to tackle it in the R environment using packages already available on CRAN.

  11. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  12. Optimum strata boundaries and sample sizes in health surveys using auxiliary variables.

    Science.gov (United States)

    Reddy, Karuna Garan; Khan, Mohammad G M; Khan, Sabiha

    2018-01-01

    Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., does not generally maximize the precision of the estimates of the variables of interest. Thus, one has to look for an efficient stratification design that divides the whole population into homogeneous strata and achieves higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers the readily available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated as a Mathematical Programming Problem (MPP) that seeks minimization of the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparison with other methods available in the literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results.
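
    For reference, the Neyman allocation underlying the MPP above can be sketched as follows; the stratum counts and standard deviations are hypothetical placeholders, not values from the survey:

      import numpy as np

      # Hypothetical strata: population counts and standard deviations of the
      # (auxiliary) stratification variable.
      N_h = np.array([12000, 8000, 5000, 2000])   # stratum population sizes
      S_h = np.array([1.1, 1.6, 2.4, 3.9])        # stratum standard deviations
      n_total = 600                               # overall sample size

      # Neyman allocation: n_h proportional to N_h * S_h minimizes the variance
      # of the stratified mean for a fixed total sample size.
      weights = N_h * S_h
      n_h = np.rint(n_total * weights / weights.sum()).astype(int)
      print("allocation per stratum:", n_h)

      # Variance of the stratified estimator of the mean under this allocation
      # (finite population correction ignored for simplicity).
      W_h = N_h / N_h.sum()
      print("variance of stratified mean:", np.sum((W_h * S_h) ** 2 / n_h))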

  13. Randomization in clinical trials: stratification or minimization? The HERMES free simulation software.

    Science.gov (United States)

    Fron Chabouis, Hélène; Chabouis, Francis; Gillaizeau, Florence; Durieux, Pierre; Chatellier, Gilles; Ruse, N Dorin; Attal, Jean-Pierre

    2014-01-01

    Operative clinical trials are often small and open-label. Randomization is therefore very important. Stratification and minimization are two randomization options in such trials. The first aim of this study was to compare stratification and minimization in terms of predictability and balance in order to help investigators choose the most appropriate allocation method. Our second aim was to evaluate the influence of various parameters on the performance of these techniques. The created software generated patients according to chosen trial parameters (e.g., number of important prognostic factors, number of operators or centers, etc.) and computed predictability and balance indicators for several stratification and minimization methods over a given number of simulations. Block size and proportion of random allocations could be chosen. A reference trial was chosen (50 patients, 1 prognostic factor, and 2 operators) and eight other trials derived from this reference trial were modeled. Predictability and balance indicators were calculated from 10,000 simulations per trial. Minimization performed better with complex trials (e.g., smaller sample size, increasing number of prognostic factors, and operators); stratification imbalance increased when the number of strata increased. An inverse correlation between imbalance and predictability was observed. A compromise between predictability and imbalance still has to be found by the investigator but our software (HERMES) gives concrete reasons for choosing between stratification and minimization; it can be downloaded free of charge. This software will help investigators choose the appropriate randomization method in future two-arm trials.

  14. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
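
    As a worked illustration of the factors listed above (confidence level, expected proportion, precision, and population size), a common normal-approximation formula with a finite population correction can be sketched as follows; the 50% proportion and 5% margin are illustrative defaults, not recommendations from the article:

      import math

      def sample_size_proportion(p=0.5, margin=0.05, confidence=0.95, population=None):
          """Sample size for estimating a proportion to within +/- margin."""
          z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
          n0 = z ** 2 * p * (1 - p) / margin ** 2
          if population is not None:              # finite population correction
              n0 = n0 / (1 + (n0 - 1) / population)
          return math.ceil(n0)

      print(sample_size_proportion())                 # about 385
      print(sample_size_proportion(population=2000))  # about 323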

  15. Forage fiber effects on particle size reduction, ruminal stratification, and selective retention in heifers fed highly digestible grass/clover silages.

    Science.gov (United States)

    Schulze, A K S; Weisbjerg, M R; Storm, A C; Nørgaard, P

    2014-06-01

    The objective of this study was to investigate the effect of NDF content in highly digestible grass/clover silage on particle size reduction, ruminal stratification, and selective retention in dairy heifers. The reduction in particle size from feed to feces was evaluated and related to feed intake, chewing activity, and apparent digestibility. Four grass/clover harvests (mixtures of Lolium perenne, Trifolium pratense, and Trifolium repens) were performed from early May to late August at different maturities, at different regrowth stages, and with different clover proportions, resulting in silages with NDF contents of 312, 360, 371, and 446 g/kg DM, respectively, and decreasing NDF digestibility with greater NDF content. Four rumen-fistulated dairy heifers were fed silage at 90% of ad libitum level as the only feed source in a 4 × 4 Latin square design. Silage, ingested feed boluses, medial and ventral ruminal digesta, and feces samples were washed with neutral detergent in nylon bags of 10-μm pore size, freeze dried, and divided into small (<1 mm) and large (LP; >1 mm) particles by dry-sieving. Chewing activity, rumen pool size, and apparent digestibility were measured. Intake of NDF increased linearly from 2.3 to 2.8 kg/d with greater NDF content of forages (P = 0.01), but silages were exposed to similar eating time (P = 0.55) and rumination time per kg NDF (P = 0.35). No linear effect of NDF content was found on proportion of LP in ingested feed boluses (P = 0.31), medial rumen digesta (P = 0.95), ventral rumen digesta (P = 0.84), and feces (P = 0.09). Greater proportions of DM (P ruminal digesta compared with ventral rumen, and differences in DM proportion increased with greater NDF content (P = 0.02). Particle size distributions were similar for digesta from the medial and ventral rumen regardless of NDF content of the silages (P > 0.13). The LP proportion was >30% of particles in the ventral and medial rumen, whereas in the feces, the LP proportion was content of the silages

  16. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
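
    The idea of deterministic sample values plus a random permutation can be sketched for a standard normal input; this toy comparison is only meant to illustrate why DS reproduces the target distribution more closely than CMCS at the same sample size:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(42)
      n = 100

      # Crude Monte Carlo sampling: values and order are both random.
      cmcs = rng.standard_normal(n)

      # Descriptive sampling: deterministic quantile values, random order.
      u = (np.arange(1, n + 1) - 0.5) / n     # evenly spaced probabilities
      ds = norm.ppf(u)                        # deterministic sample values
      rng.shuffle(ds)                         # random permutation for the run

      print("CMCS mean/std:", cmcs.mean(), cmcs.std(ddof=1))
      print("DS   mean/std:", ds.mean(), ds.std(ddof=1))  # much closer to 0 and 1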

  17. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
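
    The core multiplier calculation described above (population size = M / P) can be sketched with hypothetical numbers; the design effect and the delta-method interval below are simplifying assumptions for illustration, not the authors' exact variance formula:

      import math

      M = 3000          # unique objects distributed (e.g., service cards)
      P = 0.30          # proportion of survey respondents reporting receipt
      n_survey = 400    # respondent-driven sampling survey size
      deff = 2.0        # assumed design effect of the survey

      size_estimate = M / P

      # Approximate 95% CI, treating M as fixed and propagating the uncertainty
      # in P via the delta method: Var(M/P) ~ (M/P^2)^2 * Var(P).
      var_P = deff * P * (1 - P) / n_survey
      se = (M / P ** 2) * math.sqrt(var_P)
      print(f"estimated size: {size_estimate:.0f} "
            f"(95% CI {size_estimate - 1.96 * se:.0f} to {size_estimate + 1.96 * se:.0f})")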

  18. Optimized endogenous post-stratification in forest inventories

    Science.gov (United States)

    Paul L. Patterson

    2012-01-01

    An example of endogenous post-stratification is the use of remote sensing data with a sample of ground data to build a logistic regression model to predict the probability that a plot is forested and using the predicted probabilities to form categories for post-stratification. An optimized endogenous post-stratified estimator of the proportion of forest has been...

  19. Cold stratification, but not stratification in salinity, enhances seedling ...

    African Journals Online (AJOL)

    Cold stratification, but not stratification in salinity, enhances seedling growth of wheat under salt treatment. L Wang, HL Wang, CH Yin, CY Tian. Abstract. Cold stratification was reported to release seed dormancy and enhance plant tolerance to salt stress. Experiments were conducted to test the hypothesis that cold ...

  20. Predictive features of CT for risk stratifications in patients with primary gastrointestinal stromal tumour

    International Nuclear Information System (INIS)

    Zhou, Cuiping; Zhang, Xiang; Duan, Xiaohui; Hu, Huijun; Wang, Dongye; Shen, Jun

    2016-01-01

    To determine the predictive CT imaging features for risk stratifications in patients with primary gastrointestinal stromal tumours (GISTs). One hundred and twenty-nine patients with histologically confirmed primary GISTs (diameter >2 cm) were enrolled. CT imaging features were reviewed. Tumour risk stratifications were determined according to the 2008 NIH criteria, in which GISTs were classified into four categories according to the tumour size, location, mitosis count, and tumour rupture. The association between risk stratifications and CT features was analyzed using univariate analysis, followed by multinomial logistic regression and receiver operating characteristic (ROC) curve analysis. CT imaging features including tumour margin, size, shape, tumour growth pattern, direct organ invasion, necrosis, enlarged vessels feeding or draining the mass (EVFDM), lymphadenopathy, and contrast enhancement pattern were associated with the risk stratifications, as determined by univariate analysis (P < 0.05). Only lesion size, growth pattern and EVFDM remained independent risk factors in multinomial logistic regression analysis (OR = 3.480-100.384). ROC curve analysis showed that the area under the curve of the obtained multinomial logistic regression model was 0.806 (95% CI: 0.727-0.885). CT features including lesion size, tumour growth pattern, and EVFDM were predictors of the risk stratifications for GIST. (orig.)

  1. Rumen content stratification in the giraffe (Giraffa camelopardalis).

    Science.gov (United States)

    Sauer, Cathrine; Clauss, Marcus; Bertelsen, Mads F; Weisbjerg, Martin R; Lund, Peter

    2017-01-01

    Ruminants differ in the degree of rumen content stratification, with 'cattle-types' (i.e., the grazing and intermediate feeding ruminants) having stratified content, whereas 'moose-types' (i.e., the browsing ruminants) have unstratified content. The feeding ecology, as well as the digestive morphophysiology of the giraffe (Giraffa camelopardalis), suggest that it is a 'moose-type' ruminant. Correspondingly, the giraffe should have an unstratified rumen content and an even rumen papillation pattern. Digesta samples were collected from along the digestive tract of 27 wild-caught giraffes kept in bomas for up to 2 months, and 10 giraffes kept in zoological gardens throughout their lives. Samples were analysed for concentration of dry matter, fibre fractions, volatile fatty acids and NH3, as well as mean particle size and pH. There was no difference between the dorsal and ventral rumen region in any of these parameters, indicating homogenous rumen content in the giraffes. In addition to the digesta samples, samples of dorsal rumen, ventral rumen and atrium ruminis mucosa were collected and the papillary surface enlargement factor was determined, as a proxy for content stratification. The even rumen papillation pattern observed also supported the concept of an unstratified rumen content in giraffes. Zoo giraffes had a slightly more uneven papillation pattern than boma giraffes. This finding could not be matched by differences in physical characteristics of the rumen content, probably due to an influence of fasting time ante mortem on these parameters. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
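
    A small simulation of the point made above: a practically trivial standardized effect becomes extremely statistically significant once the per-group sample size is large enough (all values are invented for illustration):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      d_true = 0.02                         # a practically trivial effect size

      for n in (100, 10_000, 1_000_000):    # per-group sample sizes
          a = rng.normal(0.0, 1.0, n)
          b = rng.normal(d_true, 1.0, n)
          t, p = stats.ttest_ind(a, b)
          pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
          d_obs = (b.mean() - a.mean()) / pooled_sd
          print(f"n per group = {n:>9,}: p = {p:.3g}, observed Cohen's d = {d_obs:.3f}")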

  3. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....

  4. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
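
    As a concrete instance of the alpha/power reasoning above, the usual normal-approximation formula for comparing two means can be sketched as follows; the 5-unit difference and SD of 12 are invented for illustration:

      import math
      from scipy.stats import norm

      def n_per_group(delta, sd, alpha=0.05, power=0.80):
          """Approximate per-group sample size for a two-sided comparison of two means."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * (z * sd / delta) ** 2)

      # Example: detect a 5-unit difference when the outcome SD is 12.
      print(n_per_group(delta=5, sd=12))   # about 91 per group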

  5. Cold stratification, but not stratification in salinity, enhances seedling ...

    African Journals Online (AJOL)

    2011-10-26

    Oct 26, 2011 ... Cold stratification was reported to release seed dormancy and enhance plant tolerance to salt stress. ... Key words: Cold stratification, salt stress, seedling emergence, ... methods used to cope with salinity, seed pre-sowing.

  6. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  7. INLET STRATIFICATION DEVICE

    DEFF Research Database (Denmark)

    2006-01-01

    An inlet stratification device (5) for a circuit circulating a fluid through a tank (1) and for providing and maintaining stratification of the fluid in the tank (1). The stratification device (5) is arranged vertically in the tank (1) and comprises an inlet pipe (6) being at least partially formed of a flexible porous material and having an inlet (19) and outlets formed of the pores of the porous material. The stratification device (5) further comprises at least one outer pipe (7) surrounding the inlet pipe (6) in spaced relationship thereto and being at least partially formed of a porous

  8. Revealing the timing of ocean stratification using remotely sensed ocean fronts

    Science.gov (United States)

    Miller, Peter I.; Loveday, Benjamin R.

    2017-10-01

    Stratification is of critical importance to the circulation, mixing and productivity of the ocean, and is expected to be modified by climate change. Stratification is also understood to affect the surface aggregation of pelagic fish and hence the foraging behaviour and distribution of their predators such as seabirds and cetaceans. Hence it would be prudent to monitor the stratification of the global ocean, though this is currently only possible using in situ sampling, profiling buoys or underwater autonomous vehicles. Earth observation (EO) sensors cannot directly detect stratification, but can observe surface features related to the presence of stratification, for example shelf-sea fronts that separate tidally-mixed water from seasonally stratified water. This paper describes a novel algorithm that accumulates evidence for stratification from a sequence of oceanic front maps, and discusses preliminary results in comparison with in situ data and simulations from 3D hydrodynamic models. In certain regions, this method can reveal the timing of the seasonal onset and breakdown of stratification.

  9. Probability Sampling - A Guideline for Quantitative Health Care ...

    African Journals Online (AJOL)

    A more direct definition is the method used for selecting a given ... description of the chosen population, the sampling procedure giving ... target population, precision, and stratification. The ... survey estimates, it is recommended that researchers first analyze a .... The optimum sample size has a relation to the type of planned ...

  10. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)

  11. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  12. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  13. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
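
    The paper's noncentrality-based relative efficiency is not reproduced here, but a commonly used design-effect approximation that also allows for variable cluster sizes can be sketched as follows, with DE = 1 + ((cv^2 + 1) * m - 1) * ICC, where m is the mean cluster size and cv its coefficient of variation; all inputs are illustrative:

      import math
      from scipy.stats import norm

      def n_per_arm(delta, sd, icc, mean_cluster_size, cv_cluster_size=0.0,
                    alpha=0.05, power=0.80):
          """Individuals per arm for a cluster randomized trial, inflating the
          individually randomized sample size by a design effect that allows
          for unequal cluster sizes."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          n_srs = 2 * (z * sd / delta) ** 2
          deff = 1 + ((cv_cluster_size ** 2 + 1) * mean_cluster_size - 1) * icc
          return math.ceil(n_srs * deff)

      print(n_per_arm(delta=5, sd=12, icc=0.05, mean_cluster_size=20))                      # equal clusters
      print(n_per_arm(delta=5, sd=12, icc=0.05, mean_cluster_size=20, cv_cluster_size=0.6)) # variable clusters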

  14. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
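
    A sketch of this kind of one-sample, two-tailed calculation, assuming a 25% coefficient of variation and a ±15% allowable error as above; the iteration details are our own, so the result may differ slightly from the paper's:

      from scipy.stats import t as t_dist

      def n_for_relative_error(cv, rel_error, alpha=0.05, power=0.80, n_start=5):
          """Smallest n for which a one-sample two-tailed t-test detects a shift
          of rel_error * mean with the stated power, given coefficient of variation cv."""
          n = n_start
          while True:
              df = n - 1
              needed = ((t_dist.ppf(1 - alpha / 2, df) + t_dist.ppf(power, df)) * cv / rel_error) ** 2
              if n >= needed:
                  return n
              n += 1

      print(n_for_relative_error(cv=0.25, rel_error=0.15))   # about two dozen subjects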

  15. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    Full Text Available One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
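
    The binomial model behind the "magic number 5" can be sketched as follows; the per-user detection probability of 0.31 is the figure often quoted in that literature and is used here only for illustration:

      import math

      def users_needed(target=0.80, p_detect=0.31):
          """Smallest n with 1 - (1 - p_detect)**n >= target."""
          return math.ceil(math.log(1 - target) / math.log(1 - p_detect))

      def proportion_found(n, p_detect=0.31):
          return 1 - (1 - p_detect) ** n

      print(users_needed())                       # 5 when p_detect = 0.31
      print(round(proportion_found(5), 2))        # about 0.84
      print(users_needed(p_detect=0.10))          # 16 when problems are harder to hit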

  16. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  17. Overcoming intratumoural heterogeneity for reproducible molecular risk stratification: a case study in advanced kidney cancer.

    Science.gov (United States)

    Lubbock, Alexander L R; Stewart, Grant D; O'Mahony, Fiach C; Laird, Alexander; Mullen, Peter; O'Donnell, Marie; Powles, Thomas; Harrison, David J; Overton, Ian M

    2017-06-26

    … Indeed, sample selection could change risk group assignment for 64% of patients, and prognostication with one sample per patient performed only slightly better than random expectation (median logHR = 0.109). Low grade tissue was associated with 3.5-fold greater variation in predicted risk than high grade (p = 0.044). This case study in mccRCC quantitatively demonstrates the critical importance of tumour sampling for the success of molecular biomarker research where ITH is a factor. The NEAT model shows promise for mccRCC prognostication and warrants follow-up in larger cohorts. Our work evidences actionable parameters to guide sample collection (tumour coverage, size, grade) to inform the development of reproducible molecular risk stratification methods.

  18. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. The sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended in practice because it requires less computation time than the bootstrapping method. An R package has been developed for sample size determination with the product method in longitudinal mediation study designs.
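
    The Sobel statistic referred to above, for a single indirect effect a·b, can be sketched as follows; the path coefficients and standard errors are hypothetical:

      import math
      from scipy.stats import norm

      def sobel_test(a, se_a, b, se_b):
          """Sobel z-test for the indirect effect a*b, where a is the X->M path
          and b is the M->Y path (adjusted for X)."""
          indirect = a * b
          se_indirect = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
          z = indirect / se_indirect
          p = 2 * (1 - norm.cdf(abs(z)))
          return indirect, z, p

      # Hypothetical path estimates from a pilot study.
      print(sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12))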

  19. Forage fiber effects on particle size reduction, ruminal stratification, and selective retention in heifers fed highly digestible grass/clover silages

    DEFF Research Database (Denmark)

    Schulze, Anne-Katrine Skovsted; Weisbjerg, Martin Riis; Storm, Adam Christian

    2014-01-01

    The objective of this study was to investigate the effect of NDF content in highly digestible grass/clover silage on particle size reduction, ruminal stratification, and selective retention in dairy heifers. The reduction in particle size from feed to feces was evaluated and related to feed intake...... measured. Intake of NDF increased linearly from 2.3 to 2.8 kg/d with greater NDF content of forages (P = 0.01), but silages were exposed to similar eating time (P = 0.55) and rumination time per kg NDF (P = 0.35). No linear effect of NDF content was found on proportion of LP in ingested feed boluses (P = 0.......31), medial rumen digesta (P = 0.95), ventral rumen digesta (P = 0.84), and feces (P = 0.09). Greater proportions of DM (P ruminal digesta compared with ventral rumen, and differences in DM proportion increased with greater NDF content (P = 0...

  20. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  1. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    Title 40 (Protection of Environment), Environmental Protection Agency, Air Programs, Regulation of Fuels and Fuel Additives, Attest Engagements, § 80.127 Sample size guidelines (2010-07-01 edition). In performing the...

  2. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
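
    A toy simulation of the mechanism suggested above: when only significant results are "published", the published effect sizes correlate negatively with sample size even though the true effect is constant (all numbers are invented):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      true_d = 0.3
      published_d, published_n = [], []

      for _ in range(2000):
          n = int(rng.integers(10, 200))          # per-group size of one study
          a = rng.normal(0.0, 1.0, n)
          b = rng.normal(true_d, 1.0, n)
          _, p = stats.ttest_ind(a, b)
          pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
          d = (b.mean() - a.mean()) / pooled_sd
          if p < 0.05:                            # only "significant" studies published
              published_d.append(d)
              published_n.append(n)

      r, _ = stats.pearsonr(published_d, published_n)
      print(f"r(effect size, sample size) among published studies: {r:.2f}")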

  4. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size lets us judge whether the results published in medical papers come from a suitable design and whether their conclusions are properly supported by the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance and power of the test. To decide which formula to use, we must define the type of study: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.

  5. Stratification of zooplankton in the northwestern Indian Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Paulinose, V.T.; Gopalakrishnan, T.C.; Nair, K.K.C.; Aravindakshan, P.N.

    A study of the stratification of zooplankton in the northwestern Indian Ocean was carried out with special reference to its relative abundance and distribution. Samples were collected using a multiple plankton net during the first cruise of ORV Sagar Kanya...

  6. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by Kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for Kernel and log-normal estimates, and in GNU Octave for log-normal estimates (Kernel density estimates are not robust enough in GNU octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  8. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  9. DEM Simulation of Particle Stratification and Segregation in Stockpile Formation

    Directory of Open Access Journals (Sweden)

    Zhang Dizhe

    2017-01-01

    Full Text Available Granular stockpiles are commonly observed in nature and industry, and their formation has been extensively investigated experimentally and mathematically in the literature. One of the striking features affecting the properties of stockpiles is the internal patterns formed by the stratification and segregation processes. In this work, we conduct a numerical study based on a DEM (discrete element method) model to study the influencing factors and triggering mechanisms of these two phenomena. With the use of a previously developed mixing index, the effects of parameters including size ratio, injection height and mass ratio are investigated. We found that it is a void-filling mechanism that differentiates the motions of particles with different sizes. This mechanism drives the large particles to flow over the pile surface and segregate at the pile bottom, while it also pushes small particles to fill the voids between large particles, giving rise to separate layers. Consequently, this difference in motion will result in the observed stratification and segregation phenomena.

  10. Dynamic modeling of stratification for chilled water storage tank

    International Nuclear Information System (INIS)

    Osman, Kahar; Al Khaireed, Syed Muhammad Nasrul; Ariffin, Mohd Kamal; Senawi, Mohd Yusoff

    2008-01-01

    Air conditioning of buildings can be costly and energy consuming. Application of thermal energy storage (TES) reduces cost and energy consumption. The efficiency of the overall operation is affected by the storage tank sizing design, which affects thermal stratification of water during charging and discharging processes in a TES system. In this study, numerical simulation is used to determine the relationship between tank size and good thermal stratification. Three-dimensional simulations with different tank height-to-diameter ratios (HD) and inlet Reynolds numbers (Re) are investigated. The effect of the number of diffuser holes is also studied. For shallow-tank (low HD) simulations, no acceptable thermocline thickness is observed for any of the Re investigated. Partial mixing is observed throughout the process. Medium HD tank simulations show good thermocline behavior, with a clear distinction between warm and cold water. Finally, deep tanks (high HD) show a less acceptable thermocline thickness than medium HD tanks. From this study, doubling or halving the number of diffuser holes shows no significant effect on the thermocline behavior

  11. Evaluation of stratification factors and score-scales in clinical trials of treatment of clinical mastitis in dairy cows.

    Science.gov (United States)

    Hektoen, L; Ødegaard, S A; Løken, T; Larsen, S

    2004-05-01

    There is often a need to reduce sample size in clinical trials due to practical limitations and ethical considerations. Better comparability between treatment groups by use of stratification in the design, and use of continuous outcome variables in the evaluation of treatment results, are two methods that can be used in order to achieve this. In this paper the choice of stratification factors in trials of clinical mastitis in dairy cows is investigated, and two score-scales for evaluation of clinical mastitis are introduced. The outcome in 57 dairy cows suffering from clinical mastitis and included in a clinical trial comparing homeopathic treatment, placebo and a standard antibiotic treatment is investigated. The strata of various stratification factors are compared across treatments to determine which other factors influence outcome. The two score scales, measuring acute and chronic mastitis symptoms, respectively, are evaluated on their ability to differentiate between patients classified from clinical criteria as responders or non-responders to treatment. Differences were found between the strata of the factors severity of mastitis, lactation number, previous mastitis this lactation and bacteriological findings. These factors influence outcome of treatment and appear relevant as stratification factors in mastitis trials. Both score scales differentiated between responders and non-responders to treatment and were found useful for evaluation of mastitis and mastitis treatment.

  12. On some common practices of systematic sampling

    OpenAIRE

    Zhang, Li-Chun

    2006-01-01

    With permission from Statistics Sweden. The original publication is available at http://www.jos.nu/Contents/jos_online.asp The article is also published in Statistics Norway's reprint series Særtrykk/Reprints no. 325. Systematic sampling is a widely used technique in survey sampling. It is easy to execute, whether the units are to be selected with equal probability or with probabilities proportional to auxiliary sizes. It can be very efficient if one manages to achieve a favourable stratification effect...

  13. Revealing the timing of ocean stratification using remotely-sensed ocean fronts: links with marine predators

    Science.gov (United States)

    Miller, P. I.; Loveday, B. R.

    2016-02-01

    Stratification is of critical importance to the mixing and productivity of the ocean, though currently it can only be measured using in situ sampling, profiling buoys or underwater autonomous vehicles. Stratification is understood to affect the surface aggregation of pelagic fish and hence the foraging behaviour and distribution of their predators such as seabirds and cetaceans. Satellite Earth observation sensors cannot directly detect stratification, but can observe surface features related to the presence of stratification, for example shelf-sea fronts that separate tidally-mixed water from seasonally stratified water. This presentation describes a novel algorithm that accumulates evidence for stratification from a sequence of oceanic front maps, and in certain regions can reveal the timing of the seasonal onset and breakdown of stratification. Initial comparisons will be made with seabird locations acquired through GPS tagging. If successful, a remotely-sensed stratification timing index would augment the ocean front metrics already developed at PML, that have been applied in over 20 journal articles relating marine predators to ocean fronts. The figure below shows a preliminary remotely-sensed 'stratification' index, for 25-31 Jul. 2010, where red indicates water with stronger evidence for stratification.

  14. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
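
A simplified sketch of the core idea: fit an inverse power law to a small pilot learning curve with weighted nonlinear least squares, then extrapolate to larger annotation budgets. The data points, error weights, and starting values below are hypothetical, and this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_power_law(x, a, b, c):
    # Classifier performance as a function of training-set size x.
    return a + b * np.power(x, c)

# Hypothetical learning-curve points obtained from a small annotated pilot set.
sizes = np.array([50, 100, 200, 400, 800], dtype=float)
accuracy = np.array([0.71, 0.76, 0.80, 0.83, 0.85])
acc_sd = np.array([0.040, 0.030, 0.022, 0.015, 0.010])  # larger sets weigh more

params, cov = curve_fit(inverse_power_law, sizes, accuracy,
                        p0=[0.9, -1.0, -0.5], sigma=acc_sd,
                        absolute_sigma=True, maxfev=10000)

# Extrapolate predicted performance to a larger annotation budget.
print(inverse_power_law(5000, *params))
```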

  15. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference test for data with the design of one factor with two levels, including sample size estimation formulas and realization based on the formulas and the POWER procedure of SAS software for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will play a leading role for researchers to implement the repetition principle during the research design phase.

  16. On Optimum Stratification

    OpenAIRE

    M. G. M. Khan; V. D. Prasad; D. K. Rao

    2014-01-01

    In this manuscript, we discuss the problem of determining the optimum stratification of a study (or main) variable based on an auxiliary variable that follows a uniform distribution. If the stratification of the survey variable is made using the auxiliary variable, it may lead to substantial gains in the precision of the estimates. This problem is formulated as a Nonlinear Programming Problem (NLPP), which turns out to be a multistage decision problem and is solved using a dynamic programming technique.
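
For orientation only: the classical Dalenius-Hodges cumulative square root of frequency (cum sqrt f) rule is a much simpler, approximate alternative to the NLPP/dynamic-programming formulation described above, and for a uniform auxiliary variable it reduces to near-equal-width strata. The sketch below implements that baseline rule, not the paper's method.

```python
import numpy as np

def cum_sqrt_f_boundaries(x, n_strata, n_bins=50):
    """Approximate optimum strata boundaries on auxiliary variable x using the
    classical Dalenius-Hodges cum sqrt(f) rule (a baseline technique, not the
    NLPP/dynamic-programming method of the paper)."""
    counts, edges = np.histogram(x, bins=n_bins)
    cum = np.cumsum(np.sqrt(counts))
    targets = cum[-1] * np.arange(1, n_strata) / n_strata
    # Boundary = upper edge of the bin where cum sqrt(f) first reaches each target.
    idx = np.searchsorted(cum, targets)
    return edges[idx + 1]

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, size=10_000)          # uniform auxiliary variable, as in the paper
print(cum_sqrt_f_boundaries(x, n_strata=4))   # near the equal-width points 25, 50, 75
```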

  17. Effect of optimum stratification on sampling with varying probabilities under proportional allocation

    OpenAIRE

    Syed Ejaz Husain Rizvi; Jaj P. Gupta; Manoj Bhargava

    2007-01-01

    The problem of optimum stratification on an auxiliary variable, when the units from different strata are selected with probability proportional to the value of the auxiliary variable (PPSWR), was considered by Singh (1975) for the univariate case. In this paper we have extended the same problem, for proportional allocation, to the case when two variates are under study. A cum. 3 R3(x) rule for obtaining approximately optimum strata boundaries has been provided. It has been shown theoretically as well as empiricall...

  18. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
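
The "product of powers" statement for uncorrelated endpoints can be illustrated with a normal-approximation TOST power per endpoint. The margins, standard errors, and true differences below are hypothetical, and the correlated-endpoint adjustment that is the paper's main contribution is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def tost_power(delta_true, margin, se, alpha=0.05):
    """Approximate power of two one-sided tests (normal approximation).

    delta_true: true treatment difference; margin: equivalence margin (+/-);
    se: standard error of the estimated difference."""
    z = norm.ppf(1 - alpha)
    power = (norm.cdf((margin - delta_true) / se - z)
             + norm.cdf((margin + delta_true) / se - z) - 1)
    return max(power, 0.0)

# Two uncorrelated endpoints (e.g., log AUC and log Cmax): joint power to pass
# both equivalence tests is the product of the marginal powers.
se_auc, se_cmax = 0.06, 0.08          # hypothetical standard errors at a given n
p_auc = tost_power(delta_true=0.02, margin=np.log(1.25), se=se_auc)
p_cmax = tost_power(delta_true=0.02, margin=np.log(1.25), se=se_cmax)
print(p_auc, p_cmax, p_auc * p_cmax)  # correlated endpoints need adjustment (see article)
```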

  19. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary out...

  20. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
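
Related background, not the trimmed-mean-specific derivation of the paper: for comparing two ordinary means with per-observation costs c1, c2 and standard deviations sigma1, sigma2, the classical cost-optimal allocation is n1/n2 = (sigma1/sigma2) * sqrt(c2/c1). A minimal sketch with hypothetical numbers:

```python
import math

def optimal_allocation(sigma1, sigma2, cost1, cost2):
    """Classical cost-optimal allocation ratio n1/n2 = (sigma1/sigma2)*sqrt(cost2/cost1).

    A generic result for comparing two means; the paper derives analogous
    formulas specifically for Yuen's trimmed-mean test."""
    return (sigma1 / sigma2) * math.sqrt(cost2 / cost1)

# Group 1 is more variable but cheaper to sample.
ratio = optimal_allocation(sigma1=15, sigma2=10, cost1=5, cost2=20)
print(f"Sample group 1 at {ratio:.1f} times the size of group 2")  # 3.0
```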

  1. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in numbers sufficient to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands, supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not

  2. ASSESSMENT OF STELLAR STRATIFICATION IN THREE YOUNG STAR CLUSTERS IN THE LARGE MAGELLANIC CLOUD

    International Nuclear Information System (INIS)

    Gouliermis, Dimitrios A.; Rochau, Boyke; Mackey, Dougal; Xin Yu

    2010-01-01

    We present a comprehensive study of stellar stratification in young star clusters in the Large Magellanic Cloud (LMC). We apply our recently developed effective radius method for the assessment of stellar stratification on imaging data obtained with the Advanced Camera for Surveys of three young LMC clusters to characterize the phenomenon and develop a comparative scheme for its assessment in such clusters. The clusters of our sample, NGC 1983, NGC 2002, and NGC 2010, are selected on the basis of their youthfulness, and their variety in appearance, structure, stellar content, and surrounding stellar ambient. Our photometry is complete for magnitudes down to m814 ≅ 23 mag, allowing the calculation of the structural parameters of the clusters, the estimation of their ages, and the determination of their stellar content. Our study shows that each cluster in our sample demonstrates stellar stratification in a quite different manner and to a different degree from the others. Specifically, NGC 1983 shows partial segregation, with the effective radius increasing with fainter magnitudes only for the faintest stars of the cluster. Our method applied to NGC 2002 provides evidence of strong stellar stratification for both bright and faint stars; the cluster demonstrates the phenomenon to the highest degree in the sample. Finally, NGC 2010 is not segregated, as its bright stellar content is not centrally concentrated, the relation of effective radius to magnitude for stars of intermediate brightness is rather flat, and we find no evidence of stratification for its faintest stars. For the parameterization of the phenomenon of stellar stratification and its quantitative comparison among these clusters, we propose the slope derived from the change in the effective radius over the corresponding magnitude range as an indicative parameter of the degree of stratification in the clusters. A positive value of this slope indicates mass segregation in the cluster, while a negative or zero value

  3. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    Full Text Available The objectives of this study were to determine the sample size (i.e., the number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
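
The design used here, choosing n so that the confidence-interval semi-amplitude equals a set percentage of the mean, can be sketched as follows. The coefficient of variation in the example is hypothetical, so the printed value only happens to fall near the plant numbers reported above.

```python
import math
from scipy.stats import t

def n_for_relative_precision(cv, d, conf=0.95):
    """Plants needed so the CI semi-amplitude is d (as a fraction of the mean),
    given a coefficient of variation cv; iterates because t depends on n."""
    n = 2
    for _ in range(100):
        t_crit = t.ppf(1 - (1 - conf) / 2, df=n - 1)
        n_new = math.ceil((t_crit * cv / d) ** 2)
        if n_new == n:
            break
        n = max(n_new, 2)
    return n

# Hypothetical example: CV of 35% and a target semi-amplitude of 6% of the mean.
print(n_for_relative_precision(cv=0.35, d=0.06))  # on the order of 130-140 plants
```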

  4. BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.

    Science.gov (United States)

    Khakabimamaghani, Sahand; Ester, Martin

    2016-01-01

    The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods are provided in the literature for patient stratification, which is the central task of SM, however, there are still significant open issues. First, it is still unclear if integrating different datatypes will help in detecting disease subtypes more accurately, and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data.

  5. Grain-size sorting and slope failure in experimental subaqueous grain flows

    NARCIS (Netherlands)

    Kleinhans, M.G.; Asch, Th.W.J. van

    2005-01-01

    Grain-size sorting in subaqueous grain flows of a continuous range of grain sizes is studied experimentally with three mixtures. The observed pattern is a combination of stratification and gradual segregation. The stratification is caused by kinematic sieving in the grain flow. The segregation is

  6. Molecular reclassification of Crohn's disease: a cautionary note on population stratification.

    Science.gov (United States)

    Maus, Bärbel; Jung, Camille; Mahachie John, Jestinah M; Hugot, Jean-Pierre; Génin, Emmanuelle; Van Steen, Kristel

    2013-01-01

    Complex human diseases commonly differ in their phenotypic characteristics, e.g., Crohn's disease (CD) patients are heterogeneous with regard to disease location and disease extent. The genetic susceptibility to Crohn's disease is widely acknowledged and has been demonstrated by identification of over 100 CD associated genetic loci. However, relating CD subphenotypes to disease susceptible loci has proven to be a difficult task. In this paper we discuss the use of cluster analysis on genetic markers to identify genetic-based subgroups while taking into account possible confounding by population stratification. We show that it is highly relevant to consider the confounding nature of population stratification in order to avoid that detected clusters are strongly related to population groups instead of disease-specific groups. Therefore, we explain the use of principal components to correct for population stratification while clustering affected individuals into genetic-based subgroups. The principal components are obtained using 30 ancestry informative markers (AIM), and the first two PCs are determined to discriminate between continental origins of the affected individuals. Genotypes on 51 CD associated single nucleotide polymorphisms (SNPs) are used to perform latent class analysis, hierarchical and Partitioning Around Medoids (PAM) cluster analysis within a sample of affected individuals with and without the use of principal components to adjust for population stratification. It is seen that without correction for population stratification clusters seem to be influenced by population stratification while with correction clusters are unrelated to continental origin of individuals.
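
A rough sketch of the general idea (not the authors' pipeline, which used latent class analysis, hierarchical clustering and PAM): compute principal components from ancestry-informative markers, remove the ancestry signal from the disease-associated SNPs, and then cluster the affected individuals. The genotypes below are simulated, and the regression-plus-k-means step is a simplification of the adjustment described in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_patients = 300
aims = rng.integers(0, 3, size=(n_patients, 30)).astype(float)   # 30 ancestry markers
snps = rng.integers(0, 3, size=(n_patients, 51)).astype(float)   # 51 CD-associated SNPs

# Top principal components of the ancestry markers capture continental origin.
pcs = PCA(n_components=2).fit_transform(aims)

# Remove the part of each SNP explained by the ancestry PCs before clustering,
# so clusters reflect disease-related structure rather than population strata.
snps_adjusted = snps - LinearRegression().fit(pcs, snps).predict(pcs)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(snps_adjusted)
print(np.bincount(clusters))
```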

  7. Molecular reclassification of Crohn's disease: a cautionary note on population stratification.

    Directory of Open Access Journals (Sweden)

    Bärbel Maus

    Full Text Available Complex human diseases commonly differ in their phenotypic characteristics, e.g., Crohn's disease (CD) patients are heterogeneous with regard to disease location and disease extent. The genetic susceptibility to Crohn's disease is widely acknowledged and has been demonstrated by identification of over 100 CD associated genetic loci. However, relating CD subphenotypes to disease susceptible loci has proven to be a difficult task. In this paper we discuss the use of cluster analysis on genetic markers to identify genetic-based subgroups while taking into account possible confounding by population stratification. We show that it is highly relevant to consider the confounding nature of population stratification in order to avoid that detected clusters are strongly related to population groups instead of disease-specific groups. Therefore, we explain the use of principal components to correct for population stratification while clustering affected individuals into genetic-based subgroups. The principal components are obtained using 30 ancestry informative markers (AIM), and the first two PCs are determined to discriminate between continental origins of the affected individuals. Genotypes on 51 CD associated single nucleotide polymorphisms (SNPs) are used to perform latent class analysis, hierarchical and Partitioning Around Medoids (PAM) cluster analysis within a sample of affected individuals with and without the use of principal components to adjust for population stratification. It is seen that without correction for population stratification clusters seem to be influenced by population stratification while with correction clusters are unrelated to continental origin of individuals.

  8. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  9. Thermal stratification in the pressurizer

    International Nuclear Information System (INIS)

    Baik, S.J.; Lee, K.W.; Ro, T.S.

    2001-01-01

    The thermal stratification in the pressurizer due to the insurge from the hot leg to the pressurizer has been studied. The insurge of cold water into the pressurizer takes place during heatup/cooldown and during normal or abnormal transients at power operation. The pressurizer vessel can undergo significant thermal fatigue usage caused by insurges and outsurges. A two-dimensional axisymmetric transient analysis of the thermal stratification in the pressurizer is performed using the computational fluid dynamics code FLUENT to obtain the velocity and temperature distributions. A parametric study has been carried out to investigate the effect of the inlet velocity and the temperature difference between the hot leg and the pressurizer on the thermal stratification. The results show that the insurge flow of cold water into the pressurizer does not mix well with the hot water, and the cold water remains only in the lower portion of the pressurizer, which leads to thermal stratification in the pressurizer. The thermal load on the pressurizer due to the thermal stratification or the cyclic thermal transient should be examined with respect to mechanical integrity, and this study can provide design data for the stress analysis. (authors)

  10. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not conditional on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, the probability that the trial will detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  11. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small-size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small-size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small-size PuO2 powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  12. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  13. The Effect of an Isogrid on Cryogenic Propellant Behavior and Thermal Stratification

    Science.gov (United States)

    Oliveira, Justin; Kirk, Daniel R.; Chintalapati, Sunil; Schallhorn, Paul A.; Piquero, Jorge L.; Campbell, Mike; Chase, Sukhdeep

    2007-01-01

    All models for thermal stratification available in the presentation are derived using smooth, flat plate laminar and turbulent boundary layer models. This study examines the effect of isogrid (roughness elements) on the surface of internal tank walls to mimic the effects of weight-saving isogrid, which is located on the inside of many rocket propellant tanks. Computational Fluid Dynamics (CFD) is used to study the momentum and thermal boundary layer thickness for free convection flows over a wall with generic roughness elements. This presentation makes no mention of actual isogrid sizes or of any specific tank geometry. The magnitude of thermal stratification is compared for smooth and isogrid-lined walls.

  14. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times their actual values extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
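
How such a CI95/med-versus-sample-size curve can be generated in principle: repeatedly draw subsamples of increasing size from the collected dose values and measure the spread of the subsample means relative to the median. The lognormal doses below are simulated stand-ins, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical CTDIvol values (mGy) standing in for one body region's survey data.
doses = rng.lognormal(mean=2.0, sigma=0.5, size=7268)
median_dose = np.median(doses)

for n in (10, 20, 100, 500, 900):
    means = np.array([rng.choice(doses, size=n, replace=False).mean()
                      for _ in range(2000)])
    ci95_width = np.percentile(means, 97.5) - np.percentile(means, 2.5)
    print(f"n={n:4d}  CI95 width as % of median = {ci95_width / median_dose:.1%}")
```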

  15. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times their actual values extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  16. Multilayer fabric stratification pipes for solar tanks

    DEFF Research Database (Denmark)

    Andersen, Elsa; Furbo, Simon; Fan, Jianhua

    2007-01-01

    The thermal performance of solar heating systems is strongly influenced by the thermal stratification in the heat storage. The higher the degree of thermal stratification is, the higher the thermal performance of the solar heating systems. Thermal stratification in water storages can for instance...

  17. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high diversity than in low diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N0 to N∞ ...

  18. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    Negative binomial model has been increasingly used to model the count data in recent clinical trials. It is frequently chosen over Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying negative binomial model in clinical trial design is the sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
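
One commonly used normal-approximation version of such a formula (log rate-ratio scale, equal allocation, common dispersion k with Var = mu + k*mu^2) is sketched below. The paper derives several variants that differ in how the null variance is estimated; none of them is reproduced exactly here, and the example numbers are hypothetical.

```python
import math
from scipy.stats import norm

def nb_sample_size(rate1, rate2, dispersion, exposure, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two negative binomial event rates.

    Normal approximation on the log rate-ratio with equal allocation and a
    common dispersion parameter k (Var = mu + k*mu^2); one simple variant only."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_log_ratio = (1 / (exposure * rate1) + dispersion
                     + 1 / (exposure * rate2) + dispersion)
    return math.ceil(z ** 2 * var_log_ratio / math.log(rate1 / rate2) ** 2)

# Example: reduce exacerbations from 1.2 to 0.9 per year, k = 0.8, 1 year follow-up.
print(nb_sample_size(rate1=1.2, rate2=0.9, dispersion=0.8, exposure=1.0))
```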

  19. Talent Complementarity and Organizational Stratification

    Science.gov (United States)

    Abrahamson, Mark

    1973-01-01

    Stratification within organizations as produced by the distribution of functional importance among positions is investigated. According to Stinchcombe's hypothesis from the functional theory of stratification, the rewards given to various positions are expected to be less equal when talent is complementary rather than additive. Actual differences…

  20. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference test for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference test for quantitative and qualitative data with the above three designs, the realization based on the formulas and the POWER procedure of SAS software and elaborated it with examples, which will benefit researchers for implementing the repetition principle.

  1. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  2. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  3. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this reduction in sample size when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  4. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  5. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
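
The quoted savings of roughly 44%, 56% and 61% for 2, 3 and 4 follow-ups are consistent with maximizing, over the correlation rho, the compound-symmetry variance factor for ANCOVA on the mean of r follow-ups with one baseline, f(rho) = (1 + (r-1)rho)/r - rho^2. The sketch below reproduces those figures under that assumption; the factor form is a standard result and is inferred here, not quoted from the paper.

```python
import numpy as np

def conservative_factor(r_followups):
    """Worst-case (over rho) variance factor for ANCOVA on the mean of r follow-ups
    with one baseline, assuming compound symmetry: f(rho) = (1+(r-1)rho)/r - rho^2.

    Relative to a two-sample t-test on a single measure (factor 1), 1 - f is the
    guaranteed saving in sample size."""
    rho = np.linspace(0, 1, 10001)
    f = (1 + (r_followups - 1) * rho) / r_followups - rho ** 2
    return f.max()

for r in (2, 3, 4):
    f = conservative_factor(r)
    print(f"r={r}: factor {f:.3f}, guaranteed sample-size reduction {1 - f:.0%}")
# Prints reductions of about 44%, 56%, and 61%, matching the figures quoted above.
```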

  6. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. 3.

  7. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for the scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  8. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique was developed for constructing adequate mathematical models from small-size passive samples, under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be obtained.

  9. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics, then, constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
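
A compact sketch of the two-step pipeline described (cluster providers on profile variables, then fit a shallow decision tree on the cluster labels to obtain readable stratification rules). The provider variables below are simulated and only loosely modeled on the features named in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
# Hypothetical provider profile: inpatients per specialist, beds, population density.
X = np.column_stack([rng.gamma(2.0, 50, 2000),
                     rng.poisson(30, 2000),
                     rng.lognormal(7, 1, 2000)])

# Step 1: group providers with similar characteristics.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))

# Step 2: turn the cluster labels into simple, auditable stratification rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["inpatients_per_specialist",
                                       "beds", "population_density"]))
```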

  10. Thermal Stratification in Vertical Mantle Tanks

    DEFF Research Database (Denmark)

    Knudsen, Søren; Furbo, Simon

    2001-01-01

    It is well known that it is important to have a high degree of thermal stratification in the hot water storage tank to achieve a high thermal performance of SDHW systems. This study is concentrated on thermal stratification in vertical mantle tanks. Experiments based on typical operation conditions...... are carried out to investigate how the thermal stratification is affected by different placements of the mantle inlet. The heat transfer between the solar collector fluid in the mantle and the domestic water in the inner tank is analysed by CFD-simulations. Furthermore, the flow pattern in the vertical mantle...

  11. A Comparative Review of Stratification Texts and Readers

    Science.gov (United States)

    Peoples, Clayton D.

    2012-01-01

    Social stratification is a core substantive area within sociology. There are a number of textbooks and readers available on the market that deal with this central topic. In this article, I conduct a comparative review of (a) four stratification textbooks and (b) four stratification readers. (Contains 2 tables.)

  12. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
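
The closed-form variance alluded to, Var(beta1_hat) = sigma^2 / (n * Var(X) * (1 - R^2 of X on the other covariates)), can be turned directly into an etiological-research sample size for a target power. A minimal sketch with hypothetical numbers, using this standard result rather than the article's worked examples:

```python
import math
from scipy.stats import norm

def n_for_coefficient(beta1, sigma_resid, sd_x, r2_x_other=0.0,
                      alpha=0.05, power=0.80):
    """Sample size to detect regression coefficient beta1.

    Uses Var(beta1_hat) = sigma_resid^2 / (n * sd_x^2 * (1 - r2_x_other)),
    where r2_x_other is the R^2 from regressing the exposure on the other
    covariates (0 for simple linear regression)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z ** 2 * sigma_resid ** 2
                     / (beta1 ** 2 * sd_x ** 2 * (1 - r2_x_other)))

# Etiological example: slope of 0.5 units of Y per unit X, residual SD 4, SD of X 1.5.
print(n_for_coefficient(beta1=0.5, sigma_resid=4, sd_x=1.5))                   # simple regression
print(n_for_coefficient(beta1=0.5, sigma_resid=4, sd_x=1.5, r2_x_other=0.3))   # adjusted for confounders
```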

  13. Stratification studies in components of nuclear power plants

    International Nuclear Information System (INIS)

    Randorf, J.A.

    1997-01-01

    The applicability of two stratification criteria during loss-of-coolant accident (LOCA) conditions was studied. The first criterion was developed to address stratification induced by cold water injection. The second criterion applies to downcomer/cold leg junction stratification. Both criteria provided predictions consistent with measured conditions during small-break loss-of-coolant tests

  14. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  15. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  16. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  17. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
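
    A minimal sketch of the quantity at the centre of this discussion, the conditional power at the interim for a one-sided z-test under the "current trend" assumption, is given below. The formula is the standard one for normally distributed observations; the alpha level and interim values are illustrative.

```python
# Conditional power at an interim analysis for a one-sided z-test, assuming the
# observed interim effect is the true effect ("current trend"); used here only
# to check whether the interim result falls in the promising (>50%) region.
from scipy.stats import norm

def conditional_power(z_interim, info_fraction, alpha=0.025):
    t = info_fraction                                   # n_interim / n_planned
    z_alpha = norm.ppf(1 - alpha)
    drift = z_interim * ((1 - t) / t) ** 0.5            # expected evidence still to come
    hurdle = (z_alpha - t ** 0.5 * z_interim) / (1 - t) ** 0.5
    return norm.cdf(drift - hurdle)

cp = conditional_power(z_interim=1.2, info_fraction=0.5)
print(f"conditional power: {cp:.2f}; in the promising region: {cp > 0.5}")
```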

  18. Stratification devices

    DEFF Research Database (Denmark)

    Andersen, Elsa; Furbo, Simon

    2008-01-01

    Thermal stratification in the storage tank is extremely important in order to achieve high thermal performance of a solar heating system. High temperatures in the top of the storage tank and low temperatures in the bottom of the storage tank lead to the best operation conditions for any solar hea...

  19. Dynamo Tests for Stratification Below the Core-Mantle Boundary

    Science.gov (United States)

    Olson, P.; Landeau, M.

    2017-12-01

    Evidence from seismology, mineral physics, and core dynamics points to a layer with an overall stable stratification in the Earth's outer core, possibly thermal in origin, extending below the core-mantle boundary (CMB) for several hundred kilometers. In contrast, energetic deep mantle convection with elevated heat flux implies locally unstable thermal stratification below the CMB in places, consistent with interpretations of non-dipole geomagnetic field behavior that favor upwelling flows below the CMB. Here, we model the structure of convection and magnetic fields in the core using numerical dynamos with laterally heterogeneous boundary heat flux in order to rationalize this conflicting evidence. Strongly heterogeneous boundary heat flux generates localized convection beneath the CMB that coexists with an overall stable stratification there. Partially stratified dynamos have distinctive time average magnetic field structures. Without stratification or with stratification confined to a thin layer, the octupole component is small and the CMB magnetic field structure includes polar intensity minima. With more extensive stratification, the octupole component is large and the magnetic field structure includes intense patches or high intensity lobes in the polar regions. Comparisons with the time-averaged geomagnetic field are generally favorable for partial stratification in a thin layer but unfavorable for stratification in a thick layer beneath the CMB.

  20. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection was taken from theses.com and contents analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies, presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  1. Fundamental validation of simulation method for thermal stratification in upper plenum of fast reactors. Analysis of sodium experiment

    International Nuclear Information System (INIS)

    Ohno, Shuji; Ohshima, Hiroyuki; Sugahara, Akihiro; Ohki, Hiroshi

    2010-01-01

    Three-dimensional thermal-hydraulic analyses have been carried out for a sodium experiment in a relatively simple axis-symmetric geometry using a commercial CFD code in order to validate simulating methods for thermal stratification behavior in an upper plenum of sodium-cooled fast reactor. Detailed comparison between simulated results and experimental measurement has demonstrated that the code reproduced fairly well the fundamental thermal stratification behaviors such as vertical temperature gradient and upward movement of a stratification interface when utilizing high-order discretization scheme and appropriate mesh size. Furthermore, the investigation has clarified the influence of RANS type turbulence models on phenomena predictability; i.e. the standard k-ε model, the RNG k-ε model and the Reynolds Stress Model. (author)

  2. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs could be accepted by regulatory authorities across regions and countries as the primary sources of evidence to support global marketing drug approval simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes the effective therapies available to patients all over the world simultaneously. However, there are many challenges both operationally and scientifically in conducting a drug development globally. One of many important questions to answer for the design of a multiregional study is how to partition sample size into each individual region. In this paper, two systematic approaches are proposed for the sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  4. Long time durability tests of fabric inlet stratification pipes

    DEFF Research Database (Denmark)

    Andersen, Elsa; Furbo, Simon

    2008-01-01

    and that this destroys the capability of building up thermal stratification for the fabric inlet stratification pipe. The results also show that although dirt, algae etc. are deposited in the fabric pipes in the space heating tank, the capability of the fabric inlet stratifiers to build up thermal stratification...

  5. A reconceptualization of age stratification in China.

    Science.gov (United States)

    Yin, P; Lai, K H

    1983-09-01

    Using the concepts of age stratification theory--age effect, cohort effect, and subcohort differences--this paper provides a new perspective on age stratification in China. Currently, the literature suggests that the status of elderly people declined after the Communist Revolution and will further decline with modernization. We discuss the problems with this perspective and argue, instead, that the status of elderly adults did not decline for the majority of the aged during the Maoist years. Rather, the most important change in the age stratification system during the Maoist years was the change in the criterion of age stratification--from age differences to cohort and subcohort differences. Furthermore, the subcohort of elderly adults who suffered the most status decline during the Maoist years--the bourgeoisie--may actually enjoy an increase in status with the recent modernization impetus. Research suggestions from this new perspective are discussed.

  6. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    US Army Research Laboratory reprint ARL-RP-0528, August 2015: Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride.

  7. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
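
    The paper's logit-normal derivation is not reproduced here, but a generic Monte Carlo check is a useful companion to any closed-form answer: simulate data at a candidate n and estimate the power of the Wald test for the covariate of interest. The effect sizes and intercept below are assumptions for illustration.

```python
# Generic Monte Carlo power check for simple logistic regression (a companion
# check, not the paper's logit-normal method); parameter values are assumed.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def logistic_power(n, beta0=-1.0, beta1=0.5, n_sim=500, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    z_crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        rejections += abs(fit.params[1] / fit.bse[1]) > z_crit   # Wald test
    return rejections / n_sim

for n in (100, 150, 200):
    print(n, logistic_power(n))
```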

  8. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  9. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Various manifestations of stratification phenomenon during intravenous cholangiography

    Energy Technology Data Exchange (ETDEWEB)

    Tada, S; Nanjo, M; Kino, M; Sekiya, T; Harada, J; Kuroda, T; Anno, I [Jikei Univ., Tokyo (Japan). School of Medicine

    1979-07-01

    A classification has been made of various types of stratification phenomenon during intravenous cholangiography. The stage of gallbladder opacification in the recumbent position has been classified as (I) mottled, (II) dendritic, (III) ring-like, and (IV) homogeneous. 'Dendritic' type of stratification phenomenon has never been reported in the literature to our knowledge. At 20 min following infusion of contrast material homogeneous opacification of the gallbladder was noticed in only 14% of patients. The others fell into types I, II or III of stratification phenomenon. In contrast, 87% of the opacified gallbladders were homogeneous on the after fatty meal film. It is therefore mandatory for diagnosis that either a 24 h film or a fatty meal film be taken to avoid the stratification phenomenon.

  11. Various manifestations of stratification phenomenon during intravenous cholangiography

    International Nuclear Information System (INIS)

    Tada, S.; Nanjo, M.; Kino, M.; Sekiya, T.; Harada, J.; Kuroda, T.; Anno, I.

    1979-01-01

    A classification has been made of various types of stratification phenomenon during intravenous cholangiography. The stage of gallbladder opacification in the recumbent position has been classified as (I) mottled, (II) dendritic, (III) ring-like, and (IV) homogeneous. 'Dendritic' type of stratification phenomenon has never been reported in the literature to our knowledge. At 20 min following infusion of contrast material homogeneous opacification of the gallbladder was noticed in only 14% of patients. The others fell into types I, II or III of stratification phenomenon. In contrast, 87% of the opacified gallbladders were homogeneous on the after fatty meal film. It is therefore mandatory for diagnosis that either a 24 h film or a fatty meal film be taken to avoid the stratification phenomenon. (author)

  12. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  13. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and health overall. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, the measurements of the size of the feet and footwear were determined, and the scores were compared between the group that wears the correct size of shoes and another group of individuals who do not wear the correct size of shoes, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  14. Modeling Multimodal Stratification

    DEFF Research Database (Denmark)

    Boeriis, Morten

    2017-01-01

    . The article outlines a theoretical experiment exploring how an alternative way of modeling stratification and instantiation may raise some interesting ideas on the concepts of realization dynamics, system-instance, and the different contexts of the semiotic text. This is elaborated in a discussion of how...

  15. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
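
    The flavour of the power calculation described can be reproduced with a short simulation: draw gamma-distributed whole-body Se concentrations whose mean sits a fixed increment above a management threshold and count how often a one-sided test detects the exceedance. The mean-to-variance relationship and the test used below are assumptions for illustration, not the fitted empirical relation from the study.

```python
# Illustrative re-creation of the style of calculation described: Se
# concentrations are drawn from a gamma distribution whose variance follows an
# assumed power-law function of the mean, and power is the fraction of
# simulated samples whose one-sided t-test detects a mean above the threshold.
import numpy as np
from scipy import stats

def power_above_threshold(n_fish, threshold, delta, alpha=0.05, n_boot=2000, seed=7):
    rng = np.random.default_rng(seed)
    mean = threshold + delta
    var = 0.25 * mean ** 1.5              # assumed mean-to-variance relationship
    shape, scale = mean ** 2 / var, var / mean
    rejections = 0
    for _ in range(n_boot):
        sample = rng.gamma(shape, scale, size=n_fish)
        test = stats.ttest_1samp(sample, popmean=threshold, alternative="greater")
        rejections += test.pvalue < alpha
    return rejections / n_boot

print(power_above_threshold(n_fish=8, threshold=4.0, delta=1.0))
print(power_above_threshold(n_fish=8, threshold=8.0, delta=1.0))
```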

  16. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
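
    For readers unfamiliar with the estimators being compared, the classical method-of-moments (Matheron) semivariogram, the non-robust baseline in this comparison, can be computed as below; the robust and residual-maximum-likelihood variants are not reproduced. The plot size, sample size, and synthetic skewed data are assumptions.

```python
# Minimal sketch of the classical method-of-moments (Matheron) semivariogram
# estimator for irregularly spaced sample points.
import numpy as np

def empirical_variogram(coords, values, n_bins=10, max_lag=None):
    """Return lag-bin centers and semivariances, gamma(h) = mean of 0.5*(z_i - z_j)^2
    over point pairs whose separation falls in each distance bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)            # each pair counted once
    lags, semis = d[iu], sq[iu]
    max_lag = max_lag or lags.max()
    edges = np.linspace(0, max_lag, n_bins + 1)
    centers, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (lags > lo) & (lags <= hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(semis[mask].mean())
    return np.array(centers), np.array(gamma)

# Example: 150 random sampling locations on a 50 m x 50 m plot, skewed
# throughfall-like values (synthetic data).
rng = np.random.default_rng(3)
xy = rng.uniform(0, 50, size=(150, 2))
z = rng.lognormal(mean=1.0, sigma=0.6, size=150)
print(empirical_variogram(xy, z))
```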

  17. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at the estimation of the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across the Azov-Black Seas localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods in searching for the optimum sample size and the understanding about the expected precision level of the mean. Given the superior performance of the BLB relative to formulae with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5 with 1.6 and 1x mean of the CIs width, and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes in the range between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed to low values; a sample size of 10 host individuals yielded unreliable estimates.
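
    A bare-bones version of the Bag of Little Bootstraps used to judge the precision attached to a given host sample size might look like the following; the subset-size exponent, synthetic counts, and tuning constants are illustrative assumptions.

```python
# Minimal sketch of the Bag of Little Bootstraps (BLB) applied to the mean of
# highly aggregated parasite counts; data and tuning constants are illustrative.
import numpy as np

def blb_ci(counts, n_subsets=20, n_resamples=100, b_exponent=0.6, alpha=0.05, seed=11):
    """Average 95% percentile CI for the mean, estimated with the BLB."""
    rng = np.random.default_rng(seed)
    n = len(counts)
    b = int(np.ceil(n ** b_exponent))
    ci_bounds = []
    for _ in range(n_subsets):
        subset = rng.choice(counts, size=b, replace=False)
        # Each resample re-weights the b subset points so they represent n hosts.
        weights = rng.multinomial(n, np.full(b, 1.0 / b), size=n_resamples)
        means = weights @ subset / n
        ci_bounds.append(np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)]))
    return np.mean(ci_bounds, axis=0)

# Synthetic, highly aggregated (negative binomial) parasite counts on 80 hosts.
rng = np.random.default_rng(0)
counts = rng.negative_binomial(n=0.5, p=0.05, size=80)
lo, hi = blb_ci(counts)
print(f"mean abundance {counts.mean():.1f}, BLB 95% CI ({lo:.1f}, {hi:.1f})")
```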

  18. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. Exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.

  19. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    ple size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  20. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  1. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  2. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  3. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards questions). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness

  4. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
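
    The PPQ sample sizes quoted (299, 59, and 29) are consistent with the zero-failure success-run relation n = ln(1 − C)/ln(R) at 95% confidence, which the snippet below evaluates; the risk-level labels are taken from the abstract, while the formula is the standard success-run form rather than a reproduction of the authors' full Bayesian derivation.

```python
# Zero-failure success-run sample size, n = ln(1 - C) / ln(R), evaluated at the
# reliability levels quoted above (confidence C = 95%); used here only to
# reproduce the quoted figures.
import math

def success_run_n(reliability, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for risk, r in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
    print(risk, success_run_n(r))   # high 299, medium 59, low 29
```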

  5. Image analysis to measure sorting and stratification applied to sand-gravel experiments

    OpenAIRE

    Orrú, C.

    2016-01-01

    The main objective of this project is to develop new measuring techniques for providing detailed data on sediment sorting suitable for sand-gravel laboratory experiments. Such data will be of aid in obtaining new insights on sorting mechanisms and improving prediction capabilities of morphodynamic models. Two measuring techniques have been developed. The first technique is aimed at measuring the size stratification of a sand-gravel deposit through combining image analysis and a sediment remov...

  6. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  7. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  8. Fuel and combustion stratification study of Partially Premixed Combustion

    OpenAIRE

    Izadi Najafabadi, M.; Dam, N.; Somers, B.; Johansson, B.

    2016-01-01

    A relatively high level of stratification is one of the main advantages of Partially Premixed Combustion (PPC) over the Homogeneous Charge Compression Ignition (HCCI) concept. Fuel stratification smoothens heat release and improves controllability of this kind of combustion. However, the lack of a clear definition of “fuel and combustion stratifications” is obvious in the literature. Hence, it is difficult to compare stratification levels of different PPC strategies or other combustion concepts. T...

  9. Studies of thermal stratification in water pool

    International Nuclear Information System (INIS)

    Verma, P.K.; Chandraker, D.K.; Nayak, A.K.; Vijayan, P.K.

    2015-01-01

    Large water pools are used as a heat sink for various cooling systems used in industry. In the context of advanced nuclear reactors like the AHWR, such a pool is used as the ultimate heat sink for passive systems for decay heat removal and containment cooling. This system incorporates heat exchangers submerged in the large water pool. However, heat transfer by natural convection in the pool poses a problem of thermal stratification. Due to thermal stratification, hot layers of water accumulate over the relatively cold ones. The heat transfer performance of the heat exchanger deteriorates as hot fluid envelops it. In nuclear reactors, the walls of the pool are made of concrete and may be subjected to high temperatures due to thermal stratification, which is not desirable. In this paper, the concept of employing shrouds around the heat source is studied. These shrouds provide a bulk flow in the water pool, thereby facilitating mixing of hot and cold fluid, which eliminates stratification. The concept has been applied to a scaled model of the Gravity Driven Water Pool (GDWP) of AHWR, in which Isolation Condenser (IC) tubes are submerged for decay heat removal of AHWR using the ICS, and the thermal stratification phenomenon was predicted with and without shrouds. To demonstrate the effectiveness of the shroud arrangement and to validate the simulation methodology of RELAP5/Mod3.2, experiments have been conducted on a scaled model of the pool with and without shrouds. (author)

  10. Social Stratification in the Workplace in Nigeria

    Directory of Open Access Journals (Sweden)

    Emmanuel Obukovwo Okaka

    2017-06-01

    Full Text Available Nigerian society in the pre-colonial era was stratified according to royalty, military might, wealth and religious hierarchy, as the case may be. But with the advent of paid employment, social stratification shifted from a traditional format to one aligned with Western societies. The argument put forward is that social class in modern times has only been re-defined, thereby giving Nigeria a unique social stratification with a strong traditional/religious influence. This paper examined social stratification in Nigeria against the backdrop of the introduction of paid employment and the impact of this unique social classification in the workplace. In examining social stratification in the workplace, four hundred and eighty respondents were interviewed using a structured questionnaire in a one-time survey. The data collected indicate that seventy-nine percent of the surveyed group preferred to be classified by traditional or religious strata rather than by academic class, indicating that royalty takes the front seat in the stratification of Nigerian society even in the workplace. This scenario may account for the emphasis Nigerians place on traditional and religious titles over academic titles in almost all spheres of life, including the workplace. This calls for the strengthening of the traditional and religious institutions so that they can assist in imparting core social values to members of the society, while giving proper honour to those who are accomplished professionals in their various fields of endeavour.

  11. Turbulence and pollutant transport in urban street canyons under stable stratification: a large-eddy simulation

    Science.gov (United States)

    Li, X.

    2014-12-01

    Thermal stratification of the atmospheric surface layer has a strong impact on the land-atmosphere exchange of turbulent heat and pollutant fluxes. Few studies have been carried out on the interaction between the weakly to moderately stably stratified atmosphere and the urban canopy. This study performs a large-eddy simulation of a modeled street canyon within a weakly to moderately stable atmospheric boundary layer. To better resolve the smaller eddy sizes resulting from the stable stratification, a higher spatial and temporal resolution is used. The detailed flow structure and turbulence inside the street canyon are analyzed. The relationship between pollutant dispersion and the Richardson number of the atmosphere is investigated. Differences between these characteristics and those under neutral and unstable atmospheric boundary layers are emphasized.

  12. Pre-treatment risk stratification of prostate cancer patients: A critical review.

    Science.gov (United States)

    Rodrigues, George; Warde, Padraig; Pickles, Tom; Crook, Juanita; Brundage, Michael; Souhami, Luis; Lukka, Himu

    2012-04-01

    The use of accepted prostate cancer risk stratification groups based on prostate-specific antigen, T stage and Gleason score assists in therapeutic treatment decision-making, clinical trial design and outcome reporting. The utility of integrating novel prognostic factors into an updated risk stratification schema is an area of current debate. The purpose of this work is to critically review the available literature on novel pre-treatment prognostic factors and alternative prostate cancer risk stratification schema to assess the feasibility and need for changes to existing risk stratification systems. A systematic literature search was conducted to identify original research publications and review articles on prognostic factors and risk stratification in prostate cancer. Search terms included risk stratification, risk assessment, prostate cancer or neoplasms, and prognostic factors. Abstracted information was assessed to draw conclusions regarding the potential utility of changes to existing risk stratification schema. The critical review identified three specific clinically relevant potential changes to the most commonly used three-group risk stratification system: (1) the creation of a very-low risk category; (2) the splitting of intermediate-risk into a low- and high-intermediate risk groups; and (3) the clarification of the interface between intermediate- and high-risk disease. Novel pathological factors regarding high-grade cancer, subtypes of Gleason score 7 and percentage biopsy cores positive were also identified as potentially important risk-stratification factors. Multiple studies of prognostic factors have been performed to create currently utilized prostate cancer risk stratification systems. We propose potential changes to existing systems.

  13. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  14. Investigations on stratification devices for hot water stores

    DEFF Research Database (Denmark)

    Andersen, Elsa; Furbo, Simon; Hampel, Matthias

    2008-01-01

    The significance of the thermal stratification for the energy efficiency of small solar-thermal hot water heat stores is pointed out. Exemplary the thermal stratification build-up with devices already marketed as well as with devices still in development has been investigated experimentally...

  15. Consideration of clinicopathologic features improves patient stratification for multimodal treatment of gastric cancer.

    Science.gov (United States)

    Cho, In; Kwon, In Gyu; Guner, Ali; Son, Taeil; Kim, Hyoung-Il; Kang, Dae Ryong; Noh, Sung Hoon; Lim, Joon Seok; Hyung, Woo Jin

    2017-10-03

    Preoperative staging of gastric cancer with computed tomography alone exhibits poor diagnostic accuracy, which may lead to improper treatment decisions. We developed novel patient stratification criteria to select appropriate treatments for gastric cancer patients based on preoperative staging and clinicopathologic features. A total of 5352 consecutive patients who underwent gastrectomy for gastric cancer were evaluated. Preoperative stages were determined according to depth of invasion and nodal involvement on computed tomography. Logistic regression analysis was used to identify clinicopathological factors associated with the likelihood of proper patient stratification. The diagnostic accuracies of computed tomography scans for depth of invasion and nodal involvement were 67.1% and 74.1%, respectively. Among clinicopathologic factors, differentiated tumor histology, tumors smaller than 5 cm, and gross appearance of early gastric cancer on endoscopy were shown to be related to a more advanced stage of disease on preoperative computed tomography imaging than actual pathological stage. Additional consideration of undifferentiated histology, tumors larger than 5 cm, and grossly advanced gastric cancer on endoscopy increased the probability of selecting appropriate treatment from 75.5% to 94.4%. The addition of histology, tumor size, and endoscopic findings to preoperative staging improves patient stratification for more appropriate treatment of gastric cancer.

  16. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  17. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants, and the others were obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato with a 95% confidence interval width equal to 0.4, it is necessary to sample 275 plants in a 250m² greenhouse, and 200 plants in a 200m² greenhouse.
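
    The resampling logic of the study, finding the smallest n at which the bootstrap 95% confidence interval for r is no wider than 0.4, can be sketched as below on synthetic data standing in for a uniformity trial; the correlation strength and candidate sizes are assumptions.

```python
# Sketch of the resampling procedure described: bootstrap the Pearson
# correlation at each candidate sample size and report the first size whose
# 95% percentile CI is no wider than 0.4. The data are synthetic.
import numpy as np

def ci_width_for_n(x, y, n, n_boot=3000, seed=5):
    rng = np.random.default_rng(seed)
    r = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), size=n)          # resample with replacement
        r[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(r, [2.5, 97.5])
    return hi - lo

# Synthetic "uniformity trial": weak correlation, as between fruit size and number.
rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = 0.3 * x + rng.normal(size=1000)

for n in range(50, 401, 50):
    if ci_width_for_n(x, y, n) <= 0.4:
        print("smallest tested n meeting the 0.4 criterion:", n)
        break
```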

  18. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that a preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze if the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
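
    The core of the method under test, an empirical quantile-matching transfer function calibrated on one period and applied to another, is only a few lines; the sketch below also mimics the shrinking-calibration-slice idea of the experiment. All data are synthetic stand-ins, and the skill measure is deliberately crude.

```python
# Minimal empirical quantile-mapping sketch: calibrate a transfer function on
# one slice of data and apply it to another; shrinking the calibration slice
# mimics the sample-size experiment. Synthetic stand-ins for RCM and observed
# daily precipitation are used.
import numpy as np

def quantile_map(model_cal, obs_cal, model_out):
    """Map model values onto the observed distribution via matched quantiles."""
    q = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_cal, q)        # model quantiles (transfer-function x)
    oq = np.quantile(obs_cal, q)          # observed quantiles (transfer-function y)
    return np.interp(model_out, mq, oq)

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 3.0, size=30 * 365)              # "observed" daily precipitation
model = rng.gamma(2.0, 4.0, size=40 * 365)            # biased "model" output

for years in (30, 10, 2):                             # shrinking calibration sample
    cal = slice(0, years * 365)
    corrected = quantile_map(model[cal], obs[cal], model[30 * 365:])
    bias = abs(np.mean(corrected) - np.mean(obs))     # crude skill measure
    print(years, round(float(bias), 3))
```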

  19. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  20. A new stratification of mourning dove call-count routes

    Science.gov (United States)

    Blankenship, L.H.; Humphrey, A.B.; MacDonald, D.

    1971-01-01

    The mourning dove (Zenaidura macroura) call-count survey is a nationwide audio-census of breeding mourning doves. Recent analyses of the call-count routes have utilized a stratification based upon physiographic regions of the United States. An analysis of 5 years of call-count data, based upon stratification using potential natural vegetation, has demonstrated that this new stratification results in strata with greater homogeneity than the physiographic strata, provides lower error variance, and hence generates greater precision in the analysis without an increase in call-count routes. Error variance was reduced approximately 30 percent for the contiguous United States. This indicates that future analysis based upon the new stratification will result in an increased ability to detect significant year-to-year changes.

  1. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling. Previous studies on this step mainly concerned code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. A group of principles set for the development of this new sampling scheme includes completeness, appropriate stratification, optimum allocation, practicability and so on. In this report, the procedures of the new sampling scheme and its application are discussed. The calculation results illustrate that although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of a complementary cumulative distribution function can remain relatively stable in different trials of the probabilistic consequence assessment code. (author)

  2. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  3. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes of 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
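
    A minimal sketch of the kind of calculation discussed above, assuming the standard normal-approximation formula for a two-sample comparison of means: the per-group sample size is computed once from a hypothetical pilot SD and once from a one-sided upper confidence limit (UCL) of that SD, the more conservative choice recommended in the abstract. The pilot numbers are invented.

```python
# Hedged sketch: per-group sample size from a pilot SD versus from an upper
# confidence limit (UCL) of that SD. Normal-approximation formula; the pilot
# values below are hypothetical.
from scipy import stats
import math

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

def sd_ucl(sd, n_pilot, level=0.80):
    # One-sided upper confidence limit for sigma from the chi-square interval.
    df = n_pilot - 1
    return sd * math.sqrt(df / stats.chi2.ppf(1 - level, df))

pilot_sd, n_pilot, delta = 22.0, 15, 11.0   # hypothetical pilot study
print("n per group from pilot SD:      ", n_per_group(pilot_sd, delta))
print("n per group from 80% UCL of SD: ", n_per_group(sd_ucl(pilot_sd, n_pilot), delta))
```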

  4. The role of stratification on Indian Ocean mixing under global warming

    Science.gov (United States)

    Praveen, V.; Valsala, V.; Ravindran, A. M.

    2017-12-01

    The impact of changes in Indian Ocean stratification on mixing under global warming is examined. Previous studies have reported that global warming and the associated weakening of winds increase the stratification of the world ocean, leading to reduced mixing, increased acidity, reduced oxygen and thereby a reduction in productivity. However, these processes are not uniform and are also modulated by future changes in wind patterns. Our study evaluates the role of stratification and surface fluxes on mixing, focusing on the northern Indian Ocean. A dynamical downscaling study using the Regional Ocean Modelling System (ROMS), forced with stratification and surface fluxes from selected CMIP5 models, is presented. Results from an extensive set of historical and Representative Concentration Pathway 8.5 (RCP8.5) scenario simulations are used to quantify the distinctive role of stratification in mixing.

  5. Patient characteristics and stratification in medical treatment studies for metastatic colorectal cancer: a proposal for standardization of patient characteristic reporting and stratification.

    Science.gov (United States)

    Sorbye, H; Köhne, C-H; Sargent, D J; Glimelius, B

    2007-10-01

    Prognostic factors have the potential to determine the survival of patients to a greater extent than current antineoplastic agents. Despite this knowledge, there is no consensus on, first, what patient characteristics to report and, second, what stratification factors to use in metastatic colorectal cancer trials. Seven leading oncology and medical journals were reviewed for phase II and III publications reporting on medical treatment of metastatic colorectal cancer patients during 2001-2005. One hundred and forty-three studies with 21 214 patients were identified. The reporting of patient characteristics and use of stratification was noted. Age, gender, performance status, metastases location, number of sites and adjuvant chemotherapy were often reported (63-99%). Laboratory values such as alkaline phosphatase, lactate dehydrogenase and white blood cell count, repeatedly found to be of prognostic relevance, were rarely reported (5-9%). Stratification was used in all phase III trials; however, only study centre was used with any consistency. There is considerable inconsistency in the reporting of patient characteristics and use of stratification factors in metastatic colorectal cancer trials. We propose a standardization of patient characteristics reporting and stratification factors. A common set of characteristics and strata will aid in trial reporting, interpretation and future meta-analyses.

  6. Risk Stratification in Differentiated Thyroid Cancer: An Ongoing Process

    Directory of Open Access Journals (Sweden)

    Gal Omry-Orbach

    2016-01-01

    Full Text Available Thyroid cancer is an increasingly common malignancy, with a rapidly rising prevalence worldwide. The social and economic ramifications of the increase in thyroid cancer are multiple. Though mortality from thyroid cancer is low, and most patients will do well, the risk of recurrence is not insignificant, up to 30%. Therefore, it is important to accurately identify those patients who are more or less likely to be burdened by their disease over years and tailor their treatment plan accordingly. The goal of risk stratification is to do just that. The risk stratification process generally starts postoperatively with histopathologic staging, based on the AJCC/UICC staging system as well as others designed to predict mortality. These do not, however, accurately assess the risk of recurrence/persistence. Patients initially considered to be at high risk may ultimately do very well yet be burdened by frequent unnecessary monitoring. Conversely, patients initially thought to be low risk, may not respond to their initial treatment as expected and, if left unmonitored, may have higher morbidity. The concept of risk-adaptive management has been adopted, with an understanding that risk stratification for differentiated thyroid cancer is dynamic and ongoing. A multitude of variables not included in AJCC/UICC staging are used initially to classify patients as low, intermediate, or high risk for recurrence. Over the course of time, a response-to-therapy variable is incorporated, and patients essentially undergo continuous risk stratification. Additional tools such as biochemical markers, genetic mutations, and molecular markers have been added to this complex risk stratification process such that this is essentially a continuum of risk. In recent years, additional considerations have been discussed with a suggestion of pre-operative risk stratification based on certain clinical and/or biologic characteristics. With the increasing prevalence of thyroid cancer but

  7. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  8. The impact of social stratification on cultural consumption

    Directory of Open Access Journals (Sweden)

    Tomić Marta

    2016-01-01

    Full Text Available This paper examines theoretical perspectives, research approaches and research results concerning the relationship between social stratification and cultural consumption. The paper presents the main representatives of three sociological discourses: those who believe that class divisions still exist and influence social inequalities, especially in the domain of cultural consumption and tastes; authors and researchers who emphasize the impact of social stratification on the formation of cultural stratification; and a third group of advocates of theories of cultural consumption and individualization, who hold that cultural tastes are not determined by membership of a particular social class.

  9. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution function for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimates when sample size increased from 5 km to 100 km.
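
    The following sketch mimics only the comparison step named above (Kolmogorov-Smirnov distance between a sample-based distribution and the population distribution), applied to synthetic positional errors; it is not the buffer-based polygon methodology itself, and all numbers are hypothetical.

```python
# Toy illustration: how the Kolmogorov-Smirnov distance between a sample and
# the "population" of positional errors shrinks as sample size grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical positional errors for the whole population of polygons.
population_errors = rng.gamma(shape=2.0, scale=0.5, size=20_000)

for size in (50, 1000):          # stand-ins for a "small" and a "large" sample
    ks_stats = [stats.ks_2samp(rng.choice(population_errors, size=size),
                               population_errors).statistic
                for _ in range(200)]
    print(f"sample size {size:5d}: mean KS distance to population = {np.mean(ks_stats):.3f}")
```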

  10. Experimental studies on the thermal stratification and its influence on BLEVEs

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wensheng; Gong, Yanwu; Gao, Ting; Gu, Anzhong; Lu, Xuesheng [Institute of Refrigeration and Cryogenics, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2010-10-15

    The thermal stratification of Liquefied Petroleum Gas (LPG) and its effect on the occurrence of the boiling liquid expanding vapor explosion (BLEVE) have been investigated experimentally. Stratifications in liquid and vapor occur when the LPG tank is heated. The degree of the liquid stratification β increases with an increasing heat flux and decreasing filling ratio. The effect of stratification on the BLEVE has been examined with depressurization tests of LPG. The results show that the pressure recovery for the stratified LPG (β = 1.4) upon sudden depressurization is much lower than that for the isothermal LPG (β = 1). It can be concluded that the liquid stratification decreases the liquid energy and the occurrence of the BLEVE. (author)

  11. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.

  12. Cancer Stratification by Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Justus Weber

    2015-03-01

    Full Text Available The lack of specificity of traditional cytotoxic drugs has triggered the development of anticancer agents that selectively address specific molecular targets. An intrinsic property of these specialized drugs is their limited applicability for specific patient subgroups. Consequently, the generation of information about tumor characteristics is the key to exploit the potential of these drugs. Currently, cancer stratification relies on three approaches: Gene expression analysis and cancer proteomics, immunohistochemistry and molecular imaging. In order to enable the precise localization of functionally expressed targets, molecular imaging combines highly selective biomarkers and intense signal sources. Thus, cancer stratification and localization are performed simultaneously. Many cancer types are characterized by altered receptor expression, such as somatostatin receptors, folate receptors or Her2 (human epidermal growth factor receptor 2. Similar correlations are also known for a multitude of transporters, such as glucose transporters, amino acid transporters or hNIS (human sodium iodide symporter, as well as cell specific proteins, such as the prostate specific membrane antigen, integrins, and CD20. This review provides a comprehensive description of the methods, targets and agents used in molecular imaging, to outline their application for cancer stratification. Emphasis is placed on radiotracers which are used to identify altered expression patterns of cancer associated markers.

  13. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
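
    As a hedged illustration of the class of models such reports review, the sketch below fits a hit/miss POD curve as a logistic function of log flaw size to simulated inspection data and reads off a hypothetical a90; it is not the ENIQ procedure and says nothing about its recommended sample sizes.

```python
# Illustrative sketch only: fitting a hit/miss POD curve with logistic
# regression on log flaw size. Flaw sizes and outcomes are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
size_mm = rng.uniform(0.5, 10.0, 200)                   # simulated flaw sizes
true_pod = 1 / (1 + np.exp(-(np.log(size_mm) - np.log(2.0)) / 0.3))
hit = rng.binomial(1, true_pod)                         # simulated hit/miss outcomes

X = sm.add_constant(np.log(size_mm))
fit = sm.Logit(hit, X).fit(disp=False)

# Flaw size detected with 90% probability under the fitted curve.
a90 = np.exp((np.log(0.9 / 0.1) - fit.params[0]) / fit.params[1])
print(f"estimated a90: {a90:.2f} mm")
```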

  14. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  15. Assessing the joint effect of population stratification and sample selection in studies of gene-gene (environment) interactions

    Directory of Open Access Journals (Sweden)

    Cheng KF

    2012-01-01

    Full Text Available Abstract Background It is well known that the presence of population stratification (PS) may cause the usual test in case-control studies to produce spurious gene-disease associations. However, the impact of PS combined with sample selection (SS) is less known. In this paper, we provide a systematic study of the joint effect of PS and SS under a more general risk model containing genetic and environmental factors. We provide simulation results to show the magnitude of the bias and its impact on the type I error rate of the usual chi-square test under a wide range of PS levels and selection biases. Results The biases in the estimation of the main and interaction effects are quantified and their bounds derived. The estimated bounds can be used to compute conservative p-values for the association test. If the conservative p-value is smaller than the significance level, we can safely claim that the association test is significant regardless of whether PS or any selection bias is present. We also identify conditions for the null bias. The bias depends on the allele frequencies, exposure rates, gene-environment odds ratios and disease risks across subpopulations and the sampling of the cases and controls. Conclusion Our results show that the bias cannot be ignored even when the case and control data were matched on ethnicity. A real example is given to illustrate application of the conservative p-value. These results are useful to the genetic association studies of main and interaction effects.

  16. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  17. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and public have paid more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In the design of a post-marketing evaluation, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, sample size calculation methods for the efficacy and safety evaluation of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.

  18. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  19. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
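
    A toy sketch of the mechanism both abstracts describe: vital rates estimated from small samples are plugged into a stage-structured projection matrix, lambda is taken as its dominant eigenvalue, and sampling variance alone biases the estimate. The two-stage matrix structure and all rates below are hypothetical, not the authors' data.

```python
# Toy illustration (not the authors' demographic data): bias in lambda from
# sampling variance of vital-rate estimates in a two-stage projection matrix.
import numpy as np

rng = np.random.default_rng(2)
true_survival, true_growth, true_fecundity = 0.5, 0.3, 1.2   # hypothetical vital rates

def lam(surv, grow, fec):
    # Two-stage projection matrix: juveniles -> adults (adult survival fixed at 0.8).
    A = np.array([[surv * (1 - grow), fec],
                  [surv * grow,       0.8]])
    return np.max(np.real(np.linalg.eigvals(A)))

true_lambda = lam(true_survival, true_growth, true_fecundity)

for n in (10, 50, 500):                                      # individuals sampled per rate
    lams = []
    for _ in range(2000):
        s = rng.binomial(n, true_survival) / n               # estimated survival
        g = rng.binomial(n, true_growth) / n                 # estimated transition rate
        f = rng.poisson(true_fecundity * n) / n              # estimated fecundity
        lams.append(lam(s, g, f))
    print(f"n = {n:4d}: mean lambda = {np.mean(lams):.3f} (true = {true_lambda:.3f})")
```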

  20. Awortwi et al.: Mixing and stratification relationship on phytoplankton ...

    African Journals Online (AJOL)

    Awortwi et al.: Mixing and stratification relationship on phytoplankton of Lake Bosomtwe (Ghana). West African Journal of Applied Ecology, vol. 23(2), 2015: 43–62. The Relationship Between Mixing and Stratification Regime on the Phytoplankton of Lake Bosomtwe.

  1. Has climate change disrupted stratification patterns in Lake Victoria ...

    African Journals Online (AJOL)

    Has climate change disrupted stratification patterns in Lake Victoria, East Africa? ... Climate change may threaten the fisheries of Lake Victoria by increasing density differentials in the water column, thereby strengthening stratification and increasing the ... Keywords: deoxygenation, fisheries, global warming, thermocline

  2. Thermal stratification and fatigue stress analysis for pressurizer surge line

    International Nuclear Information System (INIS)

    Yu Xiaofei; Zhang Yixiong

    2011-01-01

    Thermal stratification of the pressurizer surge line induced by the fluid inside results in global bending moments, local thermal stresses, unexpected displacements and support loadings of the pipe system. In order to avoid a costly three-dimensional computation, a combined 1D/2D technique has been developed and implemented in this paper to analyze the thermal stratification and fatigue stress of the pressurizer surge line of the QINSHAN Phase II Extension Nuclear Power Project, using the computer codes SYSTUS and ROCOCO. According to the mechanical analysis results of stratification, the maximum stress and cumulative usage factor are obtained. The results indicate that the stress and fatigue intensity, with thermal stratification taken into account, satisfy the RCC-M criterion. (authors)

  3. Stratification of living organisms in ballast tanks: how do organism concentrations vary as ballast water is discharged?

    Science.gov (United States)

    First, Matthew R; Robbins-Wamsley, Stephanie H; Riley, Scott C; Moser, Cameron S; Smith, George E; Tamburri, Mario N; Drake, Lisa A

    2013-05-07

    Vertical migrations of living organisms and settling of particle-attached organisms lead to uneven distributions of biota at different depths in the water column. In ballast tanks, heterogeneity could lead to different population estimates depending on the portion of the discharge sampled. For example, concentrations of organisms exceeding a discharge standard may not be detected if sampling occurs during periods of the discharge when concentrations are low. To determine the degree of stratification, water from ballast tanks was sampled at two experimental facilities as the tanks were drained after water was held for 1 or 5 days. Living organisms ≥50 μm were counted in discrete segments of the drain (e.g., the first 20 min of the drain operation, the second 20 min interval, etc.), thus representing different strata in the tank. In 1 and 5 day trials at both facilities, concentrations of organisms varied among drain segments, and the patterns of stratification varied among replicate trials. From numerical simulations, the optimal sampling strategy for stratified tanks is to collect multiple time-integrated samples spaced relatively evenly throughout the discharge event.

  4. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  5. European environmental stratifications and typologies

    DEFF Research Database (Denmark)

    Hazeu, G.W,; Metzger, M.J.; Mücher, C.A.

    2011-01-01

    A range of new spatial datasets classifying the European environment has been constructed over the last few years. These datasets share the common objective of dividing European environmental gradients into convenient units, within which objects and variables of interest have relatively homogeneous ... scale. This paper provides an overview of five recent European stratifications and typologies, constructed for contrasting objectives, and differing in spatial and thematic detail. These datasets are: the Environmental Stratification (EnS), the European Landscape Classification (LANMAP), the Spatial ... their limitations and challenges. As such, they provide a sound basis for describing the factors affecting the robustness of such datasets. The latter is especially relevant, since there is likely to be further interest in European environmental assessment. In addition, advances in data availability and analysis ...

  6. Experiments and MPS analysis of stratification behavior of two immiscible fluids

    Energy Technology Data Exchange (ETDEWEB)

    Li, Gen, E-mail: ligen@fuji.waseda.jp [Cooperative Major in Nuclear Energy, Waseda University, 3-4-1, Okubo, Shinjuku-ku, Tokyo 169-8555 (Japan); Oka, Yoshiaki [Cooperative Major in Nuclear Energy, Waseda University, 3-4-1, Okubo, Shinjuku-ku, Tokyo 169-8555 (Japan); Furuya, Masahiro; Kondo, Masahiro [Nuclear Technology Research Laboratory, Central Research Institute of Electric Power Industry (CRIEPI), 2-11-1 Iwado-kita, Komae, Tokyo 201-8511 (Japan)

    2013-12-15

    Highlights: • Improving numerical stability of MPS method. • Implicitly calculating viscous term in momentum equation for highly viscous fluids. • Validation of the enhanced MPS method by analyzing dam break problem. • Various stratification behavior analysis by experiments and simulations. • Sensitivity analysis of the effects of the fluid viscosity and density difference. - Abstract: Stratification behavior is of great significance in the late in-vessel stage of a core melt severe accident of a nuclear reactor. Conventional numerical methods have difficulties in analyzing stratification processes accompanied by a free surface without depending on empirical correlations. The Moving Particle Semi-implicit (MPS) method, which calculates free surface and multiphase flow without empirical equations, is applicable for analyzing the stratification behavior of fluids. In the present study, the original MPS method was improved to simulate the stratification behavior of two immiscible fluids. The improved MPS method was validated by simulating the classical dam break problem. Then, the stratification processes of two fluid columns and of injected fluid were investigated through experiments and simulations, using silicone oil and salt water as the simulant materials. The effects of fluid viscosity and density difference on stratification behavior were also investigated through sensitivity analyses in the simulations. Typical fluid configurations at various parametric and geometrical conditions were observed and well predicted by the improved MPS method.

  7. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
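
    A back-of-the-envelope sketch of the adjustment idea reported above, using only the numbers quoted in the abstract (roughly 14 per cent more clusters, i.e. a relative efficiency near 0.88, and a second-order PQL conversion factor of at most about 1.25); it does not reproduce the paper's PQL-specific derivation.

```python
# Hedged sketch: inflating a cluster count planned for equal cluster sizes to
# offset the efficiency loss from varying cluster sizes. The relative
# efficiency value is a stand-in consistent with the abstract's "14% more".
import math

def adjusted_cluster_count(k_equal, relative_efficiency):
    """Clusters needed when cluster sizes vary, given RE of unequal vs equal sizes."""
    return math.ceil(k_equal / relative_efficiency)

k = adjusted_cluster_count(100, 0.88)        # about 14% more clusters
print(f"clusters needed: {k} (instead of 100)")
# A further variance conversion factor (at most ~1.25 per the abstract) would be
# applied when moving from first-order MQL formulas to second-order PQL.
```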

  8. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
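
    The sketch below illustrates, in simplified form, the risk-calculation idea behind such designs: misclassification probabilities for a candidate sample size and decision rule under a beta-binomial model of over-dispersed (clustered) observations. The thresholds, intra-cluster correlation and rules are hypothetical, not the Rwanda design.

```python
# Simplified sketch of beta-binomial misclassification risks for LQAS-style
# decision rules. All parameter values are hypothetical.
from scipy.stats import betabinom

def betabinom_params(p, icc):
    """Convert a true proportion and intra-cluster correlation into (a, b)."""
    a = p * (1 - icc) / icc
    b = (1 - p) * (1 - icc) / icc
    return a, b

def misclassification_risks(n, d, p_low=0.50, p_high=0.80, icc=0.05):
    a_lo, b_lo = betabinom_params(p_low, icc)
    a_hi, b_hi = betabinom_params(p_high, icc)
    alpha = betabinom.sf(d - 1, n, a_lo, b_lo)   # low-quality area passes (X >= d)
    beta = betabinom.cdf(d - 1, n, a_hi, b_hi)   # high-quality area fails (X < d)
    return alpha, beta

for n, d in [(19, 13), (35, 24), (50, 34)]:
    a, b = misclassification_risks(n, d)
    print(f"n={n:3d}, rule d={d:3d}: P(pass | low)={a:.3f}, P(fail | high)={b:.3f}")
```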

  9. Characteristic and Preferences of Green Consumer Stratification As Bases to Formulating Marketing Strategies of Ecolabel-Certified Furniture

    Directory of Open Access Journals (Sweden)

    Ririn Wulandari

    2012-06-01

    question. The components used were: satisfaction, safety, socialization, and sustainability, as well as government policies which could open markets. The respondents were 408 potential consumers in Jakarta and its surroundings. The method used was purposive and convenience sampling, in which the survey was conducted at exhibitions and showrooms. The Ward method, stepwise discriminant analysis and biplot analysis were used to generate consumer stratifications. Before that, reliability tests were conducted using Cronbach's alpha. In addition, the data were explored and reduced using principal component analysis. Preference analysis was performed using Thurstone's Case V method. This study yields four stratifications of green consumers. There were similarities and differences in preferences across the stratifications for each component of the green marketing strategy, as well as for the marketing strategies of ecolabel-certified furniture aimed at the targeted consumers.

  10. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  11. Fuel and combustion stratification study of Partially Premixed Combustion

    NARCIS (Netherlands)

    Izadi Najafabadi, M.; Dam, N.; Somers, B.; Johansson, B.

    2016-01-01

    A relatively high level of stratification is one of the main advantages of Partially Premixed Combustion (PPC) over the Homogeneous Charge Compression Ignition (HCCI) concept. Fuel stratification smooths the heat release and improves controllability of this kind of combustion. However, the lack of a

  12. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. Relationship between thermal stratification and flow patterns in steam-quenching suppression pool

    International Nuclear Information System (INIS)

    Song, Daehun; Erkan, Nejdet; Jo, Byeongnam; Okamoto, Koji

    2015-01-01

    Highlights: • Thermal stratification mechanism by direct contact condensation is investigated. • Thermal stratification condition changes according to the flow pattern. • Thermal stratification depends on the force balance between buoyancy and momentum. • Flow pattern change was observed even in the same regime. • Flow pattern is affected by the sensitive force balance. - Abstract: This study aims to examine the relationship between thermal stratification and flow patterns in a steam-quenching suppression pool using particle image velocimetry. Thermal stratification was experimentally evaluated in a depressurized water pool under different steam mass flux conditions. The time evolution of the temperature profile of the suppression pool was presented with the variation of condensation regimes, and steam condensation processes were visualized using a high-speed camera. The thermal stratification condition was classified into full mixing, gradual thermal stratification, and developed thermal stratification. It was found that the condition was determined by the flow patterns depending on the force balance between buoyancy and momentum. The force balance affected both the condensation regime and the flow pattern, and hence, the flow pattern was changed with the condensation regime. However, the force balance had a sensitive influence on the flow in the pool; therefore, distinct flow patterns were observed even in the same condensation regime.

  14. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered as indicators of the importance of an article and post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  15. Polar ocean stratification in a cold climate.

    Science.gov (United States)

    Sigman, Daniel M; Jaccard, Samuel L; Haug, Gerald H

    2004-03-04

    The low-latitude ocean is strongly stratified by the warmth of its surface water. As a result, the great volume of the deep ocean has easiest access to the atmosphere through the polar surface ocean. In the modern polar ocean during the winter, the vertical distribution of temperature promotes overturning, with colder water over warmer, while the salinity distribution typically promotes stratification, with fresher water over saltier. However, the sensitivity of seawater density to temperature is reduced as temperature approaches the freezing point, with potential consequences for global ocean circulation under cold climates. Here we present deep-sea records of biogenic opal accumulation and sedimentary nitrogen isotopic composition from the Subarctic North Pacific Ocean and the Southern Ocean. These records indicate that vertical stratification increased in both northern and southern high latitudes 2.7 million years ago, when Northern Hemisphere glaciation intensified in association with global cooling during the late Pliocene epoch. We propose that the cooling caused this increased stratification by weakening the role of temperature in polar ocean density structure so as to reduce its opposition to the stratifying effect of the vertical salinity distribution. The shift towards stratification in the polar ocean 2.7 million years ago may have increased the quantity of carbon dioxide trapped in the abyss, amplifying the global cooling.

  16. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence the calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
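
    A rough sketch of the general idea, not the authors' algorithm: for candidate validation sample sizes, repeatedly simulate validation sets and check how stably the AUC is estimated (the published algorithm additionally bootstraps an estimated calibration index). The risk score and event rate below are hypothetical.

```python
# Illustrative only: stability of the validation AUC as a function of the
# validation sample size, on simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def simulate_cohort(n, event_rate=0.15):
    score = rng.normal(size=n)                            # hypothetical risk score
    p = 1 / (1 + np.exp(-(np.log(event_rate / (1 - event_rate)) + 1.0 * score)))
    return score, rng.binomial(1, p)

for n in (200, 500, 2000):
    aucs = []
    for _ in range(500):
        s, y = simulate_cohort(n)
        if y.sum() in (0, n):                             # skip degenerate draws
            continue
        aucs.append(roc_auc_score(y, s))
    print(f"n={n:5d}: AUC {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}, "
          f"~{int(n * 0.15)} expected events")
```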

  17. Breakup of last glacial deep stratification in the South Pacific

    Science.gov (United States)

    Basak, Chandranath; Fröllje, Henning; Lamy, Frank; Gersonde, Rainer; Benz, Verena; Anderson, Robert F.; Molina-Kescher, Mario; Pahnke, Katharina

    2018-02-01

    Stratification of the deep Southern Ocean during the Last Glacial Maximum is thought to have facilitated carbon storage and subsequent release during the deglaciation as stratification broke down, contributing to atmospheric CO2 rise. Here, we present neodymium isotope evidence from deep to abyssal waters in the South Pacific that confirms stratification of the deepwater column during the Last Glacial Maximum. The results indicate a glacial northward expansion of Ross Sea Bottom Water and a Southern Hemisphere climate trigger for the deglacial breakup of deep stratification. It highlights the important role of abyssal waters in sustaining a deep glacial carbon reservoir and Southern Hemisphere climate change as a prerequisite for the destabilization of the water column and hence the deglacial release of sequestered CO2 through upwelling.

  18. GOTHIC code simulation of thermal stratification in POOLEX facility

    International Nuclear Information System (INIS)

    Li, H.; Kudinov, P.

    2009-07-01

    Pressure suppression pool is an important element of BWR containment. It serves as a heat sink and steam condenser to prevent containment pressure buildup during loss of coolant accident or safety relief valve opening during normal operations of a BWR. Insufficient mixing in the pool, in case of low mass flow rate of steam, can cause development of thermal stratification and reduction of pressure suppression pool capacity. For reliable prediction of mixing and stratification phenomena validation of simulation tools has to be performed. Data produced in POOLEX/PPOOLEX facility at Lappeenranta University of Technology about development of thermal stratification in a large scale model of a pressure suppression pool is used for GOTHIC lumped and distributed parameter validation. Sensitivity of GOTHIC solution to different boundary conditions and grid convergence study for 2D simulations of POOLEX STB-20 experiment are performed in the present study. CFD simulation was carried out with FLUENT code in order to get additional insights into physics of stratification phenomena. In order to support development of experimental procedures for new tests in the PPOOLEX facility lumped parameter pre-test GOTHIC simulations were performed. Simulations show that drywell and wetwell pressures can be kept within safety margins during a long transient necessary for development of thermal stratification. (au)

  19. GOTHIC code simulation of thermal stratification in POOLEX facility

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Kudinov, P. (Royal Institute of Technology (KTH) (Sweden))

    2009-07-15

    Pressure suppression pool is an important element of BWR containment. It serves as a heat sink and steam condenser to prevent containment pressure buildup during loss of coolant accident or safety relief valve opening during normal operations of a BWR. Insufficient mixing in the pool, in case of low mass flow rate of steam, can cause development of thermal stratification and reduction of pressure suppression pool capacity. For reliable prediction of mixing and stratification phenomena validation of simulation tools has to be performed. Data produced in POOLEX/PPOOLEX facility at Lappeenranta University of Technology about development of thermal stratification in a large scale model of a pressure suppression pool is used for GOTHIC lumped and distributed parameter validation. Sensitivity of GOTHIC solution to different boundary conditions and grid convergence study for 2D simulations of POOLEX STB-20 experiment are performed in the present study. CFD simulation was carried out with FLUENT code in order to get additional insights into physics of stratification phenomena. In order to support development of experimental procedures for new tests in the PPOOLEX facility lumped parameter pre-test GOTHIC simulations were performed. Simulations show that drywell and wetwell pressures can be kept within safety margins during a long transient necessary for development of thermal stratification. (au)

  20. Stratification-Based Outlier Detection over the Deep Web.

    Science.gov (United States)

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over the deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in the deep web.

  1. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) have established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane foam (PUF) of selected porosity both as a sampling substrate and as a particle size separator. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  2. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by ''attributes'' involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications, are clearly described, and the process is put in a form that allows systematic generalization
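
    As a hedged illustration of the accuracy-versus-sample-size relationships such schemes rely on, the sketch below uses the textbook attribute-sampling relation for detecting at least one of M defective items in a population of N with probability 1 - beta; it is not necessarily the exact formulation used in the paper, and the numbers are hypothetical.

```python
# Hedged sketch: classical attribute-sampling size at a single level of a
# multilevel inspection scheme. Population and defect numbers are hypothetical.
import math

def attribute_sample_size(N, M, beta=0.05):
    """n such that the probability of missing all M defects is about beta,
    using the approximation (1 - n/N)**M = beta."""
    return math.ceil(N * (1 - beta ** (1.0 / M)))

print(attribute_sample_size(N=300, M=10, beta=0.05))   # items to verify at this level
```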

  3. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  4. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
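
    A compact illustration of the multinomial-weighting idea for bootstrapping Pearson's correlation, written as a sketch in Python/NumPy rather than the authors' R code; variable names are made up.

    ```python
    import numpy as np

    def vectorized_bootstrap_corr(x, y, B=10_000, seed=1):
        """Bootstrap Pearson's r without resampling the data: draw multinomial counts
        as weights and compute all B weighted correlations via matrix products."""
        rng = np.random.default_rng(seed)
        n = len(x)
        W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # B x n weights, rows sum to 1
        mx, my = W @ x, W @ y                                     # weighted means
        cov = W @ (x * y) - mx * my                               # weighted covariances
        vx = W @ (x * x) - mx ** 2
        vy = W @ (y * y) - my ** 2
        return cov / np.sqrt(vx * vy)                             # B bootstrap replicates of r

    x = np.random.default_rng(0).normal(size=50)
    y = 0.6 * x + np.random.default_rng(2).normal(size=50)
    reps = vectorized_bootstrap_corr(x, y)
    print(np.percentile(reps, [2.5, 97.5]))                       # simple percentile CI
    ```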

  5. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
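
    The flavor of the idea can be sketched as follows (not the authors' general-linear-model machinery): treat the variance estimate as data, form a chi-square confidence interval for the variance, and propagate each bound through a standard two-sample t-test power formula. Names and numbers are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def t_test_power(delta, sigma, n, alpha=0.05):
        """Power of a two-sided two-sample t-test with n subjects per group."""
        df = 2 * n - 2
        ncp = delta / (sigma * np.sqrt(2.0 / n))
        tcrit = stats.t.ppf(1 - alpha / 2, df)
        return 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)

    # Variance estimate s2 with nu degrees of freedom taken from an existing study.
    s2, nu, conf = 4.0, 30, 0.95
    lo = nu * s2 / stats.chi2.ppf(1 - (1 - conf) / 2, nu)   # lower bound for sigma^2
    hi = nu * s2 / stats.chi2.ppf((1 - conf) / 2, nu)       # upper bound for sigma^2

    delta, n = 1.5, 25
    print("power at the variance estimate:", t_test_power(delta, np.sqrt(s2), n))
    print("power bounds:", t_test_power(delta, np.sqrt(hi), n),  # pessimistic (larger sigma)
          t_test_power(delta, np.sqrt(lo), n))                   # optimistic (smaller sigma)
    ```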

  6. Effect of layout on surge line thermal stratification

    International Nuclear Information System (INIS)

    Lai Jianyong; Huang Wei

    2011-01-01

    In order to analyze and evaluate the effect of layout on thermal stratification in the PWR pressurizer surge line, numerical simulations using the Computational Fluid Dynamics (CFD) method were performed for 6 layout variants based on 2 improvement schemes, i.e., increasing the obliquity of the quasi-horizontal section and adding a vertical pipe between the quasi-horizontal section and the next elbow, and the maximum temperature differences in the quasi-horizontal section of the surge line for the various layouts under different flow rates were obtained. The comparison shows that increasing the obliquity of the quasi-horizontal section can mitigate the thermal stratification phenomena but cannot eliminate them, while adding a vertical pipe between the quasi-horizontal section and the next elbow can effectively mitigate and eliminate the thermal stratification phenomena. (authors)

  7. Horizontal Stratification in Access to Danish University Programmes

    DEFF Research Database (Denmark)

    Munk, Martin D.; Thomsen, Jens Peter

    2018-01-01

    a relatively detailed classification of parents’ occupations to determine how students are endowed with different forms of capital, even when their parents would typically be characterised as belonging to the same social group. Second, we distinguish among disciplines and among university institutions...... to explain the dynamics of horizontal stratification in the Danish university system. Using unique and exhaustive register data, including all higher education institutions and the entire 1984 cohort as of the age of 24, we uncover distinct differences in the magnitude and type of horizontal stratification...... in different fields of study and university institutions. Most importantly, we find distinct patterns of horizontal stratification by field of study and parental occupation that would have remained hidden had we used more aggregated classifications for field of study and social origin....

  8. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
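
    As a hedged illustration of the kind of formula involved, the sketch below uses one common normal-approximation sample-size formula for a non-inferiority comparison of two proportions; the article's own formulas and SAS code should be consulted for the exact designs it covers, and the rates and margin here are made up.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_noninferiority_two_props(p_t, p_c, margin, alpha=0.025, power=0.80):
        """Per-group sample size for testing H0: p_t - p_c <= -margin against
        H1: p_t - p_c > -margin, using a normal approximation (one-sided alpha)."""
        z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
        var = p_t * (1 - p_t) + p_c * (1 - p_c)
        return ceil((z_a + z_b) ** 2 * var / ((p_t - p_c + margin) ** 2))

    # Experimental and control response rates both 0.80, non-inferiority margin 0.10:
    print(n_noninferiority_two_props(0.80, 0.80, 0.10))
    ```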

  9. Stratification in SNR-300 outlet plenum

    International Nuclear Information System (INIS)

    Reinders, R.

    1983-01-01

    In the inner outlet plenum of the SNR-300 under steady state conditions a large toroidal vortex is expected. The main flow passes through the gap between dip plate and shield vessel to the outer annular space. Only 3% of the flow passes through the 24 emergency cooling holes situated in the shield vessel. The sodium leaves the reactor tank through the 3 symmetrically arranged outlet nozzles. During a scram, flow rates and temperatures decrease simultaneously, so it is expected that stratification occurs in the inner outlet plenum. A measure of stratification effects is the Archimedes Number Ar, the ratio of (negative) buoyancy forces to kinetic energy. (The Archimedes Number is nearly identical to the Richardson Number.) For values Ar>1 stratification can occur. Under the assumption of stratification the code TIRE was developed, which is applicable only to the period beyond roughly 50 sec after scram. This code serves for long term calculations. As the equations are very simple, it is a very fast code which makes it possible to calculate transients over several hours of real time. This code mainly has to take into account the pressure difference between inner plenum and outlet annulus caused by geodetic pressure. That force is in equilibrium with the pressure drop over the gap and holes in the shield vessel. For more detailed calculations of flow pattern and temperature distribution the codes MIX and INKO 2T are applied. MIX was developed and validated at ANL; INKO 2T is a development of INTERATOM and is under validation. Mock-up experiments were carried out with water to simulate the transient behavior of the SNR-300 outlet plenum. Calculations obtained by INKO 2T for steady state and the transient are shown for the flow pattern. Results of measurements also prove that stratification begins after about 30 sec. Measurements and detailed calculations show that it is admissible to use the code TIRE for the long term calculations. Calculations for a scram
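
    A small back-of-the-envelope check of the stratification criterion described above, with Ar expressed here in a Richardson-type form g·β·ΔT·L / u²; the specific values are invented for illustration and are not SNR-300 data.

    ```python
    def archimedes_number(g, beta, delta_T, L, u):
        """Ratio of buoyancy to kinetic energy (Richardson-type form);
        values well above 1 suggest stratification can occur."""
        return g * beta * delta_T * L / u ** 2

    # Illustrative post-scram values (thermal expansion, temperature difference,
    # length scale, and velocity are all assumptions, not actual plant data):
    Ar = archimedes_number(g=9.81, beta=2.8e-4, delta_T=100.0, L=1.0, u=0.05)
    print(f"Ar = {Ar:.1f}  ->  stratification expected: {Ar > 1}")
    ```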

  10. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
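
    One loose way to mimic the extrapolation idea in code is to fit a saturating curve to richness estimates obtained from nested subsets of increasing size and read off its asymptote; this is only an illustration of the spirit of the approach, not the authors' exact procedure, and the data points below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical richness estimates (e.g., ACE) computed on nested subsets of a clone library.
    library_sizes = np.array([1000, 2000, 4000, 6000, 9000, 13001])
    richness_est = np.array([5200, 8900, 13200, 16100, 18900, 20900])

    def saturating(n, s_max, k):
        """Michaelis-Menten-type curve: the estimate approaches s_max as library size grows."""
        return s_max * n / (k + n)

    (s_max, k), _ = curve_fit(saturating, library_sizes, richness_est, p0=(30000, 5000))
    print(f"asymptotic ('sample size-unbiased') richness ~ {s_max:.0f}")
    ```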

  11. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  12. Vertical stratification of bat assemblages in flooded and unflooded Amazonian forests

    Directory of Open Access Journals (Sweden)

    Maria João Ramos PEREIRA, João Tiago MARQUES, Jorge M. PALMEIRIM

    2010-08-01

    Full Text Available Tropical rainforests usually have multiple strata that result in a vertical stratification of ecological opportunities for animals. We investigated if this stratification influences the way bats use the vertical space in flooded and unflooded forests of the Central Amazon. Using mist-nets set in the canopy (17 to 35 m high) and in the understorey (0 to 3 m high), we sampled four sites in upland unflooded forests (terra firme), three in forests seasonally flooded by nutrient-rich water (várzea), and three in forests seasonally flooded by nutrient-poor water (igapó). Using rarefaction curves we found that species richness in the understorey and canopy were very similar. An ordination analysis clearly separated the bat assemblages of the canopy from those of the understorey in both flooded and unflooded habitats. Gleaning carnivores were clearly associated with the understorey, whereas frugivores were abundant in both strata. Of the frugivores, Carollinae and some Stenodermatinae were understorey specialists, but several Stenodermatinae mostly used the canopy. The first group mainly includes species that, in general, feed on fruits of understorey shrubs, whereas the second group feeds on figs and other canopy fruits. We conclude that vertical stratification in bat communities occurs even within forests with lower canopy heights, such as Amazonian seasonally flooded forests, and that the vertical distribution of bat species is closely related to their diet and foraging behaviour [Current Zoology 56 (4): 469–478, 2010].

  13. A probabilistic topic model for clinical risk stratification from electronic health records.

    Science.gov (United States)

    Huang, Zhengxing; Dong, Wei; Duan, Huilong

    2015-12-01

    Risk stratification aims to provide physicians with the accurate assessment of a patient's clinical risk such that an individualized prevention or management strategy can be developed and delivered. Existing risk stratification techniques mainly focus on predicting the overall risk of an individual patient in a supervised manner, and, at the cohort level, often offer little insight beyond a flat score-based segmentation from the labeled clinical dataset. To this end, in this paper, we propose a new approach for risk stratification by exploring a large volume of electronic health records (EHRs) in an unsupervised fashion. Along this line, this paper proposes a novel probabilistic topic modeling framework called probabilistic risk stratification model (PRSM) based on Latent Dirichlet Allocation (LDA). The proposed PRSM recognizes a patient clinical state as a probabilistic combination of latent sub-profiles, and generates sub-profile-specific risk tiers of patients from their EHRs in a fully unsupervised fashion. The achieved stratification results can be easily recognized as high-, medium- and low-risk, respectively. In addition, we present an extension of PRSM, called weakly supervised PRSM (WS-PRSM) by incorporating minimum prior information into the model, in order to improve the risk stratification accuracy, and to make our models highly portable to risk stratification tasks of various diseases. We verify the effectiveness of the proposed approach on a clinical dataset containing 3463 coronary heart disease (CHD) patient instances. Both PRSM and WS-PRSM were compared with two established supervised risk stratification algorithms, i.e., logistic regression and support vector machine, and showed the effectiveness of our models in risk stratification of CHD in terms of the Area Under the receiver operating characteristic Curve (AUC) analysis. As well, in comparison with PRSM, WS-PRSM has over 2% performance gain, on the experimental dataset, demonstrating that
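
    A rough sketch of the unsupervised ingredient (LDA on counts of clinical events/features per patient, followed by reading off each patient's dominant latent sub-profile); this uses scikit-learn's generic LDA rather than the paper's PRSM/WS-PRSM models, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    # Synthetic EHR-like data: counts of 50 clinical features for 300 patients.
    X = rng.poisson(lam=1.0, size=(300, 50))

    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    theta = lda.fit_transform(X)            # per-patient mixture over 3 latent sub-profiles
    dominant = theta.argmax(axis=1)         # assign each patient to a dominant sub-profile

    # In the paper, sub-profiles are interpreted as risk tiers (e.g., low/medium/high)
    # by inspecting their feature distributions; here we only show the group sizes.
    print(np.bincount(dominant, minlength=3))
    ```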

  14. Principal stratification in causal inference.

    Science.gov (United States)

    Frangakis, Constantine E; Rubin, Donald B

    2002-03-01

    Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.
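
    A tiny illustration of the bookkeeping behind principal strata for a binary treatment and a binary post-treatment variable (the classic compliance example from the noncompliance application mentioned above); the potential values are hypothetical.

    ```python
    # Each unit has two potential values of the post-treatment variable S:
    # S(0) under control and S(1) under treatment. The joint pair defines the stratum.
    STRATA = {
        (0, 0): "never-taker",
        (0, 1): "complier",
        (1, 0): "defier",
        (1, 1): "always-taker",
    }

    def principal_stratum(s0: int, s1: int) -> str:
        """Principal strata are defined by (S(0), S(1)), which does not depend on the
        treatment actually assigned -- so they behave like pretreatment covariates."""
        return STRATA[(s0, s1)]

    print(principal_stratum(0, 1))  # -> "complier"
    ```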

  15. Stratification for smoking in case-cohort studies of genetic polymorphisms and lung cancer

    DEFF Research Database (Denmark)

    Sørensen, Mette; López, Ana García; Andersen, Per Kragh

    2009-01-01

    The risk estimates obtained in studies of genetic polymorphisms and lung cancer differ markedly between studies, which might be due to chance or differences in study design, in particular the stratification/match of the comparison group. The effect of different strategies for stratification and adjustment for smoking on the estimated effect of polymorphisms on lung cancer risk was explored in the case-cohort design. We used an empirical and a statistical simulation approach. The stratification strategies were: no smoking stratification, stratification for smoking status and stratification for smoking duration. The study base was a prospective follow-up study with 57,053 participants. In the simulation approach the glutathione S-transferase T1 null polymorphism, as a model of any polymorphism, was added to simulated data in two different ways, assuming either absence or presence of association

  16. Vertical Stratification of Soil Phosphorus as a Concern for Dissolved Phosphorus Runoff in the Lake Erie Basin.

    Science.gov (United States)

    Baker, David B; Johnson, Laura T; Confesor, Remegio B; Crumrine, John P

    2017-11-01

    During the re-eutrophication of Lake Erie, dissolved reactive phosphorus (DRP) loading and concentrations to the lake have nearly doubled, while particulate phosphorus (PP) has remained relatively constant. One potential cause of increased DRP concentrations is P stratification, or the buildup of soil-test P (STP) in the upper soil layer. Stratified soil samples (0-5 or 0-2.5 cm) were collected alongside normal agronomic samples (0-20 cm) (n = 1758 fields). The mean STP level in the upper 2.5 cm was 55% higher than the mean of agronomic samples used for fertilizer recommendations. The amounts of stratification were highly variable and did not correlate with agronomic STPs (Spearman's rank correlation = 0.039, P = 0.178). Agronomic STP in 70% of the fields was within the buildup or maintenance ranges for corn (Zea mays L.) and soybeans [Glycine max (L.) Merr.] (0-46 mg kg-1 Mehlich-3 P). The cumulative risks for DRP runoff from the large number of fields in the buildup and maintenance ranges exceeded the risks from fields above those ranges. Reducing stratification by a one-time soil inversion has the potential for larger and quicker reductions in DRP runoff risk than practices related to drawing down agronomic STP levels. Periodic soil inversion and mixing, targeted by stratified STP data, should be considered a viable practice to reduce DRP loading to Lake Erie. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  17. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  18. Statistical issues in reporting quality data: small samples and casemix variation.

    Science.gov (United States)

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
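
    A minimal sketch of the kind of reliability-weighted shrinkage estimate alluded to above, assuming a simple normal/normal setup with known within- and between-unit variances; all numbers are illustrative.

    ```python
    import numpy as np

    def shrink_unit_means(unit_means, unit_ns, within_var, between_var):
        """Empirical-Bayes style shrinkage: each unit's mean is pulled toward the
        overall mean with weight equal to its reliability,
        reliability = between_var / (between_var + within_var / n)."""
        unit_means = np.asarray(unit_means, float)
        unit_ns = np.asarray(unit_ns, float)
        overall = np.average(unit_means, weights=unit_ns)
        reliability = between_var / (between_var + within_var / unit_ns)
        return reliability * unit_means + (1 - reliability) * overall, reliability

    means = [0.62, 0.75, 0.90, 0.55]      # observed quality scores per unit (made up)
    ns = [12, 40, 8, 200]                 # very unequal sample sizes
    shrunk, rel = shrink_unit_means(means, ns, within_var=0.20, between_var=0.01)
    print(np.round(shrunk, 3), np.round(rel, 2))   # small-n units shrink most
    ```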

  19. Predictions of stratification in cold leg components using virtual noding schemes

    International Nuclear Information System (INIS)

    Piper, R.B.; Hassan, Y.A.; Banerjee, S.S.; Barsamian, H.R.; Cebull, P.P.

    1996-01-01

    In this investigation, a virtual noding scheme is used with RELAP5/MOD3.2 to capture thermal stratification effects in a small-break loss-of-coolant accident (LOCA) simulation. A three-dimensional code (CFD-ACE) has also been used to observe the stratification effects in a similar situation. Stratification temperature differences of the simulations compare well with those of the experiment. The Froude number was also evaluated

  20. Combustion Stratification for Naphtha from CI Combustion to PPC

    KAUST Repository

    Vallinayagam, R.

    2017-03-28

    This study demonstrates the combustion stratification from conventional compression ignition (CI) combustion to partially premixed combustion (PPC). Experiments are performed in an optical CI engine at a speed of 1200 rpm for diesel and naphtha (RON = 46). The motored pressure at TDC is maintained at 35 bar and fuel MEP is kept constant at 5.1 bar to account for the difference in fuel properties between naphtha and diesel. A single injection strategy is employed and the fuel is injected at a pressure of 800 bar. A Photron FASTCAM SA4 camera that captures in-cylinder combustion at the rate of 10000 frames per second is employed. The captured high-speed video is processed to study the combustion homogeneity based on an algorithm reported in previous studies. Starting from late fuel injection timings, combustion stratification is investigated by advancing the fuel injection timings. For late start of injection (SOI), a direct link between SOI and combustion phasing is noticed. At early SOI, combustion phasing depends on both intake air temperature and SOI. In order to match the combustion phasing (CA50) of diesel, the intake air temperature is increased to 90°C for naphtha. The combustion stratification from CI to PPC is also investigated for various levels of dilution by displacing oxygen with nitrogen in the intake. The start of combustion (SOC) was delayed with the increase in dilution and, to compensate for this, the intake air temperature is increased. The mixture homogeneity is enhanced for higher dilution due to longer ignition delay. The results show that the high-speed images are initially blue and then turn yellow, indicating soot formation and oxidation. The luminosity of combustion images decreases with early SOI and increased dilution. The images are processed to generate the level of stratification based on the image intensity. The level of stratification is the same for diesel and naphtha at various SOI. When O2 concentration in the intake is decreased to 17.7% and 14

  1. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: 'How capable are such stability interpretations in drawing conclusions?' There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale, and (iii) that the stability interpretation might not be directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional scale stability variations will be quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined if the complete dataset consists of an appropriate sample size. (ii) Smaller subsets were created with similar
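
    For approach (i), the calculation is essentially a standard two-group comparison; a sketch of the normal-approximation sample size per group, using a mean and standard deviation of the kind taken from the full dataset (the numbers below are illustrative, not the study's values):

    ```python
    import math
    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Normal-approximation sample size per group for detecting a mean
        difference `delta` (e.g., in stability score between two aspects)."""
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

    # E.g., stability scores differing by 0.5 classes between aspects, SD of 1.2:
    print(n_per_group(delta=0.5, sd=1.2))
    ```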

  2. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
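
    A hedged sketch of the kind of comparison described above, using scikit-learn's epsilon-insensitive SVR against an MLP on a deliberately small synthetic training set; the features stand in for well-log curves and the target for log-permeability, and none of the data or hyperparameters come from the paper.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                                  # stand-ins for well-log features
    y = 2 * X[:, 0] - X[:, 1] ** 2 + 0.3 * rng.normal(size=200)    # stand-in for log-permeability

    X_train, y_train = X[:20], y[:20]                              # deliberately small training sample
    X_test, y_test = X[20:], y[20:]

    svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    mlp = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))

    for name, model in [("SVR", svr), ("MLP", mlp)]:
        model.fit(X_train, y_train)
        print(name, mean_squared_error(y_test, model.predict(X_test)))
    ```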

  3. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed a Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  4. Temperature Stratification in a Cryogenic Fuel Tank

    Science.gov (United States)

    Daigle, Matthew John; Smelyanskiy, Vadim; Boschee, Jacob; Foygel, Michael Gregory

    2013-01-01

    A reduced dynamical model describing temperature stratification effects driven by natural convection in a liquid hydrogen cryogenic fuel tank has been developed. It accounts for cryogenic propellant loading, storage, and unloading in the conditions of normal, increased, and microgravity. The model involves multiple horizontal control volumes in both liquid and ullage spaces. Temperature and velocity boundary layers at the tank walls are taken into account by using correlation relations. Heat exchange involving the tank wall is considered by means of the lumped-parameter method. By employing basic conservation laws, the model takes into consideration the major multi-phase mass and energy exchange processes involved, such as condensation-evaporation of the hydrogen, as well as flows of hydrogen liquid and vapor in the presence of pressurizing helium gas. The model involves a liquid hydrogen feed line and a tank ullage vent valve for pressure control. The temperature stratification effects are investigated, including in the presence of vent valve oscillations. A simulation of temperature stratification effects in a generic cryogenic tank has been implemented in Matlab and results are presented for various tank conditions.

  5. Drainage and Stratification Kinetics of Foam Films

    Science.gov (United States)

    Zhang, Yiran; Sharma, Vivek

    2014-03-01

    Baking bread, brewing cappuccino, pouring beer, washing dishes, shaving, shampooing, whipping eggs and blowing bubbles all involve creation of aqueous foam films. Foam lifetime, drainage kinetics and stability are strongly influenced by surfactant type (ionic vs non-ionic), and added proteins, particles or polymers modify typical responses. The rate at which fluid drains out from a foam film, i.e. drainage kinetics, is determined in the last stages primarily by molecular interactions and capillarity. Interestingly, for certain low molecular weight surfactants, colloids and polyelectrolyte-surfactant mixtures, a layered ordering of molecules, micelles or particles inside the foam films leads to a stepwise thinning phenomenon called stratification. Though stratification is observed in many confined systems including foam films containing particles or polyelectrolytes, films containing globular proteins seem not to show this behavior. Using a Scheludko-type cell, we experimentally study the drainage and stratification kinetics of horizontal foam films formed by protein-surfactant mixtures, and carefully determine how the presence of proteins influences the hydrodynamics and thermodynamics of foam films.

  6. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) percentage difference between the reported and calculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  7. Germination and development of pecan cultivar seedlings by seed stratification

    Directory of Open Access Journals (Sweden)

    Igor Poletto

    2015-12-01

    Full Text Available Abstract: The objective of this work was to evaluate the effect of seed stratification on germination rate, germination speed, and initial development of seedlings of six pecan (Carya illinoinensis) cultivars under subtropical climatic conditions in southern Brazil. For stratification, the seeds were placed in boxes with moist sand, in a cold chamber at 4°C, for 90 days. In the fourteenth week after sowing, the emergence speed index, total emergence, plant height, stem diameter, and number of leaves were evaluated. Seed stratification significantly improves the germination potential and morphological traits of the evaluated cultivars.

  8. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
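
    The distribution-comparison step can be illustrated with a two-sample Kolmogorov-Smirnov test on aspect-ratio values; the data below are synthetic stand-ins for the two gold nanorod samples, and the study's actual choice of non-parametric test may differ.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    # Aspect ratios (length/width) measured from TEM images of two hypothetical samples.
    sample_a = rng.normal(loc=3.2, scale=0.4, size=250)
    sample_b = rng.normal(loc=3.5, scale=0.4, size=250)

    stat, p = ks_2samp(sample_a, sample_b)
    print(f"KS statistic = {stat:.3f}, p = {p:.2e}")   # small p -> cumulative distributions differ
    ```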

  9. Eddy-driven stratification initiates North Atlantic spring phytoplankton blooms.

    Science.gov (United States)

    Mahadevan, Amala; D'Asaro, Eric; Lee, Craig; Perry, Mary Jane

    2012-07-06

    Springtime phytoplankton blooms photosynthetically fix carbon and export it from the surface ocean at globally important rates. These blooms are triggered by increased light exposure of the phytoplankton due to both seasonal light increase and the development of a near-surface vertical density gradient (stratification) that inhibits vertical mixing of the phytoplankton. Classically and in current climate models, that stratification is ascribed to a springtime warming of the sea surface. Here, using observations from the subpolar North Atlantic and a three-dimensional biophysical model, we show that the initial stratification and resulting bloom are instead caused by eddy-driven slumping of the basin-scale north-south density gradient, resulting in a patchy bloom beginning 20 to 30 days earlier than would occur by warming.

  10. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  11. Size stratification in a Gilbert delta due to a varying base level: flume experiments.

    Science.gov (United States)

    Chavarrias, Victor; Orru, Clara; Viparelli, Enrica; Vide, Juan Pedro Martin; Blom, Astrid

    2014-05-01

    mobile armor that covered the fluvial reach. This led to an initial coarsening of the brinkpoint load (and foreset deposit). Once the mobile armour was eroded, base level fall led to degradation of the finer substrate, which resulted in a fining of the brinkpoint load and foreset deposit. The relation between the sediment size stratification and the base level change may be used for the reconstruction of the paleo sea level from the stratigraphy of ancient Gilbert deltas.

  12. Near-Surface Effects of Free Atmosphere Stratification in Free Convection

    NARCIS (Netherlands)

    Mellado, Juan Pedro; Heerwaarden, van C.C.; Garcia, Jade Rachele

    2016-01-01

    The effect of a linear stratification in the free atmosphere on near-surface properties in a free convective boundary layer (CBL) is investigated by means of direct numerical simulation. We consider two regimes: a neutral stratification regime, which represents a CBL that grows into a residual

  13. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in sample allocation to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore, game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximate distribution, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and the correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) sample approximate-allocation with the correctly applied standard binomial approximation, (2) sample approximate-allocation with the improved binomial approximation, and (3) sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed in EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
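
    A small sketch of the underlying calculation (the smallest sample that detects at least one defective/falsified item with probability ≥ 1 − β), comparing the exact hypergeometric answer with the simple with-replacement binomial approximation; this is illustrative only and is not the program described in the record.

    ```python
    from scipy.stats import hypergeom, binom

    def n_hypergeom(N, D, beta=0.05):
        """Smallest n such that P(no defect found in n draws without replacement) <= beta."""
        for n in range(1, N + 1):
            if hypergeom.pmf(0, N, D, n) <= beta:
                return n
        return N

    def n_binom(N, D, beta=0.05):
        """Same question using the (with-replacement) binomial approximation."""
        p = D / N
        for n in range(1, N + 1):
            if binom.pmf(0, n, p) <= beta:
                return n
        return N

    N, D = 200, 10            # population of 200 items, 10 assumed defective
    print(n_hypergeom(N, D), n_binom(N, D))   # exact vs. approximate sample size
    ```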

  14. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^ estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
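
    The sample-size arithmetic described here can be reproduced approximately with a noncentral-t power calculation for a one-sample t-test, evaluated at an effect-size point estimate and at its CI limits; the values below are illustrative rather than the study's, and statsmodels is used here rather than the authors' software.

    ```python
    from statsmodels.stats.power import TTestPower

    def n_for_effect_size(es, power=0.80, alpha=0.05):
        """Smallest n giving the requested power for a two-sided one-sample t-test
        at effect size `es` (Cohen's d)."""
        return TTestPower().solve_power(effect_size=es, power=power, alpha=alpha,
                                        alternative="two-sided")

    # Illustrative estimate and 95% CI limits for the effect size (not the study's values):
    es_hat, es_lo, es_hi = 0.62, 0.20, 1.05
    for es in (es_hat, es_hi, es_lo):
        print(es, round(n_for_effect_size(es)))   # point estimate, then optimistic/pessimistic bounds
    ```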

  15. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two of the major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform and, using this data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that sub-type-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  16. Asymptomatic internal carotid artery stenosis and cerebrovascular risk stratification

    DEFF Research Database (Denmark)

    Nicolaides, Andrew N; Kakkos, Stavros K; Kyriacou, Efthyvoulos

    2010-01-01

    The purpose of this study was to determine the cerebrovascular risk stratification potential of baseline degree of stenosis, clinical features, and ultrasonic plaque characteristics in patients with asymptomatic internal carotid artery (ICA) stenosis.

  17. The Effect of Barotropic and Baroclinic Tides on Coastal Stratification and Mixing

    Science.gov (United States)

    Suanda, S. H.; Feddersen, F.; Kumar, N.

    2017-12-01

    The effects of barotropic and baroclinic tides on subtidal stratification and vertical mixing are examined with high-resolution, three-dimensional numerical simulations of the Central Californian coastal upwelling region. A base simulation with realistic atmospheric and regional-scale boundary forcing but no tides (NT) is compared to two simulations with the addition of predominantly barotropic local tides (LT) and with combined barotropic and remotely generated, baroclinic tides (WT) with ≈ 100 W m-1 onshore baroclinic energy flux. During a 10 day period of coastal upwelling when the domain volume-averaged temperature is similar in all three simulations, LT has little difference in subtidal temperature and stratification compared to NT. In contrast, the addition of remote baroclinic tides (WT) reduces the subtidal continental shelf stratification up to 50% relative to NT. Idealized simulations to isolate barotropic and baroclinic effects demonstrate that within a parameter space of typical U.S. West Coast continental shelf slopes, barotropic tidal currents, incident energy flux, and subtidal stratification, the dissipating baroclinic tide destroys stratification an order of magnitude faster than barotropic tides. In WT, the modeled vertical temperature diffusivity at the top (base) of the bottom (surface) boundary layer is increased up to 20 times relative to NT. Therefore, the width of the inner-shelf (region of surface and bottom boundary layer overlap) is increased approximately 4 times relative to NT. The change in stratification due to dissipating baroclinic tides is comparable to the magnitude of the observed seasonal cycle of stratification.

  18. PPOOLEX experiments on thermal stratification and mixing

    Energy Technology Data Exchange (ETDEWEB)

    Puustinen, M.; Laine, J.; Raesaenen, A. (Lappeenranta Univ. of Technology, Nuclear Safety Research Unit (Finland))

    2009-08-15

    The results of the thermal stratification experiments in 2008 with the PPOOLEX test facility are presented. PPOOLEX is a closed vessel divided into two compartments, dry well and wet well. Extra temperature measurements for capturing different aspects of the investigated phenomena were added before the experiments. The main purpose of the experiment series was to generate verification data for evaluating the capability of GOTHIC code to predict stratification and mixing phenomena. Altogether six experiments were carried out. Heat-up periods of several thousand seconds by steam injection into the dry well compartment and from there into the wet well water pool were recorded. The initial water bulk temperature was 20 deg. C. Cooling periods of several days were included in three experiments. A large difference between the pool bottom and top layer temperature was measured when small steam flow rates were used. With higher flow rates the mixing effect of steam discharge delayed the start of stratification until the pool bulk temperature exceeded 50 deg. C. The stratification process was also different in these two cases. With a small flow rate stratification was observed only above and just below the blowdown pipe outlet elevation. With a higher flow rate over a 30 deg. C temperature difference between the pool bottom and pipe outlet elevation was measured. Elevations above the pipe outlet indicated almost linear rise until the end of steam discharge. During the cooling periods the measurements of the bottom third of the pool first had an increasing trend although there was no heat input from outside. This was due to thermal diffusion downwards from the higher elevations. Heat-up in the gas space of the wet well was quite strong, first due to compression by pressure build-up and then by heat conduction from the hot dry well compartment via the intermediate floor and test vessel walls and by convection from the upper layers of the hot pool water. The gas space

  19. PPOOLEX experiments on thermal stratification and mixing

    International Nuclear Information System (INIS)

    Puustinen, M.; Laine, J.; Raesaenen, A.

    2009-08-01

    The results of the thermal stratification experiments in 2008 with the PPOOLEX test facility are presented. PPOOLEX is a closed vessel divided into two compartments, dry well and wet well. Extra temperature measurements for capturing different aspects of the investigated phenomena were added before the experiments. The main purpose of the experiment series was to generate verification data for evaluating the capability of GOTHIC code to predict stratification and mixing phenomena. Altogether six experiments were carried out. Heat-up periods of several thousand seconds by steam injection into the dry well compartment and from there into the wet well water pool were recorded. The initial water bulk temperature was 20 deg. C. Cooling periods of several days were included in three experiments. A large difference between the pool bottom and top layer temperature was measured when small steam flow rates were used. With higher flow rates the mixing effect of steam discharge delayed the start of stratification until the pool bulk temperature exceeded 50 deg. C. The stratification process was also different in these two cases. With a small flow rate stratification was observed only above and just below the blowdown pipe outlet elevation. With a higher flow rate over a 30 deg. C temperature difference between the pool bottom and pipe outlet elevation was measured. Elevations above the pipe outlet indicated almost linear rise until the end of steam discharge. During the cooling periods the measurements of the bottom third of the pool first had an increasing trend although there was no heat input from outside. This was due to thermal diffusion downwards from the higher elevations. Heat-up in the gas space of the wet well was quite strong, first due to compression by pressure build-up and then by heat conduction from the hot dry well compartment via the intermediate floor and test vessel walls and by convection from the upper layers of the hot pool water. The gas space

  20. Temperature stratification in a hot water tank with circulation pipe

    DEFF Research Database (Denmark)

    Andersen, Elsa

    1998-01-01

    The aim of the project is to investigate the change in temperature stratification due to the operation of a circulation pipe and, further, to put forward rules for the design of the pipe inlet so as not to disturb the temperature stratification in the hot water tank. A validated computer model based on t

  1. An analysis of system pressure and temperature distribution in self-pressurizer of SMART considering thermal stratification at intermediate cavity

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Yeon Moon; Lee, Doo Jeong; Yoon, Ju Hyun; Kim, Hwan Yeol [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-03-01

    Because the pressurizer is located inside the reactor vessel, heat transfer from the primary water would raise the temperatures of the fluids in the pressurizer to the hot-leg temperature if no cooling equipment were provided. Thus, a heat exchanger and thermal insulator are needed to minimize the heat transferred from the primary water and to remove heat from the pressurizer. The temperatures in the pressurizer cavities during normal operation are 70 deg C and 74 deg C for the intermediate and end cavity, respectively, values which take the solubility of nitrogen gas in water into account. Natural convection is the mechanism of heat balance in the pressurizer of SMART. In SMART, the heat exchanger in the pressurizer is placed in the lower part of the intermediate cavity, so heat in the upper part of the intermediate cavity cannot be removed adequately, which can cause thermal stratification. If thermal stratification occurs, it increases heat transfer to the nitrogen gas and the system pressure rises as a result. Thus, a proper evaluation of these effects on system pressure, and ways to mitigate thermal stratification, should be established. This report estimates the system pressure and the temperatures in the pressurizer cavities considering thermal stratification in the intermediate cavity. The evaluation of system pressure and cavity temperatures considered the size of the wet thermal insulator, the temperature of the upper plate of the reactor vessel, and parameters of the heat exchanger in the intermediate cavity such as the flow rate and temperature of the cooling water, heat transfer area, effective tube height, and location of the cooling tube. In addition to thermal stratification, thermal mixing of all the water in the intermediate cavity was also considered and compared in this report. (author). 6 refs., 60 figs., 2 tabs.

  2. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures, major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentration of the main elements in sample 75081 does not change with grain-size. Exceptions are Fe and Ti, which decrease slightly, and Al, which increases slightly with decreasing grain-size. These changes in the composition of the main elements suggest a decrease in ilmenite and an increase in anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes by less than a factor of 2. Samples 72501 and 72461 have not yet been analyzed for the main elements. (Auth.)

  3. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
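
    As a rough illustration of the kind of calculation involved (a minimal sketch of the classical Hsieh-type approximation, not the modified method proposed in this record; the parameter values are made up), the required sample size can be computed from the event probability at the covariate mean and the log odds ratio per standard deviation of the covariate:

        # Hsieh-type approximation for simple logistic regression with one
        # standard-normal covariate; note it needs p1, the event probability at
        # the covariate mean, which is exactly the inconvenience discussed above.
        from scipy.stats import norm

        def hsieh_n(p1, beta_star, alpha=0.05, power=0.80):
            """p1: event probability at the covariate mean;
            beta_star: log odds ratio per 1 SD of the covariate."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return (z_a + z_b) ** 2 / (p1 * (1 - p1) * beta_star ** 2)

        print(round(hsieh_n(p1=0.1, beta_star=0.5)))   # about 349 subjects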

  4. Experimental study on the thermal stratification in the branch of NPP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Nyung; Hwang, Seong Hong [Kyunghee Univ., Seoul (Korea, Republic of)

    2004-02-15

    As more experience is accumulated in the operation of existing nuclear power plants, long-term effects of thermal-hydraulic phenomena that were unaccounted for in the original designs have been observed. One such phenomenon is thermal stratification, which has caused through-wall cracks, thermal fatigue, unexpected piping displacements and pipe support damage. Thermal stratification is a phenomenon in which temperature layers form in a component or pipe due to the density difference between hot and cold water. In nuclear power plants, thermal stratification has been observed in the pressurizer surge line and in the piping of the feedwater system, Safety Injection System (SIS), residual heat removal system (or shutdown cooling system), and chemical and volume control system during design transients. A set of experiments has been performed to predict the temperature distribution in the branch piping of a nuclear power plant (Ulchin units 3 and 4) due to turbulent penetration, heat transfer through the valve disk, and valve leakage. A test facility scaled down to 1/10 was designed and constructed to simulate thermal stratification in the piping of the safety injection system and shutdown cooling system of Ulchin 3 and 4. The experimental results show that the turbulent penetration depth could extend to the end of the vertical pipe, and that thermal stratification due to heat transfer through the valve disk could extend to the end of the horizontal pipe behind the valve disk. Finally, thermal stratification could be affected by the location of the valve leakage.

  5. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M=3.62years) examined the conditions that enable young children to use the sample size principle (SSP) of induction-the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
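
    The following is a rough sketch (a simple delta-method approximation with made-up inputs, not the authors' formulas) of how a survey size can be chosen so that the "snapshot" incidence estimator, the early-stage count divided by the product of the uninfected count and the mean early-stage duration, reaches a target precision while allowing for uncertainty in that duration:

        # Delta-method approximation: CV^2 of the incidence estimate is roughly
        # 1/E[N_recent] + 1/E[N_uninfected] + CV(mu)^2, where mu is the mean
        # duration of the biomarker-defined early stage.
        import math

        def required_n(incidence, prevalence, mu_years, cv_mu, target_cv):
            """Smallest survey size whose approximate CV is below target_cv."""
            for n in range(100, 1_000_000, 100):
                n_neg = n * (1 - prevalence)               # expected uninfected
                n_recent = n_neg * incidence * mu_years    # expected early-stage cases
                cv = math.sqrt(1 / n_recent + 1 / n_neg + cv_mu ** 2)
                if cv <= target_cv:
                    return n
            return None                                    # target not attainable

        print(required_n(incidence=0.02, prevalence=0.15,
                         mu_years=0.5, cv_mu=0.10, target_cv=0.25))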

  7. [Sports medical aspects in cardiac risk stratification--heart rate variability and exercise capacity].

    Science.gov (United States)

    Banzer, W; Lucki, K; Bürklein, M; Rosenhagen, A; Vogt, L

    2006-12-01

    The present study investigates the association of the predicted CHD risk (PROCAM) with individual endurance capacity and heart rate variability (HRV) in a population-based sample of sedentary elderly. After stratification, cycle ergometries and short-term HRV analysis of time domain (RRMEAN, SDNN, RMSSD) and frequency domain parameters (LF, HF, TP, LF/HF) were conducted in 57 men (48.1+/-9.5 yrs.) with an overall PROCAM risk of at least 10% (50.8+/-5.6 points). Additionally the autonomic stress index (SI) was calculated. Nonparametric tests were used for statistical correlation analysis (Spearman rho) and group comparisons (Mann-Whitney). For endurance capacity [W/kg] a significant negative correlation was found (r=-0.469). The results confirm the relevance of HRV analysis in risk stratification and outline the interrelation of a decreased exercise capacity and autonomic function with a raised individual 10-year cardiac risk. As an independent parameter of the vegetative regulatory state, the stress index may contribute to an increased practical relevance of short-term HRV analysis.

  8. Similarity rules of thermal stratification phenomena for water and sodium

    International Nuclear Information System (INIS)

    Ohtsuka, M.; Ikeda, T.; Yamakawa, M.; Shibata, Y.; Moriya, S.; Ushijima, S.; Fujimoto, K.

    1988-01-01

    Similarity rules for thermal stratification phenomena were studied using sodium and water experiments with scaled cylindrical vessels. The vessel dimensions were identical in order to focus on the effect of differences in fluid properties upon the phenomena. Comparisons of test results between sodium and water elucidated similar and dissimilar characteristics of the thermal stratification phenomena which appeared in the scaled vessels. Results were as follows: (1) The dimensionless upward velocity of the thermal stratification interface was proportional to Ri^(-0.74) for water and sodium during the period when the buoyancy effect was dominant. (2) The dimensionless temperature transient rate at the outlet slit decreased with Ri for sodium and remained constant for water where Ri>0.2. The applicability of the scaled test results to an actual power plant was also studied by using multi-dimensional numerical analysis which was verified by the water and sodium experiments. Water experiments could simulate liquid metal fast breeder reactor flows more accurately than sodium experiments for the dimensionless temperature gradient at the thermal stratification interface and the dimensionless temperature transient rate at the intermediate heat exchanger inlet.

  9. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
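
    A minimal sketch of the logic (using one commonly quoted form of the two-period, cross-sectional CRXO design effect; the correlations, event rates and per-period cluster size below are illustrative assumptions, not values from the registry analysis):

        # Design effect DE = 1 + (m - 1)*icc_wp - m*icc_bp, applied to the usual
        # two-proportion sample size for an individually randomised trial.
        # m: patients per ICU per period; icc_wp / icc_bp: within- and
        # between-period intracluster correlations.
        import math
        from scipy.stats import norm

        def n_individual_total(p1, p2, alpha=0.05, power=0.9):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 2 * z**2 * (p1*(1 - p1) + p2*(1 - p2)) / (p1 - p2)**2

        def crxo_clusters(p1, p2, m, icc_wp, icc_bp):
            de = 1 + (m - 1) * icc_wp - m * icc_bp
            n_total = n_individual_total(p1, p2) * de
            return math.ceil(n_total / (2 * m))   # each ICU contributes two periods of m patients

        print(crxo_clusters(p1=0.10, p2=0.08, m=500, icc_wp=0.05, icc_bp=0.03))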

  10. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the

  11. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  12. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  13. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  14. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch models for detecting differential item functioning (DIF) as a function of sample size. The two DIF detection methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF defined on the basis of gender difference. These test conditions were replicated 4 tim...

  15. Experiments and numerical simulations of fluctuating thermal stratification in a branch pipe

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Akira; Murase, Michio; Sasaki, Toru [Inst. of Nuclear Safety System Inc., Mihama, Fukui (Japan); Takenaka, Nobuyuki; Hamatani, Daisuke [Kobe Univ. (Japan)

    2002-09-01

    Many pipes branch off from the main pipe in plants. When the main flow in the main pipe is hotter than a branch pipe that branches off downward, the hot water penetrates into the branch pipe with the cavity flow induced by the main flow and causes thermal stratification. If the interface of the stratification fluctuates in an occluded branch pipe, thermal fatigue may occur in the pipe wall. Experiments and numerical simulations were conducted to elucidate the mechanism of this fluctuating thermal stratification. Vortex structures were observed in the experiments with straight and bent branch pipes. When the main flow was heated and the thermal stratification interface was at the elbow, a "burst" phenomenon occurred at the interface in connection with large heat fluctuation. The effects of pipe shape on the penetration length were investigated in order to refine the simulation conditions. The vortex structures and the fluctuating thermal stratification at the elbow in the numerical simulations showed good agreement with the experiments. (author)

  16. The bio-optical properties of CDOM as descriptor of lake stratification.

    Science.gov (United States)

    Bracchini, Luca; Dattilo, Arduino Massimo; Hull, Vincent; Loiselle, Steven Arthur; Martini, Silvia; Rossi, Claudio; Santinelli, Chiara; Seritti, Alfredo

    2006-11-01

    Multivariate statistical techniques are used to demonstrate the fundamental role of CDOM optical properties in the description of water masses during the summer stratification of a deep lake. PC1 was linked with dissolved species and PC2 with suspended particles. The first principal component shows that the CDOM bio-optical properties give a better description of the stratification of Salto Lake than temperature does. The proposed multivariate approach can be used for the analysis of different stratified aquatic ecosystems in relation to the interaction between bio-optical properties and stratification of the water body.
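
    A schematic example of the kind of multivariate analysis described (principal component analysis of standardized depth-profile variables; the synthetic profile below is purely illustrative and not the Salto Lake data):

        # PCA of hypothetical lake-profile variables (temperature, CDOM absorption,
        # turbidity); the loadings show which variables dominate each component.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        depth = np.linspace(0, 40, 50)                        # m
        temp = 22 - 10 / (1 + np.exp(-(depth - 15)))           # thermocline-like profile
        cdom = 0.5 + 0.02 * depth + rng.normal(0, 0.02, 50)    # CDOM absorption proxy
        turb = 1.0 + rng.normal(0, 0.1, 50)                    # suspended-particle proxy

        X = np.column_stack([temp, cdom, turb])
        X = (X - X.mean(0)) / X.std(0)                         # standardise before PCA
        pca = PCA(n_components=2).fit(X)
        print(pca.explained_variance_ratio_, pca.components_)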

  17. Experimental study on the fluid stratification mechanism in the density lock

    International Nuclear Information System (INIS)

    Gu Haifeng; Yan Changqi; Sun Licheng

    2009-01-01

    Visualized experiments were conducted on the forming process of stratification between hot and cold fluids in three tubes with different diameters. The results show that the working fluids were divided into three layers from top to bottom: convective, interfacial, and constant temperature layers. The working fluid in the convective layer always retains the property of a high rate of temperature increase. The rate of temperature increase in the interfacial layer gradually decreased from top to bottom and was less than that in the convective layer. The working fluid temperature in the constant-temperature layer remained stable. Based on the experimental study, we built a simplified theoretical model and analyzed the stratification mechanism. The results indicate the following stratification mechanism: because of the existence of the transition points in the heat transfer modes, the differences in the rates of temperature increase appear. These differences result in the appearance of fluid stratification. In addition, research on the process of stratification under different conditions tells us that the structure of the density lock influences the position of the transition point. The density lock with a structure of variable cross-sectional grids can effectively control the position of the transition points of the heat transfer modes. (author)

  18. On the gauge orbit space stratification: a review

    International Nuclear Information System (INIS)

    Rudolph, G.; Schmidt, M.; Volobuev, I.P.

    2002-01-01

    First, we review the basic mathematical structures and results concerning the gauge orbit space stratification. This includes general properties of the gauge group action, fibre bundle structures induced by this action, basic properties of the stratification and the natural Riemannian structures of the strata. In the second part, we study the stratification for theories with gauge group SU(n) in spacetime dimension 4. We develop a general method for determining the orbit types and their partial ordering, based on the 1-1 correspondence between orbit types and holonomy-induced Howe subbundles of the underlying principal SU(n)-bundle. We show that the orbit types are classified by certain cohomology elements of spacetime satisfying two relations and that the partial ordering is characterized by a system of algebraic equations. Moreover, operations for generating direct successors and direct predecessors are formulated, which allow one to construct the set of orbit types, starting from the principal type. Finally, we discuss an application to nodal configurations in Yang-Mills-Chern-Simons theory. (author)

  19. Integrated collector storage solar water heater: Temperature stratification

    International Nuclear Information System (INIS)

    Garnier, C.; Currie, J.; Muneer, T.

    2009-01-01

    An analysis of the temperature stratification inside an Integrated Collector Storage Solar Water Heater (ICS-SWH) was carried out. The system takes the form of a rectangular-shaped box incorporating the solar collector and storage tank into a single unit and was optimised for simulation in Scottish weather conditions. A 3-month experimental study on the ICS-SWH was undertaken in order to provide empirical data for comparison with the computed results. Using a previously developed macro model, a number of improvements were made. The initial macro model was able to generate the corresponding bulk water temperature in the collector from given hourly incident solar radiation, ambient temperature and inlet water temperature, and was therefore able to predict ICS-SWH performance. The new model was able to compute the bulk water temperature variation in different SWH collectors for a given aspect ratio and the water temperature along the height of the collector (temperature stratification). The computed longitudinal temperature stratification results were found to be in close agreement with the experimental data.

  20. Rotating compressible fluids under strong stratification

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Lu, Y.; Novotný, A.

    2014-01-01

    Vol. 19, October (2014), pp. 11-18 ISSN 1468-1218 Keywords: rotating fluid * compressible Navier-Stokes * strong stratification Subject RIV: BA - General Mathematics Impact factor: 2.519, year: 2014 http://www.sciencedirect.com/science/article/pii/S1468121814000212#

  1. Combustion stratification study of partially premixed combustion using Fourier transform analysis of OH* chemiluminescence images

    KAUST Repository

    Izadi Najafabadi, Mohammad

    2017-11-06

    A relatively high level of stratification (qualitatively: lack of homogeneity) is one of the main advantages of partially premixed combustion over the homogeneous charge compression ignition concept. Stratification can smooth the heat release rate and improve the controllability of combustion. In order to compare stratification levels of different partially premixed combustion strategies or other combustion concepts, an objective and meaningful definition of “stratification level” is required. Such a definition is currently lacking; qualitative/quantitative definitions in the literature cannot properly distinguish various levels of stratification. The main purpose of this study is to objectively define combustion stratification (not to be confused with fuel stratification) based on high-speed OH* chemiluminescence imaging, which is assumed to provide spatial information regarding heat release. Stratification essentially being equivalent to spatial structure, we base our definition on two-dimensional Fourier transforms of photographs of OH* chemiluminescence. The OH* bandpass imaging was performed on a light-duty optical diesel engine. Four experimental points are evaluated, with injection timings in the homogeneous regime as well as in the stratified partially premixed combustion regime. Two-dimensional Fourier transforms translate these chemiluminescence images into a range of spatial frequencies. The frequency information is used to define combustion stratification, using a novel normalization procedure. The results indicate that this new definition, based on Fourier analysis of OH* bandpass images, overcomes the drawbacks of previous definitions used in the literature and is a promising method to compare the level of combustion stratification between different experiments.
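
    A minimal sketch of the underlying idea (not the authors' normalization procedure): summarize the two-dimensional Fourier spectrum of a chemiluminescence image with a single number, here a power-weighted mean spatial frequency, so that a homogeneous image scores low and a strongly structured one scores high. The image used below is a random placeholder.

        # Power-weighted mean radial spatial frequency of an image's 2-D spectrum.
        import numpy as np

        def spectral_centroid(image):
            img = image - image.mean()                     # drop the DC component
            power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
            ny, nx = image.shape
            ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                                 np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
            k = np.hypot(kx, ky)                           # radial frequency, cycles/pixel
            return (k * power).sum() / power.sum()

        img = np.random.rand(128, 128)                     # stand-in for an OH* bandpass image
        print(spectral_centroid(img))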

  2. Acoustic study of stratification region of melts in In-Se system

    International Nuclear Information System (INIS)

    Glazov, V.M.; Kim, S.G.; Nurova, K.B.

    1989-01-01

    The stratification (liquid immiscibility) region of melts in the In-Se system was studied in detail by measuring the ultrasound velocity. The curve limiting the region of stratification into two liquid solutions was plotted. It is shown that the curve has the character of a symmetric binodal.

  3. Methods to determine stratification efficiency of thermal energy storage processes–Review and theoretical comparison

    DEFF Research Database (Denmark)

    Haller, Michel; Cruickshank, Chynthia; Streicher, Wolfgang

    2009-01-01

    This paper reviews different methods that have been proposed to characterize thermal stratification in energy storages from a theoretical point of view. Specifically, this paper focuses on the methods that can be used to determine the ability of a storage to promote and maintain stratification...... during charging, storing and discharging, and represent this ability with a single numerical value in terms of a stratification efficiency for a given experiment or under given boundary conditions. Existing methods for calculating stratification efficiencies have been applied to hypothetical storage...

  4. Investigation of the Solvis stratification inlet pipe for solar tanks

    DEFF Research Database (Denmark)

    Andersen, Elsa; Jordan, Ulrike; Shah, Louise Jivan

    2004-01-01

    Since the 1960s the influence of thermal stratification in hot water tanks on the thermal performance of solar heating systems has been studied intensively. It was found that the thermal performance of a solar heating system increases with increasing thermal stratification in the hot...... water tank. The temperature of the storage water heated by the solar collector loop usually varies strongly during the day. In order to reach a good thermal stratification in the tank, different types of pipes, plates, diffusers and other devices have been investigated in the past (e.g. Loehrke, 1979...... conditions. Temperature measurements were carried out and an optical method called Particle Image Velocimetry (PIV) was used to visualize the flow around the flaps....

  5. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  6. Temperature Stratification in a Cryogenic Fuel Tank

    Data.gov (United States)

    National Aeronautics and Space Administration — A reduced dynamical model describing temperature stratification effects driven by natural convection in a liquid hydrogen cryogenic fuel tank has been developed. It...

  7. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
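
    The mechanism can be illustrated with a small Monte Carlo sketch (two arms only, so without the multi-arm Dunnett aspect of the paper; stage sizes, grid and simulation settings are arbitrary): the second-stage sample size is chosen after seeing the interim z-value so as to maximize the conditional rejection probability, while the final pooled z-test naively keeps the fixed-design critical value.

        # Simulate the worst-case "naive" two-stage procedure under the null.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        n1, n2_grid, crit = 100, np.array([50, 100, 200, 400, 800]), norm.ppf(0.975)
        reject, sims = 0, 20000
        for _ in range(sims):
            z1 = rng.normal()                                  # stage-1 z under H0
            # conditional rejection probability of the naive pooled test for each n2
            thresh = (crit * np.sqrt(n1 + n2_grid) - np.sqrt(n1) * z1) / np.sqrt(n2_grid)
            n2 = n2_grid[np.argmax(norm.sf(thresh))]           # worst-case adaptation
            z2 = rng.normal()                                  # stage-2 z under H0
            z_naive = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
            reject += z_naive > crit
        print(reject / sims)    # noticeably above the nominal one-sided 0.025 level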

  8. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram for varying absolute precision, known prevalence of disease, and the 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing setups and applies only when both the diagnostic test and the gold standard results are dichotomous.
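
    The formula behind such nomograms is the familiar binomial precision calculation adjusted for disease prevalence (a hedged sketch of the standard Buderer-type expressions; the numerical example is arbitrary):

        # Subjects needed so that anticipated sensitivity (or specificity) is
        # estimated within absolute precision d at a given confidence level.
        import math
        from scipy.stats import norm

        def n_for_sensitivity(se, d, prevalence, conf=0.95):
            z = norm.ppf(1 - (1 - conf) / 2)
            return math.ceil(z**2 * se * (1 - se) / (d**2 * prevalence))

        def n_for_specificity(sp, d, prevalence, conf=0.95):
            z = norm.ppf(1 - (1 - conf) / 2)
            return math.ceil(z**2 * sp * (1 - sp) / (d**2 * (1 - prevalence)))

        print(n_for_sensitivity(se=0.90, d=0.05, prevalence=0.20))   # 692 subjects
        print(n_for_specificity(sp=0.85, d=0.05, prevalence=0.20))   # 245 subjects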

  9. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  10. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  11. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples

  12. On Stratification in Changing Higher Education: The "Analysis of Status" Revisited

    Science.gov (United States)

    Bloch, Roland; Mitterle, Alexander

    2017-01-01

    This article seeks to shed light on current dynamics of stratification in changing higher education and proposes an analytical perspective to account for these dynamics based on Martin Trow's work on "the analysis of status." In research on higher education, the term "stratification" is generally understood as a metaphor that…

  13. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread to about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size for monitoring the S. noctilio population and the efficiency of its natural enemies; the system was found to be perfectly adequate.

  14. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactor (Neutron Activation Analysis (NAA) Section), size-fractionated particulate matter collection for NAA was started in 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural", respectively. At each site, two size fractions, namely PM(2-10) and PM(2) particles (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM(10) (the sum of the PM(2-10) and PM(2) samples) during the common sampling period of August to November 2002 were 0.031 mg/m3 in Tokyo and 0.022 mg/m3 in Sakata. (author)

  15. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that, precision continued to increase, but the gain was smaller for the same additional number of GPs. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
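
    A rough variance-components sketch of the trade-off described (a generic approximation with invented parameter values, not the authors' equations): the one-tailed 95% confidence half-width for mean weekly hours shrinks with more GPs and, through the measurement-noise term, also with more prompts per GP.

        # Half-width of a one-tailed 95% CI for the mean of n GPs, each measured
        # k times, with between-GP spread sd_b and within-GP measurement noise sd_w.
        import numpy as np
        from scipy.stats import norm

        def half_width(n_gps, k_measurements, sd_b=8.0, sd_w=12.0, conf=0.95):
            se = np.sqrt(sd_b**2 / n_gps + sd_w**2 / (n_gps * k_measurements))
            return norm.ppf(conf) * se     # one-tailed, as in the abstract

        for n in (50, 100, 300):
            print(n, [round(half_width(n, k), 2) for k in (5, 20, 56)])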

  16. Propagation of 3D internal gravity wave beams in a slowly varying stratification

    Science.gov (United States)

    Fan, Boyu; Akylas, T. R.

    2017-11-01

    The time-mean flows induced by internal gravity wave beams (IGWB) with 3D variations have been shown to have dramatic implications for long-term IGWB dynamics. While uniform stratifications are convenient both theoretically and in the laboratory, stratifications in the ocean can vary by more than an order of magnitude over the ocean depth. Here, in view of this fact, we study the propagation of a 3D IGWB in a slowly varying stratification. We assume that the stratification varies slowly relative to the local variations in the wave profile. In the 2D case, the IGWB bends in response to the changing stratification, but nonlinear effects are minor even in the finite amplitude regime. For a 3D IGWB, in addition to bending, we find that nonlinearity results in the transfer of energy from waves to a large-scale time-mean flow associated with the mean potential vorticity, similar to IGWB behavior in a uniform stratification. In a weakly nonlinear setting, we derive coupled evolution equations that govern this process. We also use these equations to determine the stability properties of 2D IGWB to 3D perturbations. These findings indicate that 3D effects may be relevant and possibly fundamental to IGWB dynamics in nature. Supported by NSF Grant DMS-1512925.

  17. A comparison of stratification effectiveness between the National Land Cover Data set and photointerpretation in western Oregon

    Science.gov (United States)

    Paul Dunham; Dale Weyermann; Dale Azuma

    2002-01-01

    Stratifications developed from National Land Cover Data (NLCD) and from photointerpretation (PI) were tested for effectiveness in reducing sampling error associated with estimates of timberland area and volume from FIA plots in western Oregon. Strata were created from NLCD through the aggregation of cover classes and the creation of 'edge' strata by...

  18. Thermal stratification in storage tanks of integrated collector storage solar water heaters

    International Nuclear Information System (INIS)

    Oshchepkov, M.Y.; Frid, S.E.

    2015-01-01

    To determine the influence of the shape of the tank, the installation angle, and the magnitude of the absorbed heat flux on thermal stratification in integrated collector-storage solar water heaters, numerical simulations of thermal convection in tanks of different shapes and the same volume were carried out. Idealized two-dimensional models were studied; self-similar stratification profiles were obtained at constant heat flux. The shape of the tank, the pattern of the heat flux dynamics, and adiabatic mixing were shown to have a significant influence on the circulation rate and the degree of stratification. (authors)

  19. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach that considers the treatment effect to be a random variable having some distribution may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
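
    As a generic illustration of interim sample size reestimation with a Bayesian predictive calculation (a single-arm binary-endpoint sketch with arbitrary thresholds; it reproduces neither Whitehead's conclusiveness criterion nor the two-sample design of this paper):

        # Beta-binomial predictive probability that the completed trial will meet a
        # posterior success criterion, evaluated for candidate total sample sizes.
        from scipy.stats import beta, betabinom

        def predictive_success(x1, n1, n_total, p0=0.3, post_thresh=0.95, a=1, b=1):
            n2 = n_total - n1
            a_post, b_post = a + x1, b + n1 - x1                   # stage-1 posterior
            prob = 0.0
            for x2 in range(n2 + 1):                               # future successes
                a_fin, b_fin = a_post + x2, b_post + n2 - x2
                if beta.sf(p0, a_fin, b_fin) > post_thresh:        # final posterior criterion
                    prob += betabinom.pmf(x2, n2, a_post, b_post)  # predictive weight
            return prob

        for n_total in (40, 60, 80, 100):
            print(n_total, round(predictive_success(x1=10, n1=25, n_total=n_total), 3))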

  20. An Improved Model of Cryogenic Propellant Stratification in a Rotating, Reduced Gravity Environment

    Science.gov (United States)

    Oliveira, Justin; Kirk, Daniel R.; Schallhorn, Paul A.; Piquero, Jorge L.; Campbell, Mike; Chase, Sukhdeep

    2007-01-01

    This paper builds on a series of analytical literature models used to predict thermal stratification within rocket propellant tanks. The primary contribution to the literature is to add the effect of tank rotation and to demonstrate the influence of rotation on stratification times and temperatures. This work also looks at levels of thermal stratification for generic propellant tanks (cylindrical shapes) over a parametric range of upper-stage coast times, heating levels, rotation rates, and gravity levels.

  1. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate and precise. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators who use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
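
    For reference, the minimum convex polygon estimate itself is simply the area of the convex hull of the relocation points; a small sketch with made-up coordinates (not the Kenai Peninsula data):

        # MCP home range area as the convex hull of GPS fixes.
        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(42)
        locs = rng.normal(loc=[500_000, 6_700_000], scale=5_000, size=(300, 2))  # UTM-style fixes, m

        hull = ConvexHull(locs)
        area_km2 = hull.volume / 1e6     # for 2-D points, .volume is the polygon area
        print(f"MCP home range: {area_km2:.0f} km2 from {len(locs)} locations")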

  2. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging, with constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump which, acting as a vacuum pump, generates negative pressure and creates a steady flow by drawing air from the receiving conical flask, transferring plankton from the sample container toward the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties in applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, since the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e. bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with a syringe pump and Field of View (FOV) flowcells which can image all particles passing through the flow field, we note that these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis versus conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM image system after ground truthing.

  3. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to research that involves only one experimental factor with k levels (k ≥ 3), with no arrangement of other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and for qualitative data with a binary response variable under the design of one factor with k levels (k ≥ 3).
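
    For the quantitative-data case, this kind of calculation can be reproduced with a standard noncentral-F power routine (a sketch with an assumed medium effect size, not the worked examples of this series):

        # Total sample size for a one-way ANOVA with k groups, via statsmodels.
        from statsmodels.stats.power import FTestAnovaPower

        n_total = FTestAnovaPower().solve_power(effect_size=0.25,   # Cohen's f (medium)
                                                k_groups=4,
                                                alpha=0.05,
                                                power=0.80)
        print(round(n_total))    # total observations across the 4 groups (about 180)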

  4. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  5. Characteristics of multiple auroral inverted-V structures and the problem of magnetospheric plasma stratification

    International Nuclear Information System (INIS)

    Antonova, E.E.; Stepanova, M.V.; Teltzov, M.V.; Tverskoy, B.A.

    1993-01-01

    The concept of hot stratification of magnetospheric plasma is presented. The stratification mechanism is based on the assumption that in the center of plasma sheet the pressure is approximately isotropic and under steady state conditions the gradient and curvature drift currents play the principal role. The number of formed structures is determined by the parameter of stratification. 7 figs., 2 tabs

  6. Thermal stratification built up in hot water tank with different inlet stratifiers

    DEFF Research Database (Denmark)

    Dragsted, Janne; Furbo, Simon; Dannemand, Mark

    2017-01-01

    Thermal stratification in a water storage tank can strongly increase the thermal performance of solar heating systems. Thermal stratification can be built up in a storage tank during charge, if the heated water enters through an inlet stratifier. Experiments with a test tank have been carried out...... in order to elucidate how well thermal stratification is established in the tank with differently designed inlet stratifiers under different controlled laboratory conditions. The investigated inlet stratifiers are from Solvis GmbH & Co KG and EyeCular Technologies ApS. The inlet stratifier from Solvis Gmb...... for Solvis GmbH & Co KG had a better performance at 4 l/min. In the intermediate charge test the stratifier from EyeCular Technologies ApS had a better performance in terms of maintaining the thermal stratification in the storage tank while charging with a relative low temperature. [All rights reserved...

  7. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using in situ particle sampling technique from H 2 /O 2 /Tetraethylorthosilicate (TEOS) diffusion flame with carefully devised sampling probe. The particle morphology and the size comparisons are made between the particles sampled by the local thermophoretic method from the inside of the flame and by the electrostatic collector sampling method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image processed data of these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurement. TEM image analysis of two sampling methods showed a good agreement with SMPS measurement. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation process and sintering process in the flame. As the flame temperature increases, the effect of coalescence or sintering becomes an important particle growth mechanism which reduces the coagulation process. However, if the flame temperature is not high enough to sinter the aggregated particles then the coagulation process is a dominant particle growth mechanism. In a certain flame condition a secondary particle formation is observed which results in a bimodal particle size distribution

  8. Glacial ocean circulation and stratification explained by reduced atmospheric temperature.

    Science.gov (United States)

    Jansen, Malte F

    2017-01-03

    Earth's climate has undergone dramatic shifts between glacial and interglacial time periods, with high-latitude temperature changes on the order of 5-10 °C. These climatic shifts have been associated with major rearrangements in the deep ocean circulation and stratification, which have likely played an important role in the observed atmospheric carbon dioxide swings by affecting the partitioning of carbon between the atmosphere and the ocean. The mechanisms by which the deep ocean circulation changed, however, are still unclear and represent a major challenge to our understanding of glacial climates. This study shows that various inferred changes in the deep ocean circulation and stratification between glacial and interglacial climates can be interpreted as a direct consequence of atmospheric temperature differences. Colder atmospheric temperatures lead to increased sea ice cover and formation rate around Antarctica. The associated enhanced brine rejection leads to a strongly increased deep ocean stratification, consistent with high abyssal salinities inferred for the last glacial maximum. The increased stratification goes together with a weakening and shoaling of the interhemispheric overturning circulation, again consistent with proxy evidence for the last glacial. The shallower interhemispheric overturning circulation makes room for slowly moving water of Antarctic origin, which explains the observed middepth radiocarbon age maximum and may play an important role in ocean carbon storage.

  9. Efficiency and precision for estimating timber and non-timber attributes using Landsat-based stratification methods in two-phase sampling in northwest California

    Science.gov (United States)

    Antti T. Kaartinen; Jeremy S. Fried; Paul A. Dunham

    2002-01-01

    Three Landsat TM-based GIS layers were evaluated as alternatives to conventional, photointerpretation-based stratification of FIA field plots. Estimates for timberland area, timber volume, and volume of down wood were calculated for California's North Coast Survey Unit of 2.5 million hectares. The estimates were compared on the basis of standard errors,...

  10. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Land use/land cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is usually defined by a binomial approximation without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. In case of divergence between the estimated and the a priori accuracy, the sampling error will deviate from the expected error. Determining the size with a pilot sample (the theoretically correct procedure) is justified when no estimate of accuracy is available for the work area, depending on the intended use of the remote sensing product.
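
    The binomial approximation referred to above amounts to the following calculation (a sketch; the expected accuracy and precision are example values):

        # Reference (validation) sample size to estimate map accuracy p within
        # absolute error d at a given confidence level, fixed a priori.
        import math
        from scipy.stats import norm

        def reference_sample_size(expected_accuracy, d, conf=0.95):
            z = norm.ppf(1 - (1 - conf) / 2)
            return math.ceil(z**2 * expected_accuracy * (1 - expected_accuracy) / d**2)

        print(reference_sample_size(expected_accuracy=0.85, d=0.05))   # about 196 samples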

  11. Economic Stratification Differentiates Home Gardens in the Maya Village of Pomuch, Mexico

    NARCIS (Netherlands)

    Poot-Pool, W.S.; Wal, van der J.C.; Flores-Guido, S.; Pat-Fernández, J.M.; Esparza-Olguín, L.

    2012-01-01

    Economic Stratification Differentiates Home Gardens in the Maya Village of Pomuch, Mexico. In this paper, we analyze if economic stratification of peasant families in a Maya village in the Yucatán Peninsula of Mexico influences species composition and structure of home gardens. Our general

  12. Theoretical and experimental studies of thermal stratification in hot and cold pools of PFBR

    International Nuclear Information System (INIS)

    Velusamy, K.; Titus, G.; Rajakumar, A.; Ravichandran, G.; Padmakumar, G.; Vaidyanathan, G.; Kale, R.D.; Chetal, S.C.; Bhoje, S.B.

    1994-01-01

    Results of experimental studies carried out in two water models, of scale 1/24 and 1/15, to assess the free-level fluctuation in the hot pool of PFBR are presented. The results, when extrapolated to the prototype, give a ripple height of 50 mm. The results of thermal stratification studies carried out in the 1/24 scale model using hot and cold water indicate that the interface velocity can be correlated with the Richardson number. The paper also gives details of the computer codes developed for the estimation of flow and temperature fields in the pools. (author)
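
    Because the record correlates interface behaviour with the Richardson number, a small worked example of a bulk Richardson number may be useful. The definition below is the generic g·(Δρ/ρ)·L/U² form, and the density difference, depth and velocity are invented illustration values rather than PFBR data.

    ```python
    def bulk_richardson(g: float, delta_rho: float, rho_ref: float,
                        depth: float, velocity: float) -> float:
        """Bulk Richardson number Ri = g * (delta_rho / rho_ref) * L / U^2.
        Large Ri -> buoyancy dominates and a stable stratified interface persists."""
        return g * (delta_rho / rho_ref) * depth / velocity**2

    # Illustrative hot-pool values (not from the paper): 2% density difference,
    # 1 m characteristic depth, 0.1 m/s interface velocity.
    ri = bulk_richardson(g=9.81, delta_rho=20.0, rho_ref=1000.0, depth=1.0, velocity=0.1)
    print(f"Ri = {ri:.1f}  ->  {'stratified' if ri > 1 else 'well mixed'}")
    ```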

  13. Analysis of ABC (D) stratification for screening patients with gastric cancer.

    Science.gov (United States)

    Kudo, Tomohiro; Kakizaki, Satoru; Sohara, Naondo; Onozato, Yasuhiro; Okamura, Shinichi; Inui, Yoshikatsu; Mori, Masatomo

    2011-11-21

    To evaluate the value of ABC (D) stratification [combination of serum pepsinogen and Helicobacter pylori (H. pylori) antibody] for screening patients with gastric cancer, ninety-five consecutive patients with gastric cancer were enrolled into the study. The serum pepsinogen I (PG I)/pepsinogen II (PG II) and H. pylori antibody levels were measured, and patients were classified into five groups of the ABC (D) stratification according to their serological status. Endoscopic findings of atrophic gastritis and histological differentiation were also analyzed in relation to the ABC (D) stratification. The mean patient age was 67.9 ± 8.9 years. Three patients (3.2%) were classified into group A, 7 patients (7.4%) into group A', 27 patients (28.4%) into group B, 54 patients (56.8%) into group C, and 4 patients (4.2%) into group D. Only three cases remained in group A once patients taking proton pump inhibitors and those who had undergone eradication therapy for H. pylori (group A') were excluded; these three cases had mucosal atrophy in the grey zone according to the diagnostic manual of the ABC (D) stratification. Histologically, the mean age of the patients with well differentiated adenocarcinoma was significantly higher than that of the patients with poorly differentiated adenocarcinoma. ABC (D) stratification is a good method for screening patients with gastric cancers; endoscopy is needed for grey-zone cases to check the extent of mucosal atrophy.
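
    The ABC (D) grouping itself is a simple serological decision rule, sketched below. The pepsinogen cutoffs (PG I ≤ 70 ng/mL and PG I/PG II ≤ 3) are commonly cited values rather than necessarily those of this study, and the handling of group A' (prior eradication therapy or proton pump inhibitor use) is omitted for brevity.

    ```python
    def abcd_group(hp_antibody_positive: bool, pg1: float, pg2: float,
                   pg1_cutoff: float = 70.0, ratio_cutoff: float = 3.0) -> str:
        """Assign the ABC(D) risk group from H. pylori serology and serum pepsinogen.
        Pepsinogen-positive (atrophic) means PG I <= cutoff AND PG I/PG II <= ratio cutoff.
        Cutoffs here are commonly used values, not necessarily those of the study."""
        pg_positive = (pg1 <= pg1_cutoff) and (pg1 / pg2 <= ratio_cutoff)
        if not hp_antibody_positive and not pg_positive:
            return "A"   # lowest risk
        if hp_antibody_positive and not pg_positive:
            return "B"
        if hp_antibody_positive and pg_positive:
            return "C"
        return "D"       # H. pylori-negative but atrophic: highest risk

    print(abcd_group(hp_antibody_positive=True, pg1=45.0, pg2=20.0))  # -> "C"
    ```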

  14. Thermal stratification in sodium. Proceedings of an International Atomic Energy Agency specialists' meeting

    Energy Technology Data Exchange (ETDEWEB)

    Costa, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Grenoble, Grenoble (France)]

    1983-07-01

    The purpose of the meeting was to discuss and exchange views on thermal stratification existing in sodium of the main vessel, secondary circuits and large components of LMFBRs under various operating conditions. The meeting was divided into four sessions: national position presentations; fundamental studies on theory and application of stratification problems, numerical and experimental investigations applied to stratified flow phenomena; computer codes for evaluation of thermal stratification; applied studies covering the computer codes and experimental studies for prediction of temperature velocity field.

  15. Thermal stratification in sodium. Proceedings of an International Atomic Energy Agency specialists' meeting

    International Nuclear Information System (INIS)

    Costa, J.

    1983-07-01

    The purpose of the meeting was to discuss and exchange views on thermal stratification existing in sodium of the main vessel, secondary circuits and large components of LMFBRs under various operating conditions. The meeting was divided into four sessions: national position presentations; fundamental studies on theory and application of stratification problems, numerical and experimental investigations applied to stratified flow phenomena; computer codes for evaluation of thermal stratification; applied studies covering the computer codes and experimental studies for prediction of temperature velocity field

  16. Stratification of a cityscape using census and land use variables for inventory of building materials

    Science.gov (United States)

    Rosenfield, G.H.; Fitzpatrick-Lins, K.; Johnson, T.L.

    1987-01-01

    A cityscape (or any landscape) can be stratified into environmental units using multiple variables of information. For the purposes of sampling building materials, census and land use variables were used to identify similar strata. In the Metropolitan Statistical Area of a cityscape, the census tract is the smallest unit for which census data are summarized and digitized boundaries are available. For purposes of this analysis, census data on total population, total number of housing units, and number of single-unit dwellings were aggregated into variables of persons per square kilometer and proportion of housing units in single-unit dwellings. The level 2 categories of the U.S. Geological Survey's land use and land cover data base were aggregated into variables of proportion of residential land with buildings, proportion of nonresidential land with buildings, and proportion of open land. The cityscape was stratified, from these variables, into environmental strata of Urban Central Business District, Urban Livelihood Industrial Commercial, Urban Multi-Family Residential, Urban Single Family Residential, Non-Urban Suburbanizing, and Non-Urban Rural. The New England region was chosen as a region with commonality of building materials, and a procedure developed for trial classification of census tracts into one of the strata. Final stratification was performed by discriminant analysis using the trial classification and prior probabilities as weights. The procedure was applied to several cities, and the results analyzed by correlation analysis from a field sample of building materials. The methodology developed for stratification of a cityscape using multiple variables has application to many other types of environmental studies, including forest inventory, hydrologic unit management, waste disposal, transportation studies, and other urban studies. Multivariate analysis techniques have recently been used for urban stratification in England.
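
    The final step of the procedure, discriminant analysis of census tracts with prior probabilities used as weights, can be sketched as follows. The tract variables, class labels and priors below are synthetic placeholders, not the New England data, and scikit-learn's linear discriminant analysis stands in for whatever implementation the authors used.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Hypothetical tract-level variables: persons per km^2, proportion of housing
    # in single-unit dwellings, proportion of residential land with buildings.
    classes = {
        "Urban":    ([8000.0, 0.15, 0.65], [800.0, 0.05, 0.05]),
        "Suburban": ([1500.0, 0.75, 0.45], [300.0, 0.05, 0.05]),
        "Rural":    ([80.0,   0.90, 0.10], [30.0,  0.05, 0.05]),
    }
    X = np.vstack([rng.normal(loc, scale, size=(40, 3)) for loc, scale in classes.values()])
    y = np.repeat(list(classes.keys()), 40)

    # Prior probabilities weight the discriminant rule, mirroring the final
    # stratification step above (priors follow sklearn's sorted class order).
    lda = LinearDiscriminantAnalysis(priors=[0.2, 0.3, 0.5]).fit(X, y)
    print(lda.predict([[2000.0, 0.70, 0.50]]))   # classify a new census tract
    ```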

  17. Structural evaluation method study and procedure development for pressurizer surge line subjected to thermal stratification phenomenon

    International Nuclear Information System (INIS)

    Zhang Yixiong; Yu Xiaofei; Ai Honglei

    2014-01-01

    The thermal stratification phenomenon in a pressurizer surge line can pose a potential threat to plant safety. Based on the mechanism of thermal stratification occurrence, the Froude (Fr) number is used to judge whether stratification occurs, and the method of calculating the heat transfer coefficient is also investigated. Theoretically, the three-dimensional thermal stress induced by thermal stratification is decoupled into a one-dimensional global stress and a two-dimensional local stress, so the complex three-dimensional problem is simplified into a combination of one- and two-dimensional stress computations. In compliance with the RCC-M criteria, the complete structural integrity evaluation is accomplished by combining the stress produced by thermal stratification with the stresses produced by the other loadings. To support the above combined analysis method, the SYSTUS and ROCOCO codes are developed. Using the aforesaid evaluation method and the corresponding analysis programs, the surge-line thermal stratification of the Qinshan Phase II Extension project is investigated in this paper. The results show that the structural integrity of the pressurizer surge line affected by thermal stratification still satisfies the RCC-M criteria. (authors)

  18. Autonomous Observations of the Upper Ocean Stratification and Velocity Field about the Seasonality Retreating Marginal Ice Zone

    Science.gov (United States)

    2016-12-30

    fluxes of heat, salt, and momentum. Hourly GPS fixes tracked the motion of the supporting ice floes and T/C recorders sampled the ocean waters just... sampled in a range of ice conditions from full ice cover to nearly open water and observed a variety of stratification and ocean velocity signals (e.g...

  19. Simulation of atmosphere stratification in the HDR test facility with the CONTAIN code

    International Nuclear Information System (INIS)

    Skerlavaj, A.; Mavko, B.; Kljenak, I.

    2001-01-01

    The test E11.2 'Hydrogen distribution in loop flow geometry', which was performed in the Heissdampf Reaktor containment test facility in Germany, was simulated with the CONTAIN computer code. The predicted pressure history and thermal stratification are in relatively good agreement with the measurements. The compositional stratification within the containment was qualitatively well predicted, although the degree of the stratification in the dome area was slightly underestimated. The analysis of simulation results enabled a better understanding of the physical phenomena during the test. (author)

  20. Social Stratification and Cooperative Behavior in Spatial Prisoners' Dilemma Games.

    Directory of Open Access Journals (Sweden)

    Peng Lu

    Promoting cooperation has been a long-standing pursuit, and this study aims to promote cooperation by combining social stratification with the spatial prisoners' dilemma game. Previous work assumed that agents share an identical payoff matrix, but stratification, or diversity, exists and exerts influence in real societies. Thus, two additional classes, elites and scoundrels, are derived from and coexist with the existing class, commons; the three classes have different payoff matrices. We construct a model in which agents play the prisoners' dilemma game with their neighbors. The results indicate that stratification and temptation jointly influence cooperation. Temptation permanently reduces cooperation; elites play a positive role in promoting cooperation while scoundrels undermine it. As temptation grows larger, elites play an increasingly positive and critical role while scoundrels' negative effect becomes weaker, and this is most evident when temptation exceeds its threshold.
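
    A minimal sketch of such a stratified spatial prisoners' dilemma is shown below. The lattice size, class proportions and class-specific temptation values are invented for illustration, and the imitate-the-best-neighbour update is one common choice; the authors' exact payoff matrices and update rule may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L = 50                                        # lattice size (L x L, periodic)
    strategy = rng.integers(0, 2, size=(L, L))    # 1 = cooperate, 0 = defect
    # Three classes with different payoff matrices via a class-specific temptation b:
    # elites face lower temptation, scoundrels higher (illustrative values only).
    b_by_class = np.array([1.3, 1.1, 1.6])        # commons, elites, scoundrels
    agent_class = rng.choice(3, size=(L, L), p=[0.8, 0.1, 0.1])

    def payoffs(strategy):
        """Accumulated payoff of each agent against its four neighbours in a weak
        prisoners' dilemma: R = 1, P = S = 0, T = b(class of the row player)."""
        total = np.zeros((L, L))
        b = b_by_class[agent_class]
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = np.roll(strategy, shift, axis=(0, 1))
            total += np.where(strategy == 1, nbr * 1.0, nbr * b)
        return total

    for step in range(200):
        score = payoffs(strategy)
        # Each agent imitates the strategy of its best-scoring neighbour (incl. itself).
        best_score, best_strat = score.copy(), strategy.copy()
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n_score = np.roll(score, shift, axis=(0, 1))
            n_strat = np.roll(strategy, shift, axis=(0, 1))
            better = n_score > best_score
            best_score = np.where(better, n_score, best_score)
            best_strat = np.where(better, n_strat, best_strat)
        strategy = best_strat

    print("final cooperation level:", strategy.mean())
    ```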

  1. Representing Reservoir Stratification in Land Surface and Earth System Models

    Science.gov (United States)

    Yigzaw, W.; Li, H. Y.; Leung, L. R.; Hejazi, M. I.; Voisin, N.; Payn, R. A.; Demissie, Y.

    2017-12-01

    A one-dimensional reservoir stratification model has been developed as part of the Model for Scale Adaptive River Transport (MOSART), which is the river transport model used in the Accelerated Climate Modeling for Energy (ACME) and Community Earth System Model (CESM). Reservoirs play an important role in modulating the dynamic water, energy and biogeochemical cycles in the riverine system through nutrient sequestration and stratification. However, most earth system models include lake models that assume a simplified geometry featuring a constant depth and a constant surface area. As reservoir geometry has important effects on thermal stratification, we developed a new algorithm for deriving generic, stratified area-elevation-storage relationships that are applicable at regional and global scales using data from the Global Reservoir and Dam database (GRanD). This new reservoir geometry dataset is then used to support the development of a reservoir stratification module within MOSART. The mixing of layers (energy and mass) in the reservoir is driven by eddy diffusion, vertical advection, and reservoir inflow and outflow. Upstream inflow into a reservoir is treated as an additional source/sink of energy, while downstream outflow represents a sink. Hourly atmospheric forcing from the North American Land Data Assimilation System (NLDAS) Phase II and simulated daily runoff from the ACME land component are used as inputs for the model over the contiguous United States for simulations from 2001 to 2010. The model is validated using selected observed temperature profile data in a number of reservoirs that are subject to various levels of regulation. The reservoir stratification module completes the representation of riverine mass and heat transfer in earth system models, which is a major step towards quantitative understanding of human influences on the terrestrial hydrological, ecological and biogeochemical cycles.

  2. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of chemical content of the initial sample, using a whitebark pine population as the example. The statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the underlying population with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002 and III-43007]
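
    The underlying calculation is the familiar pilot-based sample size for estimating a mean, and a sketch is given below. The standard deviation and allowed error are invented illustration values, not the terpene data, and the iterative t-based formula is a textbook approximation rather than the authors' exact procedure.

    ```python
    from math import ceil
    from scipy import stats

    def required_sample_size(pilot_sd: float, allowed_error: float,
                             confidence: float = 0.95, n_pilot: int = 10) -> int:
        """Smallest n such that the mean is estimated within +/- allowed_error at the
        given confidence, using the pilot standard deviation and a t-quantile."""
        n = max(n_pilot, 2)
        for _ in range(100):                      # iterate n = f(n) to a fixed point
            t = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
            n_needed = max(2, ceil((t * pilot_sd / allowed_error) ** 2))
            if n_needed == n:
                break
            n = n_needed
        return n

    # Illustrative pilot values for one terpene (% of total): sd = 4.2, error bound 2.0.
    print(required_sample_size(pilot_sd=4.2, allowed_error=2.0))
    ```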

  3. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscope (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form first need to be broken down to nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information and save considerable time and effort in the selection of a suitable substrate on which particles of different properties can be attracted and kept without further aggregation. This simple, low-cost methodology can be generally applied to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  4. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small, we simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test using generalized estimating equations (GEE), and the F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rate and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (average of two eyes or a random eye) achieves better control of the type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
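
    The consequence of ignoring inter-eye correlation in design 2 is easy to reproduce by simulation. The sketch below compares a naive two-sample t-test on all eyes with the recommended t-test on per-subject eye averages under the null hypothesis; the sample size, correlation and number of replicates are arbitrary illustration settings, not the paper's simulation design.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def simulate_group(n_subjects, mean, sd=1.0, rho=0.5):
        """Two correlated eye measurements per subject (shared subject effect)."""
        subject = rng.normal(0.0, np.sqrt(rho) * sd, size=(n_subjects, 1))
        return mean + subject + rng.normal(0.0, np.sqrt(1 - rho) * sd, size=(n_subjects, 2))

    reject_naive = reject_avg = 0
    for _ in range(2000):
        g1, g2 = simulate_group(10, 0.0), simulate_group(10, 0.0)   # null: no difference
        # Naive: treat all eyes as independent observations (ignores correlation).
        p_naive = stats.ttest_ind(g1.ravel(), g2.ravel()).pvalue
        # Recommended for this design: average the two eyes of each subject first.
        p_avg = stats.ttest_ind(g1.mean(axis=1), g2.mean(axis=1)).pvalue
        reject_naive += p_naive < 0.05
        reject_avg += p_avg < 0.05

    print("type I error, all eyes as independent:", reject_naive / 2000)
    print("type I error, average of two eyes    :", reject_avg / 2000)
    ```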

  5. Effects of Mixture Stratification on Combustion and Emissions of Boosted Controlled Auto-Ignition Engines

    Directory of Open Access Journals (Sweden)

    Jacek Hunicz

    2017-12-01

    The stratification of in-cylinder mixtures appears to be an effective method for managing the combustion process in controlled auto-ignition (CAI) engines. Stratification can be achieved and controlled using various injection strategies, such as split fuel injection and the introduction of a portion of the fuel directly before the start of combustion. This study investigates the effect of injection timing and the amount of fuel injected for stratification on the combustion and emissions of a CAI engine. The experimental research was performed on a single-cylinder engine with direct gasoline injection. CAI combustion was achieved using negative valve overlap and exhaust gas trapping. The experiments were performed at constant engine fueling, and intake boost was applied to control the excess air ratio. The results show that the application of the late injection strategy has a significant effect on the heat release process. In general, the later the injection and the more fuel injected for stratification, the earlier the auto-ignition occurs. However, the experimental findings reveal that the effect of stratification on combustion duration is much more complex. Changes in combustion are reflected in NOX emissions. The attainable level of stratification is limited by the excessive emission of unburned hydrocarbons, CO and soot.

  6. Numerical investigation on thermal stratification and striping phenomena in various coolants

    International Nuclear Information System (INIS)

    Zumao Yang; Muramatsu, Toshiharu

    2000-02-01

    It is important to study thermal stratification and striping phenomena because they can induce thermal fatigue failure of structures. This presentation uses the AQUA code, developed at the Japan Nuclear Cycle Development Institute (JNC), to investigate the characteristics of these thermal phenomena in water, liquid sodium, liquid lead and carbon dioxide gas. There are altogether eight calculated cases with the same Richardson number and initial inlet hot velocity in the thermal stratification calculations; four cases have the same velocity difference between the hot and cold inlet fluids, and the other four have the same temperature difference. The calculated results show that: (1) the fluid properties and initial conditions have considerable effects on thermal stratification, which is determined by the combination of thermal conduction, viscous dissipation, buoyancy force, etc.; and (2) the gas has thermal stratification characteristics distinct from those of the liquids because, for horizontal flow, the most intense exchange of momentum and energy usually occurs at the hot-cold interface for a liquid, whereas buoyancy and natural convection shift the region of rapid exchange away from the hot-cold interface for a gas. In the thermal striping analysis, only the first step of the work has been finished. The calculated results show that: (1) vertical flow differs somewhat in its thermal stratification characteristics from horizontal flow; and (2) for deeper thermal striping analysis in the calculated domain, more attention should be paid to the central region along the Z-direction for the liquids and to the low-velocity region for the gas. (author)

  7. Vertical mixing and coherent anticyclones in the ocean: the role of stratification

    Directory of Open Access Journals (Sweden)

    I. Koszalka

    2010-01-01

    The role played by wind-forced anticyclones in the vertical transport and mixing at the ocean mesoscale is investigated with a primitive-equation numerical model in an idealized configuration. The focus of this work is to determine how the stratification impacts such transport.

    The flows, forced only at the surface by an idealized wind forcing, are predominantly horizontal and, on average, quasigeostrophic. Inside vortex cores and intense filaments, however, the dynamics is strongly ageostrophic.

    Mesoscale anticyclones appear as "islands" of increased penetration of wind energy into the ocean interior and they represent the maxima of available potential energy. The amount of available potential energy is directly correlated with the degree of stratification.

    The wind energy injected at the surface is transferred to depth through the generation and subsequent straining effect of Vortex Rossby Waves (VRWs), and through near-inertial internal oscillations trapped inside anticyclonic vortices. Both these mechanisms are affected by stratification. Stronger transfer but larger confinement close to the surface is found when the stratification is stronger. For weaker stratification, vertical mixing close to the surface is less intense but below about 150 m attains substantially higher values due to an increased contribution both of VRWs, whose time scale is on the order of a few days, and of near-inertial motions, with a time scale of a few hours.

  8. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) belonging to a small data set comprising 55 samples (stations) at which water samples were collected. Because data sets in ecology and the environmental sciences are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on the comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
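
    The bootstrap procedure can be sketched as follows. The data matrix here is synthetic (three latent factors plus noise) with the same nominal dimensions as the study (55 stations, p = 22 variables), so the numbers only illustrate how eigenvalue variability shrinks as the sample size grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_full, p = 55, 22                      # 55 stations, 22 water-quality variables
    latent = rng.normal(size=(n_full, 3))   # synthetic stand-in for the real matrix
    data = latent @ rng.normal(size=(3, p)) + 0.5 * rng.normal(size=(n_full, p))

    def leading_eigenvalue(x):
        """First eigenvalue of the covariance matrix of x (PCA on centred data)."""
        cov = np.cov(x - x.mean(axis=0), rowvar=False)
        return np.linalg.eigvalsh(cov)[-1]

    for n in (20, 30, 40, 50):
        boot = [leading_eigenvalue(data[rng.integers(0, n_full, size=n)])
                for _ in range(100)]        # 100 bootstrap samples per sample size
        print(f"N={n:2d}: first eigenvalue mean={np.mean(boot):6.2f}, sd={np.std(boot):5.2f}")
    ```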

  9. Implementing system-wide risk stratification approaches: A review of critical success and failure factors.

    Science.gov (United States)

    Huckel Schneider, Carmen; Gillespie, James A; Wilson, Andrew

    2017-05-01

    Risk stratification has become a widely used tool for linking people identified as at risk of health deterioration to the most appropriate evidence-based care. This article systematically reviews recent literature to determine key factors that have been identified as critical enablers of and/or barriers to successful implementation of risk stratification tools at a system level. A systematic search found 23 articles and four promising protocols for inclusion in the review, covering the use of 20 different risk stratification tools. These articles reported on only a small fraction of the risk stratification tools used in health systems, suggesting that while the development and statistical validation of risk stratification algorithms is widely reported, there has been little published evaluation of how they are implemented in real-world settings. Controlled studies provided some evidence that the use of risk stratification tools in combination with a care management plan offers patient benefits, and that the use of a risk stratification tool to determine components of a care management plan may contribute to reductions in hospital readmissions, improved patient satisfaction and improved patient outcomes. Studies with the strongest focus on implementation used qualitative and case study methods. Among these, the literature converged on four key areas of implementation that were found to be critical for overcoming barriers to success: the engagement of clinicians and safeguarding equity, both of which address barriers of acceptance; the health system context, to address administrative, political and system design barriers; and data management and integration, to address logistical barriers.

  10. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  11. Thermal stratification in a scaled-down suppression pool of the Fukushima Daiichi nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Byeongnam, E-mail: jo@vis.t.u-tokyo.ac.jp [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Erkan, Nejdet [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Takahashi, Shinji [Department of Nuclear Engineering and Management, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Song, Daehun [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Hyundai and Kia Corporate R& D Division, Hyundai Motors, 772-1, Jangduk-dong, Hwaseong-Si, Gyeonggi-Do 445-706 (Korea, Republic of); Sagawa, Wataru; Okamoto, Koji [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan)

    2016-08-15

    Highlights: • Thermal stratification was reproduced in a scaled-down suppression pool of the Fukushima Daiichi nuclear power plants. • Horizontal temperature profiles were uniform in the toroidal suppression pool. • Subcooling-steam flow rate map of thermal stratification was obtained. • Steam bubble-induced flow model in suppression pool was suggested. • Bubble frequency strongly depends on the steam flow rate. - Abstract: Thermal stratification in the suppression pool of the Fukushima Daiichi nuclear power plants was experimentally investigated in sub-atmospheric pressure conditions using a 1/20 scale torus shaped setup. The thermal stratification was reproduced in the scaled-down suppression pool and the effect of the steam flow rate on different thermal stratification behaviors was examined for a wide range of steam flow rates. A sparger-type steam injection pipe that emulated Fukushima Daiichi Unit 3 (F1U3) was used. The steam was injected horizontally through 132 holes. The development (formation and disappearance) of thermal stratification was significantly affected by the steam flow rate. Interestingly, the thermal stratification in the suppression pool vanished when subcooling became lower than approximately 5 °C. This occurred because steam bubbles are not well condensed at low subcooling temperatures; therefore, those bubbles generate significant upward momentum, leading to mixing of the water in the suppression pool.

  12. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  13. Novel biomarkers for risk stratification in pulmonary arterial hypertension

    Directory of Open Access Journals (Sweden)

    Thomas Zelniker

    2015-10-01

    Risk stratification in pulmonary arterial hypertension (PAH) is paramount to identifying individuals at highest risk of death. So far, there are only limited parameters for prognostication in patients with PAH. 95 patients with confirmed PAH were included in the present analysis and followed for a total of 4 years. Blood samples were analysed for serum levels of N-terminal pro-brain natriuretic peptide, high-sensitivity troponin T (hsTnT), pro-atrial natriuretic peptide (proANP), growth differentiation factor 15, soluble fms-like tyrosine kinase 1 and placental growth factor. 27 (28.4%) patients died during a follow-up of 4 years. Levels of all tested biomarkers, except for placental growth factor, were significantly elevated in nonsurvivors compared with survivors. Receiver operating characteristic analyses demonstrated that cardiac biomarkers had the highest power in predicting mortality. In particular, proANP exhibited the highest area under the curve, followed by N-terminal pro-brain natriuretic peptide and hsTnT. Furthermore, proANP and hsTnT added significant additive prognostic value to the established markers in the categorical and continuous net reclassification index. Moreover, after Cox regression, proANP (hazard ratio (HR) 1.91), hsTnT (HR 1.41), echocardiographic right ventricular impairment (HR 1.30) and the 6-min walk test (HR 0.97 per 10 m) remained the only significant parameters in the prognostication of mortality. Our data suggest benefits of the implementation of proANP and hsTnT as additive biomarkers for risk stratification in patients with PAH.

  14. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  15. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  16. The formation of low-angle eolian stratification through the migration of protodunes

    Science.gov (United States)

    Ewing, R. C.; Phillips, J. D.; Weymer, B. A.; Barrineaux, P.; Bowling, R.; Nittrouer, J. A.

    2017-12-01

    Protodunes are low-relief, slipfaceless migrating bed forms that represent the emergent form of eolian sand dunes. Protodunes develop as cm-scale topography out of a flat bed of sand and evolve spatially and temporally into dunes with angle-of-repose slipfaces. Protodunes at White Sands Dune Field in New Mexico form at the upwind, trailing margin of the field, on dune stoss slopes, and in interdune areas. Here we analyze protodunes at the upwind margin of White Sands by coupling 200 MHz ground penetrating radar (GPR) with time-series high-resolution topography to characterize the origin and evolution of protodune stratification and the stratigraphic transition into fully developed dunes. We surveyed a 780 m transect in the resultant transport direction of the dune field from SW to NE, from sand patches through protodunes and into the first dune. We used airborne lidar surveys and structure-from-motion photogrammetry from 2007, 2008, 2009, 2010, 2015, and 2016. We find that protodune stratification forms at angles between 0 and 10 degrees by protodune migration. Dip angles increase as protodune amplitude increases along the transect. Accumulation of low-angle stratification increases across the first 650 m and ranges from none to subcritical. Nearly aggradational accumulation of low-angle stratification occurs over the last 100 m and is a precursor to angle-of-repose slipface formation. The origins of the aggradation and slipface development appear to be linked to protodune merging, dune interactions, and possibly to the development of a dune field-scale boundary layer. Protodunes and the formation of low-angle stratification at the upwind margin of White Sands are a good analog to the initiation of dune field development from sand sheets and the formation of low-angle stratification found at the base of eolian successions in the stratigraphic record.

  17. External validation of scoring systems in risk stratification of upper gastrointestinal bleeding.

    Science.gov (United States)

    Anchu, Anna Cherian; Mohsina, Subair; Sureshkumar, Sathasivam; Mahalakshmy, T; Kate, Vikram

    2017-03-01

    The aim of this study was to externally validate four commonly used scoring systems for the risk stratification of patients with upper gastrointestinal bleed (UGIB). Patients with UGIB who underwent endoscopy within 24 h of presentation were stratified prospectively using the pre-endoscopy Rockall score (PRS) >0, complete Rockall score (CRS) >2, Glasgow Blatchford bleeding score (GBS) >3, and modified GBS (m-GBS) >3. Patients were followed up to 30 days. The prognostic accuracy of the scores was assessed by comparing areas under the curve (AUC) for overall risk stratification, re-bleeding, mortality, need for intervention, and length of hospitalization. One hundred and seventy-five patients were studied. All four scores performed well in overall risk stratification [PRS = 0.566 (CI: 0.481-0.651; p = 0.043); CRS = 0.712 (CI: 0.634-0.790; p < 0.001); m-GBS = 0.802 (CI: 0.734-0.871)] and in predicting re-bleeding [AUC = 0.679 (CI: 0.579-0.780; p = 0.003)]. All the scoring systems except PRS were found to be significantly better in detecting 30-day mortality, with high AUC values (CRS = 0.798, p = 0.042; GBS = 0.833, p = 0.023; m-GBS = 0.816, p = 0.031). All four scores demonstrated significant accuracy in the risk stratification of non-variceal patients; however, only GBS and m-GBS were significant in variceal etiology. Higher cutoff scores achieved better sensitivity/specificity [RS > 0 (50/60.8), CRS > 1 (87.5/50.6), GBS > 7 (88.5/63.3), m-GBS > 7 (82.3/72.6)] in the risk stratification. GBS and m-GBS appear to be more valid for risk stratification of UGIB patients in this region. Higher cutoff values achieved better predictive accuracy.

  18. Fatigue of LMFBR piping due to flow stratification

    International Nuclear Information System (INIS)

    Woodward, W.S.

    1983-01-01

    Flow stratification due to reverse flow was simulated in a 1/5-scale water model of a LMFBR primary pipe loop. The stratified flow was observed to have a dynamic interface region which oscillated in a wave pattern. The behavior of the interface was characterized in terms of location, local temperature fluctuation and duration for various reverse flow conditions. A structural assessment was performed to determine the effects of stratified flow on the fatigue life of the pipe. Both the static and dynamic aspects of flow stratification were examined. The dynamic interface produces thermal striping on the inside of the pipe wall which is shown to have the most deleterious effect on the pipe wall and produce significant fatigue damage relative to a static interface

  19. Fatigue of LMFBR piping due to flow stratification

    Energy Technology Data Exchange (ETDEWEB)

    Woodward, W.S.

    1983-01-01

    Flow stratification due to reverse flow was simulated in a 1/5-scale water model of a LMFBR primary pipe loop. The stratified flow was observed to have a dynamic interface region which oscillated in a wave pattern. The behavior of the interface was characterized in terms of location, local temperature fluctuation and duration for various reverse flow conditions. A structural assessment was performed to determine the effects of stratified flow on the fatigue life of the pipe. Both the static and dynamic aspects of flow stratification were examined. The dynamic interface produces thermal striping on the inside of the pipe wall which is shown to have the most deleterious effect on the pipe wall and produce significant fatigue damage relative to a static interface.

  20. Formulation parameters influencing self-stratification of coatings

    NARCIS (Netherlands)

    Vink, P.; Bots, T.L.

    1996-01-01

    Research was carried out aimed at the development of self-stratifying paints for steel which after application during film formation spontaneously form two well established layers of primer and top coat. The parameters affecting stratification were investigated for combinations of epoxy resins and

  1. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
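
    A simulation-based stand-in for this kind of calculation is sketched below: per-gene two-sample t-tests on normal data with Benjamini-Hochberg FDR control, used to find the smallest per-group sample size that reaches a target average power. This is not the voom-based procedure of the paper or the ssizeRNA package; all settings are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def average_power_bh(n_per_group, n_genes=2000, prop_de=0.1,
                         effect=1.0, fdr=0.05, n_sim=20):
        """Monte-Carlo estimate of average power (fraction of truly differentially
        expressed genes declared significant) for per-gene two-sample t-tests with
        Benjamini-Hochberg FDR control. Normal data stand in for voom-transformed
        counts; every setting here is an illustrative assumption."""
        n_de = int(n_genes * prop_de)
        hits = 0.0
        for _ in range(n_sim):
            shift = np.r_[np.full(n_de, effect), np.zeros(n_genes - n_de)]
            a = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
            b = rng.normal(shift[:, None], 1.0, size=(n_genes, n_per_group))
            p = stats.ttest_ind(a, b, axis=1).pvalue
            # Benjamini-Hochberg step-up procedure.
            order = np.argsort(p)
            thresh = fdr * np.arange(1, n_genes + 1) / n_genes
            passed = p[order] <= thresh
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            significant = np.zeros(n_genes, dtype=bool)
            significant[order[:k]] = True
            hits += significant[:n_de].mean()
        return hits / n_sim

    # Increase n per group until the desired average power (e.g. 0.8) is reached.
    for n in (3, 5, 7, 9):
        print(n, round(average_power_bh(n), 3))
    ```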

  2. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution before such a mark-recapture effort is initiated, given the difficulty of attaining reliable estimates. We make recommendations about which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  3. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive power function and discuss sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose power up to 20%, depending on the value of the dispersion parameter.
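
    A rough, normal-approximation version of the per-arm sample size for the equivalence comparison of two count endpoints is sketched below. The TOST-style formula on the log rate-ratio scale and the over-dispersion adjustment are textbook approximations, not the authors' exact power function, and all rates and margins are invented illustration values.

    ```python
    from math import ceil, log
    from scipy.stats import norm

    def n_per_arm_equivalence(lam_t: float, lam_r: float, margin: float,
                              alpha: float = 0.05, power: float = 0.8,
                              dispersion: float = 0.0) -> int:
        """Approximate per-arm sample size for equivalence of two count endpoints
        on the log rate-ratio scale (TOST, normal approximation).
        dispersion = 0 gives the Poisson case; > 0 mimics negative binomial
        over-dispersion (Var = mu + dispersion * mu^2)."""
        effect = abs(log(lam_t / lam_r))
        if effect >= margin:
            raise ValueError("true rate ratio lies outside the equivalence margin")
        var_unit = 1.0 / lam_t + 1.0 / lam_r + 2.0 * dispersion
        z = norm.ppf(1 - alpha) + norm.ppf(power)
        return ceil(z**2 * var_unit / (margin - effect) ** 2)

    # Illustrative: mean counts 2.0 vs 2.1 per patient, margin log(1.25).
    print(n_per_arm_equivalence(2.0, 2.1, margin=log(1.25)))                  # Poisson
    print(n_per_arm_equivalence(2.0, 2.1, margin=log(1.25), dispersion=0.5))  # over-dispersed
    ```

    Comparing the two calls shows the point the abstract makes: the required sample size grows sharply with the dispersion parameter, so assuming a Poisson model for over-dispersed data would leave the trial underpowered.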

  4. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  5. Investigating Summer Thermal Stratification in Lake Ontario

    Science.gov (United States)

    James, S. C.; Arifin, R. R.; Craig, P. M.; Hamlet, A. F.

    2017-12-01

    Seasonal temperature variations establish strong vertical density gradients (thermoclines) between the epilimnion and hypolimnion. Accurate simulation of vertical mixing and seasonal stratification of large lakes is a crucial element of the thermodynamic coupling between lakes and the atmosphere in integrated models. Time-varying thermal stratification patterns can be accurately simulated with the versatile Environmental Fluid Dynamics Code (EFDC). Lake Ontario bathymetry was interpolated onto a 2-km-resolution curvilinear grid with vertical layering using a new approach in EFDC+, the so-called "sigma-zed" coordinate system which allows the number of vertical layers to be varied based on water depth. Inflow from the Niagara River and outflow to the St. Lawrence River in conjunction with hourly meteorological data from seven local weather stations plus three-hourly data from the North American Regional Reanalysis govern the hydrodynamic and thermodynamic responses of the Lake. EFDC+'s evaporation algorithm was updated to more accurately simulate net surface heat fluxes. A new vertical mixing scheme from Vinçon-Leite that implements different eddy diffusivity formulations above and below the thermocline was compared to results from the original Mellor-Yamada vertical mixing scheme. The model was calibrated by adjusting solar-radiation absorption coefficients in addition to background horizontal and vertical mixing parameters. Model skill was evaluated by comparing measured and simulated vertical temperature profiles at shallow (20 m) and deep (180 m) locations on the Lake. These model improvements, especially the new sigma-zed vertical discretization, accurately capture thermal-stratification patterns with low root-mean-squared errors when using the Vinçon-Leite vertical mixing scheme.

  6. Determining the core stratification in white dwarfs with asteroseismology

    Directory of Open Access Journals (Sweden)

    Charpinet S.

    2017-01-01

    Using the forward modeling approach and a new parameterization for the core chemical stratification in ZZ Ceti stars, we test several situations typical of the usually limited constraints available, such as small numbers of observed independent modes, to carry out asteroseismology of these stars. We find that, even with a limited number of modes, the core chemical stratification (in particular, the location of the steep chemical transitions expected in the oxygen profile) can be determined quite precisely due to the significant sensitivity of some confined modes to partial reflection (trapping) effects. These effects are similar to the well-known trapping induced by the shallower chemical transitions at the edge of the core and at the bottom of the H-rich envelope. We also find that success in unraveling the core structure depends on the information content of the available seismic data. In some cases, it may not be possible to isolate a unique, well-defined seismic solution and the problem remains degenerate. Our results establish that constraining the core chemical stratification in white dwarf stars based solely on asteroseismology is possible, an opportunity that we have started to exploit.

  7. Coolant stratification and its thermohydrodynamic specificity under natural circulation in horizontal steam generator collectors

    Energy Technology Data Exchange (ETDEWEB)

    Blagovechtchenski, A.; Leontieva, V.; Mitriukhin, A. [Saint-Petersburg Technical Univ. (Russian Federation)

    1997-12-31

    The experiments and the test facilities for the study of the stratification phenomenon in the hot plenum of the reactor and the upper parts of the steam generator collectors in a nuclear power plant are described. The aim of the experiments was to define the conditions of stratification initiation, to study the temperature field in the upper part, to define the characteristics of the stratification layer, and to study the factors which govern the intensity of the stagnant-volume cooling.

  8. Coolant stratification and its thermohydrodynamic specificity under natural circulation in horizontal steam generator collectors

    Energy Technology Data Exchange (ETDEWEB)

    Blagovechtchenski, A; Leontieva, V; Mitriukhin, A [Saint-Petersburg Technical Univ. (Russian Federation)

    1998-12-31

    The experiments and the test facilities for the study of the stratification phenomenon in the hot plenum of the reactor and the upper parts of the steam generator collectors in a nuclear power plant are described. The aim of the experiments was to define the conditions of stratification initiation, to study the temperature field in the upper part, to define the characteristics of the stratification layer, and to study the factors which govern the intensity of the stagnant-volume cooling.

  9. Crystallite size variation of TiO_2 samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO_2) is an oxide semiconductor that may be found in a mixed phase or in the distinct phases brookite, anatase and rutile. In this work, the influence of the residence time at a given temperature on the physical properties of TiO_2 powder was studied. After the powder synthesis, the samples were divided, heat treated at 650 °C with a heating ramp of up to 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a residence time of 5 hours onward, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in the average crystallite size with increasing residence time of the heat treatment. (author)
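
    Average crystallite size from X-ray diffraction peak broadening is usually obtained with the Scherrer equation; a minimal sketch is given below. The peak position, peak width and shape factor are illustrative values, not results from the paper.

    ```python
    from math import cos, radians

    def scherrer_size(fwhm_deg: float, two_theta_deg: float,
                      wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
        """Crystallite size D = K * lambda / (beta * cos(theta)), with beta the
        peak FWHM in radians and theta half the diffraction angle."""
        beta = radians(fwhm_deg)
        theta = radians(two_theta_deg / 2.0)
        return k * wavelength_nm / (beta * cos(theta))

    # Illustrative anatase (101) peak: 2-theta ~ 25.3 deg, FWHM 0.4 deg, Cu K-alpha.
    print(f"D = {scherrer_size(0.4, 25.3):.1f} nm")
    ```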

  10. Risk stratification in emergency patients by copeptin

    DEFF Research Database (Denmark)

    Iversen, Kasper; Gøtze, Jens P; Dalsgaard, Morten

    2014-01-01

    BACKGROUND: Rapid risk stratification is a core task in emergency medicine. Identifying patients at high and low risk shortly after admission could help clinical decision-making regarding treatment, level of observation, allocation of resources and post discharge follow-up. The purpose of the pre...

  11. Quantitative risk stratification in Markov chains with limiting conditional distributions.

    Science.gov (United States)

    Chan, David C; Pollett, Philip K; Weinstein, Milton C

    2009-01-01

    Many clinical decisions require patient risk stratification. The authors introduce the concept of limiting conditional distributions, which describe the equilibrium proportion of surviving patients occupying each disease state in a Markov chain with death. Such distributions can quantitatively describe risk stratification. The authors first establish conditions for the existence of a positive limiting conditional distribution in a general Markov chain and describe a framework for risk stratification using the limiting conditional distribution. They then apply their framework to a clinical example of a treatment indicated for high-risk patients, first to infer the risk of patients selected for treatment in clinical trials and then to predict the outcomes of expanding treatment to other populations of risk. For the general chain, a positive limiting conditional distribution exists only if patients in the earliest state have the lowest combined risk of progression or death. The authors show that in their general framework, outcomes and population risk are interchangeable. For the clinical example, they estimate that previous clinical trials have selected the upper quintile of patient risk for this treatment, but they also show that expanded treatment would weakly dominate this degree of targeted treatment, and universal treatment may be cost-effective. Limiting conditional distributions exist in most Markov models of progressive diseases and are well suited to represent risk stratification quantitatively. This framework can characterize patient risk in clinical trials and predict outcomes for other populations of risk.
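
    Numerically, a limiting conditional (quasi-stationary) distribution is the normalized left eigenvector of the transition matrix restricted to the transient (surviving) states, associated with its largest eigenvalue. The sketch below uses an invented three-state disease progression chain with an absorbing death state, not a model from the paper.

    ```python
    import numpy as np

    # Transition matrix over states (mild, moderate, severe, dead); "dead" is absorbing.
    P = np.array([
        [0.85, 0.10, 0.02, 0.03],
        [0.05, 0.80, 0.10, 0.05],
        [0.00, 0.05, 0.80, 0.15],
        [0.00, 0.00, 0.00, 1.00],
    ])

    Q = P[:3, :3]                       # restriction to the transient (surviving) states
    eigvals, eigvecs = np.linalg.eig(Q.T)
    i = np.argmax(eigvals.real)         # Perron eigenvalue of Q
    lcd = np.abs(eigvecs[:, i].real)
    lcd /= lcd.sum()                    # limiting conditional (quasi-stationary) distribution

    print("equilibrium share of survivors in each state:", np.round(lcd, 3))
    ```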

  12. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    sample size is very large, especially if any sub-national stratification of estimates is required.

  13. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
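
    A short simulation makes the key behaviour concrete: the sampling distribution of the mean has expected value equal to the population mean, its standard error shrinks as sigma/sqrt(n), and it becomes approximately normal even for a skewed parent population. The exponential population and sample sizes below are illustrative choices only.

        import numpy as np

        rng = np.random.default_rng(1)
        population_mean, reps = 1.0, 10_000

        for n in (5, 25, 100):
            # Draw many samples of size n from a skewed (exponential) population
            # and inspect the resulting distribution of sample means.
            means = rng.exponential(scale=population_mean, size=(reps, n)).mean(axis=1)
            print(f"n={n:3d}  mean of sample means={means.mean():.3f}  "
                  f"SE simulated={means.std(ddof=1):.3f}  SE theory={population_mean / np.sqrt(n):.3f}")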

  14. The effect of surface albedo and grain size distribution on ...

    African Journals Online (AJOL)

    Sand dams are very useful in arid and semi arid lands (ASALs) as facilities for water storage and conservation. Soils in ASALs are mainly sandy and major water loss is by evaporation and infiltration. This study investigated the effect of sand media characteristics, specifically surface albedo, grain size and stratification on ...

  15. Dependence of offshore wind turbine fatigue loads on atmospheric stratification

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.; Ott, Søren

    2014-01-01

    The stratification of the atmospheric boundary layer (ABL) is classified in terms of the M-O length and subsequently used to determine the relationship between ABL stability and the fatigue loads of a wind turbine located inside an offshore wind farm. Recorded equivalent fatigue loads, representing blade-bending and tower bottom bending, are combined with the operational statistics from the instrumented wind turbine as well as with meteorological statistics defining the inflow conditions. Only a part of all possible inflow conditions are covered through the approximately 8200 hours of combined measurements. The fatigue polar has been determined for an (almost) complete 360° inflow sector for both load sensors, representing mean wind speeds below and above rated wind speed, respectively, with the inflow conditions classified into three different stratification regimes: unstable, neutral and stable conditions. In general, impact of ABL stratification is clearly seen on wake affected inflow cases for both blade and tower fatigue loads. However, the character of this dependence varies significantly with the type of inflow conditions – e.g. single wake inflow or multiple wake inflow.

  16. Response of water temperatures and stratification to changing climate in three lakes with different morphometry

    Science.gov (United States)

    Magee, Madeline R.; Wu, Chin H.

    2017-12-01

    Water temperatures and stratification are important drivers for ecological and water quality processes within lake systems, and changes in these with increases in air temperature and changes to wind speeds may have significant ecological consequences. To properly manage these systems under changing climate, it is important to understand the effects of increasing air temperatures and wind speed changes in lakes of different depths and surface areas. In this study, we simulate three lakes that vary in depth and surface area to elucidate the effects of the observed increasing air temperatures and decreasing wind speeds on lake thermal variables (water temperature, stratification dates, strength of stratification, and surface heat fluxes) over a century (1911-2014). For all three lakes, simulations showed that epilimnetic temperatures increased, hypolimnetic temperatures decreased, the length of the stratified season increased due to earlier stratification onset and later fall overturn, stability increased, and longwave and sensible heat fluxes at the surface increased. Overall, lake depth influences the presence of stratification, Schmidt stability, and differences in surface heat flux, while lake surface area influences differences in hypolimnion temperature, hypolimnetic heating, variability of Schmidt stability, and stratification onset and fall overturn dates. Larger surface area lakes have greater wind mixing due to increased surface momentum. Climate perturbations indicate that our larger study lakes have more variability in temperature and stratification variables than the smaller lakes, and this variability increases with larger wind speeds. For all study lakes, Pearson correlations and climate perturbation scenarios indicate that wind speed has a large effect on temperature and stratification variables, sometimes greater than changes in air temperature, and wind can act to either amplify or mitigate the effect of warmer air temperatures on lake thermal

  17. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of

  18. Monitoring of coolant temperature stratification on piping components in WWER-440 NPPs

    International Nuclear Information System (INIS)

    Hudcovsky, S.; Slanina, M.; Badiar, S.

    2001-01-01

    The presentation deals with the aims of the non-standard temperature measurements installed on the primary and secondary circuits of WWER-440 NPPs and explains the reasons for coolant temperature stratification in the piping components. It describes the measurement methods used on the piping, the extent to which the temperature measurements are installed in the EBO and EMO units, and illustrates the results of the coolant temperature stratification measurements. (Authors)

  19. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM...... but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical...... opportunities for elucidating the origins and biogeochemical properties of FDOM...

  20. Risk stratification of gallbladder polyps (1-2 cm) for surgical intervention with 18F-FDG PET/CT.

    Science.gov (United States)

    Lee, Jaehoon; Yun, Mijin; Kim, Kyoung-Sik; Lee, Jong-Doo; Kim, Chun K

    2012-03-01

    We assessed the value of (18)F-FDG uptake in the gallbladder polyp (GP) in risk stratification for surgical intervention and the optimal cutoff level of the parameters derived from GP (18)F-FDG uptake for differentiating malignant from benign etiologies in a select, homogeneous group of patients with 1- to 2-cm GPs. Fifty patients with 1- to 2-cm GPs incidentally found on the CT portion of PET/CT were retrospectively analyzed. All patients had histologic diagnoses. GP (18)F-FDG activity was visually scored positive (≥liver) or negative (<liver); the SUV of the GP (SUVgp) and the GP-to-liver uptake ratio (GP/L ratio) were also measured. Univariate and multivariate logistic regression analyses were performed to determine the utility of patient and clinical variables--that is, sex, age, gallstone, polyp size, and three (18)F-FDG-related parameters in risk stratification. Twenty GPs were classified as malignant and 30 as benign. Multivariate analyses showed that the age and all parameters (visual criteria, SUVgp, and GP/L) related to (18)F-FDG uptake were significant risk factors, with the GP/L being the most significant. The sex, size of GPs, and presence of concurrent gallstones were found to be insignificant. (18)F-FDG uptake in a GP is a strong risk factor that can be used to determine the necessity of surgical intervention more effectively than other known risk factors. However, all criteria derived from (18)F-FDG uptake presented in this series may be applicable to the assessment of 1- to 2-cm GPs.

  1. Seasonal variations of the upper ocean salinity stratification in the Tropics

    Science.gov (United States)

    Maes, Christophe; O'Kane, Terence J.

    2014-03-01

    In comparison to the deep ocean, the upper mixed layer is a region typically characterized by substantial vertical gradients in water properties. Within the Tropics, the rich variability in the vertical shapes and forms that these structures can assume through variation in the atmospheric forcing results in a differential effect in terms of the temperature and salinity stratification. Rather than focusing on the strong halocline above the thermocline, commonly referred to as the salinity barrier layer, the present study takes into account the respective thermal and saline dependencies in the Brunt-Väisälä frequency (N2) in order to isolate the specific role of the salinity stratification in the layers above the main pycnocline. We examine daily vertical profiles of temperature and salinity from an ocean reanalysis over the period 2001-2007. We find that significant seasonal variations in the Brunt-Väisälä frequency profiles are limited to the upper 300 m depth. Based on this, we define the ocean salinity stratification (OSS) as the stabilizing effect (positive values) due to the haline part of N2 averaged over the upper 300 m. In many regions of the tropics, the OSS contributes 40-50% to N2 as compared to the thermal stratification and, in some specific regions, exceeds it for a few months of the seasonal cycle. Away from the tropics, for example, near the centers of action of the subtropical gyres, there are regions characterized by the permanent absence of OSS. In other regions previously characterized with salinity barrier layers, the OSS obviously shares some common variations; however, we show that where temperature and salinity are mixed over the same depth, the salinity stratification can be significant. In addition, relationships between the OSS and the sea surface salinity are shown to be well defined and quasilinear in the tropics, providing some indication that in the future, analyses that consider both satellite surface salinity
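
    The diagnostic described here rests on splitting the Brunt-Väisälä frequency into thermal and haline contributions. The sketch below shows that decomposition in simplified form, assuming a linear equation of state with constant expansion and contraction coefficients and idealized tropical profiles, rather than the full reanalysis fields used in the study.

        import numpy as np

        g, alpha, beta = 9.81, 2.0e-4, 7.6e-4   # gravity, thermal expansion (1/degC), haline contraction (per g/kg)

        def n2_components(z, temp, salt):
            """Split N^2 ~ g*(alpha*dT/dz - beta*dS/dz) into thermal and haline parts
            for a linear equation of state; z increases upward (m)."""
            n2_thermal = g * alpha * np.gradient(temp, z)
            n2_haline = -g * beta * np.gradient(salt, z)
            return n2_thermal, n2_haline

        # Hypothetical upper-ocean profiles: thermocline near 120 m, fresher surface layer
        z = np.linspace(-300.0, 0.0, 61)
        temp = 15.0 + 14.0 / (1.0 + np.exp(-(z + 120.0) / 25.0))
        salt = 35.5 - 1.0 / (1.0 + np.exp(-(z + 60.0) / 15.0))
        n2_t, n2_s = n2_components(z, temp, salt)
        print(f"upper-300 m mean N2: thermal {n2_t.mean():.2e} s^-2, haline {n2_s.mean():.2e} s^-2")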

  2. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample sizes, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.
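
    The blank values above are quoted both as fraction modern carbon (F14C) and as approximate conventional radiocarbon ages; the two are related through age = -8033 * ln(F14C), with 8033 yr the Libby mean life. A quick check reproduces the quoted ~44,000 and ~47,000 yrs BP figures:

        import math

        def f14c_to_age(f14c):
            """Conventional radiocarbon age (yrs BP) from fraction modern carbon."""
            return -8033.0 * math.log(f14c)

        for blank in (0.004, 0.003):
            print(f"F14C = {blank:.3f}  ->  {f14c_to_age(blank):,.0f} yrs BP")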

  3. Dependence of offshore wind turbine fatigue loads on atmospheric stratification

    International Nuclear Information System (INIS)

    Hansen, K S; Larsen, G C; Ott, S

    2014-01-01

    The stratification of the atmospheric boundary layer (ABL) is classified in terms of the M-O length and subsequently used to determine the relationship between ABL stability and the fatigue loads of a wind turbine located inside an offshore wind farm. Recorded equivalent fatigue loads, representing blade-bending and tower bottom bending, are combined with the operational statistics from the instrumented wind turbine as well as with meteorological statistics defining the inflow conditions. Only a part of all possible inflow conditions are covered through the approximately 8200 hours of combined measurements. The fatigue polar has been determined for an (almost) complete 360° inflow sector for both load sensors, representing mean wind speeds below and above rated wind speed, respectively, with the inflow conditions classified into three different stratification regimes: unstable, neutral and stable conditions. In general, impact of ABL stratification is clearly seen on wake affected inflow cases for both blade and tower fatigue loads. However, the character of this dependence varies significantly with the type of inflow conditions – e.g. single wake inflow or multiple wake inflow

  4. Simulation benchmark based on THAI-experiment on dissolution of a steam stratification by natural convection

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, M., E-mail: freitag@becker-technologies.com; Schmidt, E.; Gupta, S.; Poss, G.

    2016-04-01

    Highlights: • We studied the generation and dissolution of steam stratification under natural convection. • We performed a computer code benchmark including blind and open phases. • The dissolution of the stratification was predicted only qualitatively by LP and CFD models during the blind simulation phase. - Abstract: Locally enriched hydrogen, as in a stratification, may contribute to early containment failure in the course of severe nuclear reactor accidents. During accident sequences steam might also accumulate into stratifications, which can directly influence the distribution and ignitability of hydrogen mixtures in containments. An international code benchmark including Computational Fluid Dynamics (CFD) and Lumped Parameter (LP) codes was conducted in the frame of the German THAI program. The basis for the benchmark was experiment TH24.3, which investigates the dissolution of a steam layer subject to natural convection in the steam-air atmosphere of the THAI vessel. The test provides validation data for the development of CFD and LP models to simulate the atmosphere in the containment of a nuclear reactor installation. In test TH24.3 saturated steam is injected into the upper third of the vessel, forming a stratification layer which is then mixed by a superposed thermal convection. In this paper the simulation benchmark is evaluated in addition to a general discussion of the experimental transient of test TH24.3. Concerning the steam stratification build-up and dilution, the numerical programs showed very different results during the blind evaluation phase but improved noticeably during the open simulation phase.

  5. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σ d′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Variability of stratification according to operation of the tidal power plant in Lake Sihwa, South Korea.

    Science.gov (United States)

    Woo, S. B.; Song, J. I.; Jang, T. H.; Park, C. J.; Kwon, H. K.

    2017-12-01

    Artificial forcing associated with operation of the tidal power plant (TPP) changes the physical environment near the power plant. Strong turbulence generated during power production is expected to change the stratification structure inside Lake Sihwa. In order to examine the stratification changes caused by power plant operation, 13-hour observations were performed with a ship-mounted acoustic Doppler current profiler (ADCP) and a Conductivity-Temperature-Depth (CTD) profiler in Lake Sihwa near the TPP. Strong stratification in Lake Sihwa is maintained before TPP operation; the absence of external forces and the freshwater inflow from the land form the stratification in the lake. Strong winds under stratified conditions lead to a two-layer circulation. After a wind event, a multi-layer velocity structure is formed which lasts for approximately 4 h. After the TPP began operation, a jet flow was observed over the entire water column at the beginning of power generation. A vortex is formed by the strong jet flow and is maintained throughout the power generation period. Strong turbulent flow generated by the turbine blades enhances vertical mixing. The external force that dominantly affects Lake Sihwa has changed from wind to this turbulent flow. The stratification was destroyed by the strong turbulent flow and the water column became fully mixed. The resulting changes in stratification structure are expected to continuously affect material transport and the ecological environment.

  7. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.

  8. Influence of precipitating light elements on stable stratification below the core/mantle boundary

    Science.gov (United States)

    O'Rourke, J. G.; Stevenson, D. J.

    2017-12-01

    Stable stratification below the core/mantle boundary is often invoked to explain anomalously low seismic velocities in this region. Diffusion of light elements like oxygen or, more slowly, silicon could create a stabilizing chemical gradient in the outermost core. Heat flow less than that conducted along the adiabatic gradient may also produce thermal stratification. However, reconciling either origin with the apparent longevity (>3.45 billion years) of Earth's magnetic field remains difficult. Sub-isentropic heat flow would not drive a dynamo by thermal convection before the nucleation of the inner core, which likely occurred less than one billion years ago and did not instantly change the heat flow. Moreover, an oxygen-enriched layer below the core/mantle boundary—the source of thermal buoyancy—could establish double-diffusive convection where motion in the bulk fluid is suppressed below a slowly advancing interface. Here we present new models that explain both stable stratification and a long-lived dynamo by considering ongoing precipitation of magnesium oxide and/or silicon dioxide from the core. Lithophile elements may partition into iron alloys under extreme pressure and temperature during Earth's formation, especially after giant impacts. Modest core/mantle heat flow then drives compositional convection—regardless of thermal conductivity—since their solubility is strongly temperature-dependent. Our models begin with bulk abundances for the mantle and core determined by the redox conditions during accretion. We then track equilibration between the core and a primordial basal magma ocean followed by downward diffusion of light elements. Precipitation begins at a depth that is most sensitive to temperature and oxygen abundance and then creates feedbacks with the radial thermal and chemical profiles. Successful models feature a stable layer with low seismic velocity (which mandates multi-component evolution since a single light element typically

  9. Observed variations in stratification and currents in the Zuari estuary, west coast of India

    Digital Repository Service at National Institute of Oceanography (India)

    Sundar, D.; Unnikrishnan, A.S.; Michael, G.S.; Kankonkar, A.; Nidheesh, A.G.; Subeesh, M.P.

    in stratification at different time scales (daily, spring–neap cycle and seasonal) are described. In the mixed tidal regime with semi-diurnal dominance, stratification at higher low water succeeding lower high water is more intense than that at lower low water...

  10. Macro-Micro Linkages and the Role of Mechanisms in Social Stratification Research

    Czech Academy of Sciences Publication Activity Database

    Veselý, Arnošt; Smith, Michael

    2008-01-01

    Vol. 44, No. 3 (2008), pp. 491-509 ISSN 0038-0288 R&D Projects: GA ČR GA403/08/0109; GA MPS(CZ) 1J/005/04-DP2 Institutional research plan: CEZ:AV0Z70280505 Keywords: social stratification research * stratification processes * social mechanisms Subject RIV: AO - Sociology, Demography Impact factor: 0.427, year: 2008 http://dlib.lib.cas.cz/3508/

  11. Numerical solution of chemically reactive non-Newtonian fluid flow: Dual stratification

    Science.gov (United States)

    Rehman, Khalil Ur; Malik, M. Y.; Khan, Abid Ali; Zehra, Iffat; Zahri, Mostafa; Tahir, M.

    2017-12-01

    We have found that only a few attempts are available in the literature relating to the tangent hyperbolic fluid flow induced by stretching cylindrical surfaces. In particular, temperature and concentration stratification effects have not been investigated until now with respect to the tangent hyperbolic fluid model. Therefore, we have considered the tangent hyperbolic fluid flow induced by an acutely inclined cylindrical surface in the presence of both temperature and concentration stratification effects. To be more specific, the fluid flow is attained with the no-slip condition, which implies that the bulk motion of the fluid particles is the same as the stretching velocity of the cylindrical surface. Additionally, the flow field situation is manifested with heat generation, mixed convection and chemical reaction effects. The flow partial differential equations give a complete description of the present problem. Therefore, to trace out the solution, a set of suitable transformations is introduced to convert these equations into ordinary differential equations. In addition, a self-coded computational algorithm is executed to inspect the numerical solution of these reduced equations. The effects of the involved parameters are presented graphically. Furthermore, the variations of the physical quantities are examined and given with the aid of tables. It is observed that the fluid temperature is a decreasing function of the thermal stratification parameter and a similar trend is noticed for the concentration via the solutal stratification parameter.

  12. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in the place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
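
    Although the article derives power and sample size formulas, the same quantities can be approximated by brute-force simulation from pilot estimates. The sketch below, with hypothetical pilot category probabilities for three groups of ordered responses (not data from the MOG/EAE study), estimates the power of the Kruskal-Wallis test at several per-group sample sizes.

        import numpy as np
        from scipy.stats import kruskal

        rng = np.random.default_rng(7)
        categories = np.arange(1, 5)
        pilot_probs = [                       # hypothetical pilot estimates, one row per group
            [0.40, 0.30, 0.20, 0.10],
            [0.30, 0.30, 0.25, 0.15],
            [0.20, 0.25, 0.30, 0.25],
        ]

        def simulated_power(n_per_group, alpha=0.05, n_sim=2000):
            """Monte Carlo power of the Kruskal-Wallis test when each group's ordered
            categorical responses follow the pilot-estimated multinomial distribution."""
            hits = sum(
                kruskal(*[rng.choice(categories, size=n_per_group, p=p) for p in pilot_probs]).pvalue < alpha
                for _ in range(n_sim)
            )
            return hits / n_sim

        for n in (20, 40, 60):
            print(f"n per group = {n}: power ~ {simulated_power(n):.2f}")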

  13. Potential Impacts of Offshore Wind Farms on North Sea Stratification

    Science.gov (United States)

    Carpenter, Jeffrey R.; Merckelbach, Lucas; Callies, Ulrich; Clark, Suzanna; Gaslikova, Lidia; Baschek, Burkard

    2016-01-01

    Advances in offshore wind farm (OWF) technology have recently led to their construction in coastal waters that are deep enough to be seasonally stratified. As tidal currents move past the OWF foundation structures they generate a turbulent wake that will contribute to a mixing of the stratified water column. In this study we show that the mixing generated in this way may have a significant impact on the large-scale stratification of the German Bight region of the North Sea. This region is chosen as the focus of this study since the planning of OWFs is particularly widespread. Using a combination of idealised modelling and in situ measurements, we provide order-of-magnitude estimates of two important time scales that are key to understanding the impacts of OWFs: (i) a mixing time scale, describing how long a complete mixing of the stratification takes, and (ii) an advective time scale, quantifying for how long a water parcel is expected to undergo enhanced wind farm mixing. The results are especially sensitive to both the drag coefficient and type of foundation structure, as well as the evolution of the pycnocline under enhanced mixing conditions—both of which are not well known. With these limitations in mind, the results show that OWFs could impact the large-scale stratification, but only when they occupy extensive shelf regions. They are expected to have very little impact on large-scale stratification at the current capacity in the North Sea, but the impact could be significant in future large-scale development scenarios. PMID:27513754

  14. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    Full Text Available The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
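
    A bare-bones version of the kind of simulation gridsampler automates is shown below: given assumed category probabilities and a fixed number of constructs elicited per grid, it estimates the probability that every category reaches a minimum count for a given number of participants. All parameter values are hypothetical placeholders for a researcher's own estimates.

        import numpy as np

        rng = np.random.default_rng(42)
        category_probs = np.array([0.25, 0.20, 0.15, 0.12, 0.10, 0.08, 0.06, 0.04])  # assumed relative frequencies
        constructs_per_grid = 10

        def prob_requirement_met(n_participants, min_per_category=3, n_sim=2000):
            """Probability that every category receives at least `min_per_category`
            constructs when `n_participants` repertory grids are collected."""
            draws = rng.multinomial(n_participants * constructs_per_grid, category_probs, size=n_sim)
            return np.mean((draws >= min_per_category).all(axis=1))

        for n in (5, 10, 15, 20):
            print(f"{n:2d} participants: P(all categories >= 3) = {prob_requirement_met(n):.2f}")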

  15. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  16. On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.

  17. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
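
    One of the two summary-statistic classes named here, the folded allele frequency spectrum, can be computed directly from unpolarized diploid genotype counts. The sketch below uses a randomly generated genotype matrix purely for illustration; it is not the PopSizeABC implementation.

        import numpy as np

        def folded_afs(genotypes):
            """Folded allele frequency spectrum from an (n_individuals x n_snps)
            matrix of 0/1/2 genotype counts (unphased, unpolarized data)."""
            n_chrom = 2 * genotypes.shape[0]
            alt_counts = genotypes.sum(axis=0)
            minor_counts = np.minimum(alt_counts, n_chrom - alt_counts)  # fold the spectrum
            return np.bincount(minor_counts, minlength=n_chrom // 2 + 1)

        rng = np.random.default_rng(0)
        freqs = rng.uniform(0.05, 0.95, size=1000)       # hypothetical allele frequencies
        geno = rng.binomial(2, freqs, size=(25, 1000))   # 25 diploid genomes, 1000 SNPs
        print(folded_afs(geno))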

  18. Univariate/multivariate genome-wide association scans using data from families and unrelated samples.

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2009-08-01

    Full Text Available As genome-wide association studies (GWAS) are becoming more popular, two approaches, among others, could be considered in order to improve statistical power for identifying genes contributing subtle to moderate effects to human diseases. The first approach is to increase sample size, which could be achieved by combining both unrelated and familial subjects together. The second approach is to jointly analyze multiple correlated traits. In this study, by extending generalized estimating equations (GEEs), we propose a simple approach for performing univariate or multivariate association tests for the combined data of unrelated subjects and nuclear families. In particular, we correct for population stratification by integrating principal component analysis and transmission disequilibrium test strategies. The proposed method allows for multiple siblings as well as missing parental information. Simulation studies show that the proposed test has improved power compared to two popular methods, EIGENSTRAT and FBAT, by analyzing the combined data, while correcting for population stratification. In addition, joint analysis of bivariate traits has improved power over univariate analysis when pleiotropic effects are present. Application to the Genetic Analysis Workshop 16 (GAW16) data sets attests to the feasibility and applicability of the proposed method.
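
    The stratification correction integrated here relies on principal components of the genotype data. A minimal, EIGENSTRAT-style sketch of that step (standardize each SNP, then take the leading singular vectors to use as covariates in the association model) is given below; the genotype matrix is simulated for illustration only.

        import numpy as np

        def genotype_pcs(genotypes, n_components=10):
            """Leading principal-component scores of an (n_samples x n_snps) 0/1/2
            genotype matrix: centre each SNP, scale by sqrt(2p(1-p)), then SVD."""
            p = genotypes.mean(axis=0) / 2.0
            keep = (p > 0) & (p < 1)                          # drop monomorphic SNPs
            x = (genotypes[:, keep] - 2.0 * p[keep]) / np.sqrt(2.0 * p[keep] * (1.0 - p[keep]))
            u, s, _ = np.linalg.svd(x, full_matrices=False)
            return u[:, :n_components] * s[:n_components]     # PC scores per individual

        rng = np.random.default_rng(3)
        geno = rng.binomial(2, rng.uniform(0.1, 0.9, 500), size=(200, 500))
        pcs = genotype_pcs(geno, n_components=4)              # include as covariates in the GEE model
        print(pcs.shape)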

  19. Investigation of the Cross-Section Stratifications of Icons Using Micro-Raman and Micro-Fourier Transform Infrared (FT-IR) Spectroscopy.

    Science.gov (United States)

    Lazidou, Dimitra; Lampakis, Dimitrios; Karapanagiotis, Ioannis; Panayiotou, Costas

    2018-01-01

    The cross-section stratifications of samples, which were removed from six icons, are studied using optical microscopy, micro-Raman spectroscopy, and micro-Fourier transform infrared (FT-IR) spectroscopy. The icons, dated from the 14th to 19th centuries, are prominent examples of Byzantine painting art and are attributed to different artistic workshops of ​​northern Greece. The following materials are identified in the cross-sections of the icon samples using micro-Raman spectroscopy: anhydrite; calcite; carbon black; chrome yellow; cinnabar; gypsum; lead white; minium; orpiment; Prussian blue; red ochre; yellow ochre; and a paint of organic origin which can be either indigo ( Indigofera tinctoria L. and others) or woad ( Isatis tinctoria L.). The same samples are investigated using micro-FT-IR which leads to the following identifications: calcite; calcium oxalates; chrome yellow; gypsum; kaolinite; lead carboxylates; lead sulfate (or quartz); lead white; oil; protein; Prussian blue; saponified oil; shellac; silica; and tree resin. The study of the cross-sections of the icon samples reveals the combinations of the aforementioned inorganic and organic materials. Although the icons span over a long period of six centuries, the same stratification comprising gypsum ground layer, paint layers prepared by modified "egg tempera" techniques (proteinaceous materials mixed with oil and resins), and varnish layer is revealed in the investigated samples. Moreover, the presence of three layers of varnishes, one at the top and other two as intermediate layers, in the cross-section analysis of a sample from Virgin and Child provide evidence of later interventions.

  20. Combustion stratification study of partially premixed combustion using Fourier transform analysis of OH* chemiluminescence images

    KAUST Repository

    Izadi Najafabadi, Mohammad; Somers, Bart; Johansson, Bengt; Dam, Nico

    2017-01-01

    A relatively high level of stratification (qualitatively: lack of homogeneity) is one of the main advantages of partially premixed combustion over the homogeneous charge compression ignition concept. Stratification can smooth the heat release rate

  1. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Complete text of publication follows. Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August, 2003 in Hungary. The sampling was performed at two sites simultaneously: in Budapest (urban site) and K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling with 24 hours duration. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of S, Si, Ca, W, Zn, Pb and Fe were investigated in K-puszta and in Budapest. Average rates (shown in Table 1) of the elemental concentrations were calculated for each stage (in %) from the obtained distributions. The elements can be grouped into two parts on the basis of these data. The majority of the particles containing Fe, Si, Ca (and Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentrations in Budapest than in K-puszta (Fig.1.). The second group consisted of S, Pb and (W). The majority of these elements were found in the 0.25-1 μm size range and in much higher concentrations in Budapest than in K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to the above mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  2. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  3. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    Full Text Available We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.
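
    The richness, evenness and diversity indices compared across extraction sizes can all be computed from a vector of OTU counts. A minimal sketch (with a made-up OTU table row) of richness, Shannon diversity and Pielou evenness:

        import numpy as np

        def diversity_indices(otu_counts):
            """Richness, Shannon diversity (H') and Pielou evenness (J') for one sample."""
            counts = np.asarray(otu_counts, dtype=float)
            counts = counts[counts > 0]
            p = counts / counts.sum()
            richness = counts.size
            shannon = -(p * np.log(p)).sum()
            evenness = shannon / np.log(richness)   # undefined when only one OTU is present
            return richness, shannon, evenness

        print(diversity_indices([120, 80, 40, 10, 5, 3, 1, 1]))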

  4. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  5. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
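
    For intuition about the fixed versus group sequential comparison discussed here (the paper's pre-specified adaptive designs are more elaborate), a toy two-look design with early stopping for efficacy can be simulated directly. The boundary values (~2.80 at the interim, ~1.98 at the final look) are approximate O'Brien-Fleming-type values for one-sided alpha of 0.025, and the effect size, standard deviation and maximum sample size are hypothetical.

        import numpy as np

        rng = np.random.default_rng(11)
        n_max, delta, sigma = 200, 0.35, 1.0       # per-arm maximum size, true effect, SD (hypothetical)
        b_interim, b_final = 2.80, 1.98            # approximate O'Brien-Fleming-type boundaries

        def one_trial():
            """Return (H0 rejected, per-arm sample size used) for one simulated trial."""
            x = rng.normal(delta, sigma, n_max)    # treatment arm
            y = rng.normal(0.0, sigma, n_max)      # control arm
            n1 = n_max // 2                        # interim look at half the maximum size
            z1 = (x[:n1].mean() - y[:n1].mean()) / (sigma * np.sqrt(2.0 / n1))
            if z1 >= b_interim:
                return True, n1                    # stop early for efficacy
            z2 = (x.mean() - y.mean()) / (sigma * np.sqrt(2.0 / n_max))
            return z2 >= b_final, n_max

        results = [one_trial() for _ in range(5000)]
        power = np.mean([rej for rej, _ in results])
        expected_n = np.mean([n for _, n in results])
        print(f"power ~ {power:.2f}, expected per-arm n ~ {expected_n:.0f} (fixed design uses {n_max})")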

  6. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  7. Size-dependent photoacclimation of the phytoplankton community in temperate shelf waters (southern Bay of Biscay)

    KAUST Repository

    Álvarez, E; Moran, Xose Anxelu G.; López-Urrutia, Á.; Nogueira, E

    2015-01-01

    © Inter-Research 2016. Shelf waters of the Cantabrian Sea (southern Bay of Biscay) are productive ecosystems with a marked seasonality. We present the results from 1 yr of monthly monitoring of the phytoplankton community together with an intensive sampling carried out in 2 contrasting scenarios during the summer and autumn in a mid-shelf area. Stratification was apparent on the shelf in summer, while the water column was comparatively well mixed in autumn. The size structure of the photoautotrophic community, from pico-to micro-phytoplankton, was tightly coupled with the meteo-climatic and hydrographical conditions. Over the short term, variations in the size structure and chlorophyll content of phytoplankton cells were related to changes in the physico-chemical environment, through changes in the availability of nutrients and light. Uncoupling between the dynamics of carbon biomass and chlorophyll resulted in chlorophyll to carbon ratios dependent on body size. The slope of the size dependence of chlorophyll content increased with increasing irradiance, reflecting different photoacclimation plasticity from pico-to micro-phytoplankton. The results have important implications for the productivity and the fate of biogenic carbon in this region, since the size dependence of photosynthetic rates is directly related to the size scaling of chlorophyll content.

  8. Size-dependent photoacclimation of the phytoplankton community in temperate shelf waters (southern Bay of Biscay)

    KAUST Repository

    Álvarez, E

    2015-12-09

    © Inter-Research 2016. Shelf waters of the Cantabrian Sea (southern Bay of Biscay) are productive ecosystems with a marked seasonality. We present the results from 1 yr of monthly monitoring of the phytoplankton community together with an intensive sampling carried out in 2 contrasting scenarios during the summer and autumn in a mid-shelf area. Stratification was apparent on the shelf in summer, while the water column was comparatively well mixed in autumn. The size structure of the photoautotrophic community, from pico-to micro-phytoplankton, was tightly coupled with the meteo-climatic and hydrographical conditions. Over the short term, variations in the size structure and chlorophyll content of phytoplankton cells were related to changes in the physico-chemical environment, through changes in the availability of nutrients and light. Uncoupling between the dynamics of carbon biomass and chlorophyll resulted in chlorophyll to carbon ratios dependent on body size. The slope of the size dependence of chlorophyll content increased with increasing irradiance, reflecting different photoacclimation plasticity from pico-to micro-phytoplankton. The results have important implications for the productivity and the fate of biogenic carbon in this region, since the size dependence of photosynthetic rates is directly related to the size scaling of chlorophyll content.

  9. Stratification and salt-wedge in the Seomjin river estuary under the idealized tidal influence

    Science.gov (United States)

    Hwang, Jin Hwan; Jang, Dongmin; Kim, Yong Hoon

    2017-12-01

    Advection, straining, and vertical mixing play primary roles in the process of estuarine stratification. Estuaries can be classified as salt-wedge, partially-mixed or well-mixed depending on the vertical density structure determined by the balancing of advection, mixing and straining. In particular, straining plays a major role in the stratification of the estuarine water body along the estuarine channel. Also, the behavior of a salt wedge with a halocline shape in a stratified channel can be controlled by the competition between straining and mixing induced by buoyancy from the riverine source and tidal forcing. The present study uses Finite Volume Coastal Ocean Model (FVCOM) to show that straining and vertical mixing play major roles in controlling along-channel flow and stratification structures in the Seomjin river estuary (SRE) under idealized conditions. The Potential Energy Anomaly (PEA) dynamic equation quantifies the governing processes thereby enabling the determination of the stratification type. By comparing terms in the equation, we examined how the relative strengths of straining and mixing alter the stratification types in the SRE due to changes in river discharge and the depth resulting from dredging activities. SRE under idealized tidal forcing tends to be partially-mixed based on an analysis of the balance between terms and the vertical structure of salinity, and the morphological and hydrological change in SRE results in the shift of stratification type. While the depth affects the mixing, the freshwater discharge mainly controls the straining, and the balance between mixing and straining determines the final state of the stratification in an estuarine channel. As a result, the development and location of a salt wedge along the channel in a partially mixed and highly stratified condition is also determined by the ratio of straining to mixing. Finally, our findings confirm that the contributions of mixing and straining can be assessed by using the
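
    The Potential Energy Anomaly used here measures the depth-averaged work needed to mix the water column completely, phi = (1/H) * integral of g*z*(rho_mean - rho) dz; it is zero for a mixed column and grows with stratification. A minimal sketch on an idealized two-layer, salt-wedge-like density profile (values are hypothetical, not from the Seomjin simulations):

        import numpy as np

        g = 9.81

        def potential_energy_anomaly(z, rho):
            """phi (J m^-3) for a density profile rho(z); z is negative downward,
            surface at z = 0, and a uniform grid is assumed for simplicity."""
            dz = np.abs(np.diff(z)).mean()
            depth = z.max() - z.min()
            return g * np.sum(z * (rho.mean() - rho)) * dz / depth

        z = np.linspace(-10.0, 0.0, 101)
        rho = np.where(z > -3.0, 1005.0, 1023.0)   # fresh 3 m surface layer over salty water (kg m^-3)
        print(f"phi = {potential_energy_anomaly(z, rho):.1f} J m^-3")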

  10. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco, spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were only lifted within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface, on which the dust devils evolved, also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable related to their vertical grain size distribution and relative particle load, although both dust devils differed in their dimensions and intensities. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  11. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying nonparametric or robust methods (with or without Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
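
    The central finding, that normality tests lose discriminating power at n = 30, is easy to reproduce with a small simulation. The sketch below estimates how often the Shapiro-Wilk test rejects normality for Gaussian and moderately skewed lognormal samples at two sample sizes; the lognormal shape parameter and the number of replicates are arbitrary illustrative choices.

        import numpy as np
        from scipy.stats import shapiro

        rng = np.random.default_rng(2016)

        def rejection_rate(sampler, n, n_sim=2000, alpha=0.05):
            """Fraction of simulated samples of size n for which Shapiro-Wilk rejects normality."""
            return np.mean([shapiro(sampler(n)).pvalue < alpha for _ in range(n_sim)])

        for n in (30, 60):
            gauss = rejection_rate(lambda k: rng.normal(size=k), n)
            lognorm = rejection_rate(lambda k: rng.lognormal(mean=0.0, sigma=0.5, size=k), n)
            print(f"n={n}: rejected {gauss:.1%} of Gaussian samples, {lognorm:.1%} of lognormal samples")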

  12. Schematic Harder–Narasimhan stratification for families of principal ...

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 124, Issue 3. Schematic Harder–Narasimhan Stratification for Families of Principal Bundles. Authors: Sudarshan Gurjar and Nitin Nitsure, School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  13. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3.  Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...

  14. Combustion Mode Design with High Efficiency and Low Emissions Controlled by Mixtures Stratification and Fuel Reactivity

    Directory of Open Access Journals (Sweden)

    Hu Wang

    2015-08-01

    Full Text Available This paper presents a review of the combustion mode designs with high efficiency and low emissions controlled by fuel reactivity and mixture stratification that have been developed in the authors' group, including the charge reactivity controlled homogeneous charge compression ignition (HCCI) combustion, stratification controlled premixed charge compression ignition (PCCI) combustion, and dual-fuel combustion concepts controlled by both fuel reactivity and mixture stratification. The review starts with the charge reactivity controlled HCCI combustion, and the works on HCCI fuelled with both high cetane number fuels, such as DME and n-heptane, and high octane number fuels, such as methanol, natural gas, gasoline and mixtures of gasoline/alcohols, are reviewed and discussed. Since a single fuel cannot meet the reactivity requirements under different loads to control the combustion process, the studies related to concentration stratification and dual-fuel charge reactivity controlled HCCI combustion are then presented, which have been shown to have the potential to achieve effective combustion control. The efforts of using both mixture and thermal stratifications to achieve the auto-ignition and combustion control are also discussed. Thereafter, both charge reactivity and mixture stratification are then applied to control the combustion process. The potential and capability of the thermal-atmosphere controlled compound combustion mode and the dual-fuel reactivity controlled compression ignition (RCCI)/highly premixed charge combustion (HPCC) mode to achieve clean and high efficiency combustion are then presented and discussed. Based on these results and discussions, combustion mode design with high efficiency and low emissions controlled by fuel reactivity and mixture stratification in the whole operating range is proposed.

  15. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.

  16. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach by the specular reflection approximation. Temperature and radius dependence of the transverse magnetoresistance of cylindrical Bismuth nanowires are given. The obtained values are in good agreement with the experimental results reported by Heremans et al. - Highlights: ► In this study, effects of sample size on the galvanomagnetic properties of Bi nanowires were explained by the Parrott theorem by solving the Boltzmann Transport Equation. ► Transverse magnetoresistance (TMR) ratios have been calculated by the specular reflection approximation. ► Temperature and radius dependence of the transverse magnetoresistance of cylindrical Bismuth nanowires are given. ► The obtained values are in good agreement with the experimental results reported by Heremans et al.

  17. Vertical Stratification Engineering for Organic Bulk-Heterojunction Devices.

    Science.gov (United States)

    Huang, Liqiang; Wang, Gang; Zhou, Weihua; Fu, Boyi; Cheng, Xiaofang; Zhang, Lifu; Yuan, Zhibo; Xiong, Sixing; Zhang, Lin; Xie, Yuanpeng; Zhang, Andong; Zhang, Youdi; Ma, Wei; Li, Weiwei; Zhou, Yinhua; Reichmanis, Elsa; Chen, Yiwang

    2018-05-22

    High-efficiency organic solar cells (OSCs) can be produced through optimization of component molecular design, coupled with interfacial engineering and control of active layer morphology. However, vertical stratification of the bulk-heterojunction (BHJ), a spontaneous activity that occurs during the drying process, remains an intricate problem yet to be solved. Routes toward regulating the vertical separation profile and evaluating the effects on the final device should be explored to further enhance the performance of OSCs. Herein, we establish a connection between the material surface energy, absorption, and vertical stratification, which can then be linked to photovoltaic conversion characteristics. Through assessing the performance of temporary, artificial vertically stratified layers created by the sequential casting of the individual components to form a multilayered structure, optimal vertical stratification can be achieved. Adjusting the surface energy offset between the substrate results in donor and acceptor stabilization of that stratified layer. Further, a trade-off between the photocurrent generated in the visible region and the amount of donor or acceptor in close proximity to the electrode was observed. Modification of the substrate surface energy was achieved using self-assembled small molecules (SASM), which, in turn, directly impacted the polymer donor to acceptor ratio at the interface. Using three different donor polymers in conjunction with two alternative acceptors in an inverted organic solar cell architecture, the concentration of polymer donor molecules at the ITO (indium tin oxide)/BHJ interface could be increased relative to the acceptor. Appropriate selection of SASM facilitated a synchronized enhancement in external quantum efficiency and power conversion efficiencies over 10.5%.

  18. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between ... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials ...

  19. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.

  20. PPOOLEX experiments on stratification and mixing in the wet well pool

    International Nuclear Information System (INIS)

    Laine, J.; Puustinen, M.; Raesaenen, A.; Tanskanen, V.

    2011-03-01

    This report summarizes the results of the thermal stratification and mixing experiments carried out in 2010 with the scaled down, two compartment PPOOLEX test facility designed and constructed at LUT. Steam was blown into the thermally insulated dry well compartment and from there through the DN200 vertical blowdown pipe to the condensation pool filled with sub-cooled water. The main purpose of the experiment series was to generate verification data for evaluating the capability of GOTHIC and APROS codes to predict stratification and mixing phenomena. Another objective was to test the sound velocity measurement system. Altogether five experiments were carried out. The experiments consisted of a small steam flow rate stratification period and of a mixing period with continuously or stepwise increasing flow rate. The dry well structures were heated up to the level of approximately 90 deg. C before the actual experiments. The initial water bulk temperature was 20 deg. C. When the steam flow rate was low enough (typically ∼100-150 g/s) temperatures below the blowdown pipe outlet remained constant while increasing heat-up occurred towards the pool surface layers indicating strong thermal stratification of the wet well pool water. During the stratification period the highest measured temperature difference between pool bottom and surface was approximately 40 deg. C. During the mixing period total mixing of the pool volume was not achieved in any of the experiments. The bottom layers heated up significantly but never reached the same temperature as the topmost layers. The lowest measured temperature difference between the pool bottom and surface was 7-8 deg. C. According to the test results, it seems that a small void fraction doesn't have an effect on the speed of sound in water and that the acquired sound velocity measurement system cannot be used for the estimation of void fraction in the wet well water pool. However, more tests on this issue have to be executed

  2. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, usually constraints are set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable by the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
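
    The package itself is in R and searches the space of partitions with a genetic algorithm; purely as an illustration of the cost that such a search evaluates for every candidate stratification, the sketch below computes, in Python and for a single target variable (an assumed simplification of the multivariate case), the minimum sample size under Neyman allocation that meets a target coefficient of variation. All names and numbers are illustrative and are not part of the package's API.

```python
import numpy as np

def neyman_sample_size(N_h, S_h, y_bar, target_cv):
    """Minimum total sample size, and its Neyman allocation, for one target variable.

    N_h: population counts per stratum; S_h: per-stratum standard deviations of the
    target variable; y_bar: population mean; target_cv: required CV of the estimate.
    """
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    N = N_h.sum()
    W_h = N_h / N
    V = (target_cv * y_bar) ** 2                      # allowed variance of the estimated mean
    n = (W_h * S_h).sum() ** 2 / (V + (W_h * S_h ** 2).sum() / N)
    n_h = n * (W_h * S_h) / (W_h * S_h).sum()         # Neyman allocation across strata
    return int(np.ceil(n)), np.ceil(n_h).astype(int)

# three atomic strata collapsed into one candidate partition (illustrative figures)
print(neyman_sample_size(N_h=[5000, 3000, 2000], S_h=[12.0, 7.5, 3.0],
                         y_bar=40.0, target_cv=0.02))
```

    A genetic algorithm of the kind the abstract describes would repeatedly merge atomic strata into candidate partitions, score each partition with a cost of this sort, and retain the partitions whose required sample is smallest while still meeting all precision constraints.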

  3. Proceedings of the specialists meeting on experience with thermal fatigue in LWR piping caused by mixing and stratification

    International Nuclear Information System (INIS)

    1998-01-01

    This specialists meeting on experience with thermal fatigue in LWR piping caused by mixing and stratification, was held in June 1998 in Paris. It included five sessions. Session 1: operating experience (7 papers): Historical perspective; EDF experience with local thermohydraulic phenomena in PWRs: impacts and strategies; Thermal fatigue in safety injection lines of French PWRs: technical problems, regulatory requirements, concerns about other areas; US NRC Regulatory perspective on unanticipated thermal fatigue in LWR piping; Failure to the Residual Heat Removal system suction line pipe in Genkai unit 1 caused by thermal stratification cycling; Emergency Core Cooling System pipe crack incident at Tihange unit 1; Two leakages induced by thermal stratification at the Loviisa power plant). Session 2: thermal hydraulic phenomena (5 papers): Thermal stratification in small pipes with respect to fatigue effects and so called 'Banana effect'; Thermal stratification in the surge line of the Korean next generation reactor; Thermal stratification in horizontal pipes investigated in UPTF-TRAM and HDR facilities; Research on thermal stratification in un-isolable piping of reactor pressure boundary; Thermal mixing phenomena in piping systems: 3D numerical simulation and design considerations. Session 3: response of material and structure (5 papers): Fatigue induced by thermal stratification, Results of tests and calculations of the COUFAST model; Laboratory simulation of thermal fatigue cracking as a basis for verifying life models; Thermo-mechanical analysis methods for the conception and the follow up of components submitted to thermal stratification transients; Piping analysis methods of a PWR surge line for stratified flow; The thermal stratification effect on surge lines, The VVER estimation. Session 4: monitoring aspects (4 papers): Determination of the thermal loadings affecting the auxiliary lines of the reactor coolant system in French PWR plants; Expected and

  4. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above mentioned procedures

  5. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
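
    As a rough, hedged illustration of the "random chance" scenario described above, the Monte Carlo sketch below draws sources in random order until every code in a hypothetical population has been seen at least once and reports the average number of draws needed; all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def saturation_sample_size(n_sources=500, n_codes=30, p_code=0.1, reps=1000):
    """Mean (and 95th percentile) number of randomly drawn sources needed
    to observe every code at least once."""
    draws = []
    for _ in range(reps):
        # each source holds each code independently with probability p_code
        pop = rng.random((n_sources, n_codes)) < p_code
        seen = np.zeros(n_codes, dtype=bool)
        for i, src in enumerate(rng.permutation(n_sources), start=1):
            seen |= pop[src]
            if seen.all():
                draws.append(i)
                break
    return np.mean(draws), np.quantile(draws, 0.95)

print(saturation_sample_size())
```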

  6. Water Stratification Raster Images for the Gulf of Maine

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This geodatabase contains seasonal water stratification raster images for the Gulf of Maine. They were created by interpolating water density (sigma t) values at 0...

  7. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the tendency of the coefficient of variation was to decrease as the sample size increased, while in the homogeneous gelatin-gel it always remained constant at around 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area/image area and maximum air-cell height/image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of air cells present in the structure determined the size of the RVE. It was concluded that RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
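
    A minimal sketch of the convergence criterion described above, assuming the 25 replicate measurements of apparent Young's modulus are available for each tested sample size; the tolerance is an arbitrary illustrative choice.

```python
import numpy as np

def find_rve(sizes, replicates_by_size, tol=0.005):
    """Smallest sample size at which the coefficient of variation has converged.

    sizes: increasing sample edge lengths (e.g. mm); replicates_by_size: one array
    of apparent Young's moduli per size; tol: allowed change in CV between sizes.
    """
    cvs = [np.std(r, ddof=1) / np.mean(r) for r in replicates_by_size]
    for i in range(1, len(sizes)):
        if abs(cvs[i] - cvs[i - 1]) <= tol:
            return sizes[i], cvs
    return sizes[-1], cvs  # no convergence within the tested range
```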

  8. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼ 5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in solution containing carbon substantially enhanced precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report

  9. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A number ...

  10. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.
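
    For orientation only: one commonly quoted far-field self-attenuation correction for a uniform slab, written in terms of the measured gamma-ray transmission T, is sketched below; the container-size formalism discussed in the paper is more detailed than this, so the expression is only a hedged indication of the kind of factor involved.

```latex
% Far-field self-attenuation correction for a uniform slab with transmission T = e^{-\mu \rho x}:
\mathrm{CF}(T) = \frac{-\ln T}{1 - T},
\qquad
\text{relative bias} \approx \frac{\mathrm{CF}(T_{\text{item}})}{\mathrm{CF}(T_{\text{standard}})} - 1 .
```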

  11. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates, and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
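
    The exact bounds in the paper are not reproduced here; as a hedged sketch, the helper below uses a commonly cited normal approximation for comparing two negative binomial event rates with equal allocation and a common follow-up time, which is close in spirit to the upper bound described in the abstract. All inputs at the bottom are illustrative.

```python
from math import ceil, log
from scipy.stats import norm

def nb_sample_size_per_group(rate0, rate1, dispersion, followup=1.0,
                             alpha=0.05, power=0.80):
    """Approximate per-group size for comparing two negative binomial rates.

    dispersion k parameterizes Var(Y) = mu + k * mu**2; followup is the common
    exposure per subject (the paper's bounds also handle unequal follow-up).
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_log_rr = 1 / (followup * rate0) + 1 / (followup * rate1) + 2 * dispersion
    return ceil(z ** 2 * var_log_rr / log(rate1 / rate0) ** 2)

print(nb_sample_size_per_group(rate0=0.8, rate1=0.5, dispersion=0.7, followup=2.0))
```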

  12. Statistical Support for Analysis of the Social Stratification and Economic Inequality of the Country’s Population

    Directory of Open Access Journals (Sweden)

    Aksyonova Irina V.

    2017-12-01

    Full Text Available The aim of the article is to summarize the theoretical and methodological as well as information and analytical support for statistical research of economic and social stratification in society and conduct an analysis of the differentiation of the population of Ukraine in terms of the economic component of social inequality. The theoretical and methodological level of the research is studied, and criteria for social stratification and inequalities in society, systems, models and theories of social stratification of the population are singled out. The indicators of social and economic statistics regarding the differentiation of the population by income level are considered as the research tools. As a result of the analysis it was concluded that the economic inequality of the population leads to changes in the social structure, which requires formation of a new social stratification of society. The basis of social stratification is indicators of the population well-being, which require a comprehensive study. Prospects for further research in this area are the analysis of the components of economic inequality that determine and influence the social stratification of the population of the country, the formation of the middle class, and the study of the components of the human development index as a cross-currency indicator of the socio-economic inequality of the population.

  13. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article is about how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
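
    Taken literally, the rule of thumb in the abstract amounts to the following one-liner (the 20% dropout rate and the per-arm size of 64 are made-up numbers for illustration):

```python
from math import ceil

def adjust_for_dropouts(n_no_dropout, dropout_rate):
    """Add the number of subjects expected to drop out of a sample of the
    original (dropout-free) size, as the rule of thumb prescribes."""
    return n_no_dropout + ceil(dropout_rate * n_no_dropout)

print(adjust_for_dropouts(64, 0.20))  # 64 -> 77 subjects per arm
```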

  14. Tidal asymmetries of velocity and stratification over a bathymetric depression in a tropical inlet

    Science.gov (United States)

    Waterhouse, Amy F.; Valle-Levinson, Arnoldo; Morales Pérez, Rubén A.

    2012-10-01

    Observations of current velocity, sea surface elevation and vertical profiles of density were obtained in a tropical inlet to determine the effect of a bathymetric depression (hollow) on the tidal flows. Surveys measuring velocity profiles were conducted over a diurnal tidal cycle with mixed spring tides during dry and wet seasons. Depth-averaged tidal velocities during ebb and flood tides behaved according to Bernoulli dynamics, as expected. The dynamic balance of depth-averaged quantities in the along-channel direction was governed by along-channel advection and pressure gradients with baroclinic pressure gradients only being important during the wet season. The vertical structure of the along-channel flow during flood tides exhibited a mid-depth maximum with lateral shear enhanced during the dry season as a result of decreased vertical stratification. During ebb tides, along-channel velocities in the vicinity of the hollow were vertically sheared with a weak return flow at depth due to choking of the flow on the seaward slope of the hollow. The potential energy anomaly, a measure of the amount of energy required to fully mix the water column, showed two peaks in stratification associated with ebb tide and a third peak occurring at the beginning of flood. After the first mid-ebb peak in stratification, ebb flows were constricted on the seaward slope of the hollow resulting in a bottom return flow. The sinking of surface waters and enhanced mixing on the seaward slope of the hollow reduced the potential energy anomaly after maximum ebb. The third peak in stratification during early flood occurred as a result of denser water entering the inlet at mid-depth. This dense water mixed with ambient deep waters increasing the stratification. Lateral shear in the along-channel flow across the hollow allowed trapping of less dense water in the surface layers further increasing stratification.

  15. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on the determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by the relative NAA method or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  16. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  17. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. Additional to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Combustion Stratification for Naphtha from CI Combustion to PPC

    KAUST Repository

    Vallinayagam, R.; Vedharaj, S.; An, Yanzhao; Dawood, Alaaeldin; Izadi Najafabadi, Mohammad; Somers, Bart; Johansson, Bengt

    2017-01-01

    This study demonstrates the combustion stratification from conventional compression ignition (CI) combustion to partially premixed combustion (PPC). Experiments are performed in an optical CI engine at a speed of 1200 rpm for diesel and naphtha (RON

  19. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction makes significant practical sense to stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR prediction approach with rolling mechanism is proposed. In the modeling procedure, a new developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window, for the next step ahead forecasting, rolls on by adding the most recent derived prediction result while deleting the first value of the former used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
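
    A minimal sketch of the rolling mechanism described above, assuming an ordinary least-squares AR fit on the current window: each one-step-ahead forecast is appended to the window while the oldest observation is dropped before the next fit. The AR order, data, and horizon are illustrative.

```python
import numpy as np

def rolling_ar_forecast(series, order=2, horizon=5):
    """One-step-ahead AR(order) forecasts with a rolling data window."""
    window = list(np.asarray(series, dtype=float))
    forecasts = []
    for _ in range(horizon):
        y = np.array(window)
        # lagged design matrix (intercept + `order` lags) for ordinary least squares
        X = np.column_stack([y[order - j - 1:len(y) - j - 1] for j in range(order)])
        X = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
        next_val = coef[0] + coef[1:] @ y[-1:-order - 1:-1]
        forecasts.append(float(next_val))
        window.append(next_val)   # roll on: keep the newest prediction ...
        window.pop(0)             # ... and delete the oldest value
    return forecasts

print(rolling_ar_forecast([2.1, 2.3, 2.2, 2.6, 2.8, 2.7, 3.0, 3.1], order=2))
```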

  20. A Practical Risk Stratification Approach for Implementing a Primary Care Chronic Disease Management Program in an Underserved Community.

    Science.gov (United States)

    Xu, Junjun; Williams-Livingston, Arletha; Gaglioti, Anne; McAllister, Calvin; Rust, George

    2018-01-01

    The use of value metrics is often dependent on payer-initiated health care management incentives. There is a need for practices to define and manage their own patient panels regardless of payer to participate effectively in population health management. A key step is to define a panel of primary care patients with high comorbidity profiles. Our sample included all patients seen in an urban academic family medicine clinic over a two-year period. The simplified risk stratification was built using internal electronic health record and billing system data based on ICD-9 codes. There were 347 patients classified as high-risk out of the 5,364 patient panel. Average age was 59 years (SD 15). Hypertension (90%), hyperlipidemia (62%), and depression (55%) were the most common conditions among high-risk patients. Simplified risk stratification provides a feasible option for our team to understand and respond to the nuances of population health in our underserved community.
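
    Purely as a hypothetical illustration of a billing-code-driven panel definition of this kind (the ICD-9 prefixes, threshold, and data layout below are invented, not the clinic's actual rule):

```python
from collections import defaultdict

# illustrative ICD-9 prefixes for a few chronic conditions (hypothetical list)
CHRONIC_PREFIXES = {"401": "hypertension", "272": "hyperlipidemia",
                    "311": "depression", "250": "diabetes", "428": "heart failure"}

def high_risk_panel(billing_rows, min_conditions=3):
    """billing_rows: iterable of (patient_id, icd9_code) pulled from billing data."""
    conditions = defaultdict(set)
    for patient_id, code in billing_rows:
        for prefix, name in CHRONIC_PREFIXES.items():
            if code.startswith(prefix):
                conditions[patient_id].add(name)
    return {pid for pid, found in conditions.items() if len(found) >= min_conditions}

rows = [("p1", "401.9"), ("p1", "272.4"), ("p1", "311"), ("p2", "401.1")]
print(high_risk_panel(rows))  # {'p1'}
```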

  1. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
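
    The authors provide an R package for this; as a generic, hedged sketch of the same simulation-based idea (not their implementation), the Python code below simulates two groups with a small compositional shift, computes Euclidean pairwise distances, evaluates the PERMANOVA pseudo-F with a permutation test, and reports the fraction of simulated studies that reject the null. Effect size and dimensions are arbitrary.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

def pseudo_f(D, labels):
    """PERMANOVA pseudo-F from a distance matrix (Anderson-style partition)."""
    N = len(labels)
    groups = np.unique(labels)
    ss_total = (D ** 2).sum() / (2 * N)                 # sum over i<j of d_ij^2 / N
    ss_within = sum((D[np.ix_(idx, idx)] ** 2).sum() / (2 * len(idx))
                    for idx in (np.where(labels == g)[0] for g in groups))
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (N - a))

def permanova_power(n_per_group=20, n_features=50, shift=0.4,
                    n_sims=200, n_perms=200, alpha=0.05):
    """Fraction of simulated studies in which the permutation test rejects H0."""
    labels = np.repeat([0, 1], n_per_group)
    rejections = 0
    for _ in range(n_sims):
        X = rng.normal(size=(2 * n_per_group, n_features))
        X[labels == 1, 0] += shift                      # group effect on one axis
        D = squareform(pdist(X))
        f_obs = pseudo_f(D, labels)
        f_null = [pseudo_f(D, rng.permutation(labels)) for _ in range(n_perms)]
        p = (1 + sum(f >= f_obs for f in f_null)) / (1 + n_perms)
        rejections += p <= alpha
    return rejections / n_sims

print(permanova_power())
```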

  2. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased as approximately all possible pairs n(n-1)/2 up to sibships of size 6. Increasing the recombination (θ) distance between the marker and the trait loci reduced empirically the power for both linkage and exclusion, as a function of approximately (1-2θ)^4.

  3. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10 000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems with the exception of monoculture systems. For both climate zones, land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  4. Ion species stratification within strong shocks in two-ion plasmas

    Science.gov (United States)

    Keenan, Brett D.; Simakov, Andrei N.; Taitano, William T.; Chacón, Luis

    2018-03-01

    Strong collisional shocks in multi-ion plasmas are featured in many environments, with Inertial Confinement Fusion (ICF) experiments being one prominent example. Recent work [Keenan et al., Phys. Rev. E 96, 053203 (2017)] answered in detail a number of outstanding questions concerning the kinetic structure of steady-state, planar plasma shocks, e.g., the scaling of the shock width with the Mach number, M. However, it did not discuss shock-driven ion-species stratification (e.g., relative concentration modification and temperature separation). These are important effects since many recent ICF experiments have evaded explanation by standard, single-fluid, radiation-hydrodynamic (rad-hydro) numerical simulations, and shock-driven fuel stratification likely contributes to this discrepancy. Employing the state-of-the-art Vlasov-Fokker-Planck code, iFP, along with multi-ion hydro simulations and semi-analytics, we quantify the ion stratification by planar shocks with arbitrary Mach number and relative species concentration for two-ion plasmas in terms of ion mass and charge ratios. In particular, for strong shocks, we find that the structure of the ion temperature separation has a nearly universal character across ion mass and charge ratios. Additionally, we find that the shock fronts are enriched with the lighter ion species and the enrichment scales as M^4 for M ≫ 1.

  5. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  6. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
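
    A small sketch of the logistic-regression case as the abstract describes it: form two equal groups whose log-odds differ by the slope times twice the standard deviation of the covariate, keep the overall event probability roughly unchanged, and plug the resulting proportions into the usual two-sample formula. The numbers at the bottom are invented.

```python
from math import ceil, exp, log, sqrt
from scipy.stats import norm

def logit(p):
    return log(p / (1 - p))

def expit(x):
    return 1 / (1 + exp(-x))

def equivalent_two_sample_n(beta, sd_x, p_overall, alpha=0.05, power=0.80):
    """Total n for testing slope `beta` via the approximately equivalent two-sample problem."""
    # two equal groups whose log-odds differ by beta * 2 * sd_x, centred on the overall rate
    p1 = expit(logit(p_overall) - beta * sd_x)
    p2 = expit(logit(p_overall) + beta * sd_x)
    p_bar = (p1 + p2) / 2
    z = (norm.ppf(1 - alpha / 2) * sqrt(2 * p_bar * (1 - p_bar))
         + norm.ppf(power) * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return 2 * ceil((z / (p2 - p1)) ** 2)

print(equivalent_two_sample_n(beta=0.35, sd_x=1.0, p_overall=0.3))
```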

  7. The prostate cancer risk stratification (ProCaRS) project: Recursive partitioning risk stratification analysis

    International Nuclear Information System (INIS)

    Rodrigues, George; Lukka, Himu; Warde, Padraig; Brundage, Michael; Souhami, Luis; Crook, Juanita; Cury, Fabio; Catton, Charles; Mok, Gary; Martin, Andre-Guy; Vigneault, Eric; Morris, Jim; Warner, Andrew; Gonzalez Maldonado, Sandra; Pickles, Tom

    2013-01-01

    Background: The Genitourinary Radiation Oncologists of Canada (GUROC) published a three-group risk stratification (RS) system to assist prostate cancer decision-making in 2001. The objective of this project is to use the ProCaRS database to statistically model the predictive accuracy and clinical utility of a proposed new multi-group RS schema. Methods: The RS analyses utilized the ProCaRS database that consists of 7974 patients from four Canadian institutions. Recursive partitioning analysis (RPA) was utilized to explore the sub-stratification of groups defined by the existing three-group GUROC scheme. 10-fold cross-validated C-indices and the Net Reclassification Index were both used to assess multivariable models and compare the predictive accuracy of existing and proposed RS systems, respectively. Results: The recursive partitioning analysis suggested that the existing GUROC classification system could be altered to accommodate as many as six separate and statistically unique groups based on differences in BFFS (C-index 0.67 and AUC 0.70). GUROC low-risk patients would be divided into new favorable-low and low-risk groups based on PSA ⩽6 and PSA >6. GUROC intermediate-risk patients can be subclassified into low-intermediate and high-intermediate groups. GUROC high-intermediate-risk is defined as existing GUROC intermediate-risk with PSA ⩾10 AND either T2b/c disease or T1-T2a disease with Gleason 7. GUROC high-risk patients would be subclassified into an additional extreme-risk group (GUROC high-risk AND (positive cores ⩾87.5% OR PSA >30)). Conclusions: Proposed RS subcategories have been identified by a RPA of the ProCaRS database

  8. A method to determine stratification efficiency of thermal energy storage processes independently from storage heat losses

    DEFF Research Database (Denmark)

    Haller, M.Y.; Yazdanshenas, Eshagh; Andersen, Elsa

    2010-01-01

    A new method for the calculation of a stratification efficiency of thermal energy storages based on the second law of thermodynamics is presented. The biasing influence of heat losses is studied theoretically and experimentally. Theoretically, it does not make a difference if the stratification ... process is in agreement with the first law of thermodynamics. A comparison of the stratification efficiencies obtained from experimental results of charging, standby, and discharging processes gives meaningful insights into the different mixing behaviors of a storage tank that is charged and discharged ...

  9. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study ... of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.

  10. Models of the plasma corona formation and stratification of exploding micro-wires

    International Nuclear Information System (INIS)

    Volkov, N.B.; Sarkisov, G.S.; Struve, K.W.; McDaniel, D.H.

    2005-01-01

    Models of plasma corona formation and of the stratification of the gas-plasma core of an exploding micro-wire are proposed. The applicability of electron magnetohydrodynamics to the description of physical processes in the forming plasma corona is generalized to account for the change in particle number resulting from evaporation, ionization, and the loss of electrons to the wire surface. The necessity of accounting for the influence of a hot plasma corona on the stratification of the gas-plasma core is substantiated. [ru]

  12. Seed flotation and germination of salt marsh plants: The effects of stratification, salinity, and/or inundation regime

    Science.gov (United States)

    Elsey-Quirk, T.; Middleton, B.A.; Proffitt, C.E.

    2009-01-01

    We examined the effects of cold stratification and salinity on seed flotation of eight salt marsh species. Four of the eight species were tested for germination success under different stratification, salinity, and flooding conditions. Species were separated into two groups: four species received wet stratification and four dry stratification, and fresh seeds of all species were tested for flotation and germination. Fresh seeds of seven out of eight species had flotation times independent of salinity, six of which had average flotation times of at least 50 d. Seeds of Spartina alterniflora and Spartina patens had the shortest flotation times, averaging 24 and 26 d, respectively. Following wet stratification, the flotation time of S. alterniflora seeds in higher salinity water (15 and 36 ppt) was reduced by over 75% and germination declined by more than 90%. Wet stratification reduced the flotation time of Distichlis spicata seeds in fresh water but increased seed germination from 2 to 16% in a fluctuating inundation regime. Fresh seeds of Iva frutescens and S. alterniflora were capable of germination and therefore are non-dormant during dispersal. Fresh seeds of I. frutescens had similar germination to dry stratified seeds, ranging from 25-30%. Salinity reduced seed germination for all species except for S. alterniflora. A fluctuating inundation regime was important for seed germination of the low marsh species and for germination following cold stratification. The conditions that resulted in seeds sinking faster were similar to the conditions that resulted in higher germination for two of four species. © 2009 Elsevier B.V.

  13. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made based on left-overs of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  14. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-07-07

    Recent research has shown that the yield strength of metals increases steeply as sample size decreases. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.

  15. Developing a PTEN-ERG Signature to Improve Molecular Risk Stratification in Prostate Cancer

    Science.gov (United States)

    2017-10-01

    Award number: W81XWH-16-1-0737. Title: Developing a PTEN-ERG Signature to Improve Molecular Risk Stratification in Prostate Cancer. ... that there exist distinctive molecular correlates of PTEN loss in the context of ETS-negative versus ETS-positive human prostate cancers and that

  16. A three-gene expression signature model for risk stratification of patients with neuroblastoma.

    Science.gov (United States)

    Garcia, Idoia; Mayol, Gemma; Ríos, José; Domenech, Gema; Cheung, Nai-Kong V; Oberthuer, André; Fischer, Matthias; Maris, John M; Brodeur, Garrett M; Hero, Barbara; Rodríguez, Eva; Suñol, Mariona; Galvan, Patricia; de Torres, Carmen; Mora, Jaume; Lavarino, Cinzia

    2012-04-01

    Neuroblastoma is an embryonal tumor with contrasting clinical courses. Despite elaborate stratification strategies, precise clinical risk assessment still remains a challenge. The purpose of this study was to develop a PCR-based predictor model to improve clinical risk assessment of patients with neuroblastoma. The model was developed using real-time PCR gene expression data from 96 samples and tested on separate expression data sets obtained from real-time PCR and microarray studies comprising 362 patients. On the basis of our prior study of differentially expressed genes in favorable and unfavorable neuroblastoma subgroups, we identified three genes, CHD5, PAFAH1B1, and NME1, strongly associated with patient outcome. The expression pattern of these genes was used to develop a PCR-based single-score predictor model. The model discriminated patients into two groups with significantly different clinical outcomes [set 1: 5-year overall survival (OS): 0.93 ± 0.03 vs. 0.53 ± 0.06, 5-year event-free survival (EFS): 0.85 ± 0.04 vs. 0.042 ± 0.06, both P values significant]. The model was an independent marker for survival and robustly classified patients in the total cohort and in different clinically relevant risk subgroups. We propose, for the first time in neuroblastoma, a technically simple PCR-based predictor model that could help refine current risk stratification systems. ©2012 AACR.

  17. Variability of mesozooplankton biomass and individual size in a coast-offshore transect in the Catalan Sea: relationships with chlorophyll a and hydrographic features

    KAUST Repository

    Alcaraz, Miquel; Calbet, Albert; Isari, Stamatina; Irigoien, Xabier; Trepat, Isabel; Saiz, Enric

    2016-01-01

    The temporal and spatial changes of zooplankton and chlorophyll a concentration were studied during the warm stratification period (early June) at three stations whose traits corresponded to the coastal, frontal, and offshore-dome water conditions described for the Catalan Sea. We sampled the stations for 12 days at a frequency ranging from less than 10 to 10² h, with a spatial resolution ranging from 10 to 10⁴ m. The objective was to determine the variability of mesozooplankton and phytoplankton (chlorophyll a) biomass, and average individual size (mass) across a coast-offshore transect in relation to the stratification conditions prevailing in the NW Mediterranean during summer. The vertical distribution of phytoplankton biomass displayed a clear deep maximum at 60 m depth except at the coastal station. This maximum exists during most of the year and is especially important during the density stratification period. It was accompanied during daylight hours by a coherent zooplankton maximum. At sunset mesozooplankton ascended and dispersed, with larger organisms from deeper layers joining the migrating community and increasing the average individual mass. The highest variability of mesozooplankton biomass, individual mass and chlorophyll a concentration occurred at the front station due to the coupling between the vertical migration of zooplankton and the particular characteristics of the front. According to the data shown, the highest variability was observed at the lowest scales.

  18. Variability of mesozooplankton biomass and individual size in a coast-offshore transect in the Catalan Sea: relationships with chlorophyll a and hydrographic features

    KAUST Repository

    Alcaraz, Miquel

    2016-10-11

    The temporal and spatial changes of zooplankton and chlorophyll a concentration were studied during the warm stratification period (early June) at three stations whose traits corresponded to the coastal, frontal, and offshore-dome water conditions described for the Catalan Sea. We sampled the stations for 12 days at a frequency ranging from less than 10 to 10² h, with a spatial resolution ranging from 10 to 10⁴ m. The objective was to determine the variability of mesozooplankton and phytoplankton (chlorophyll a) biomass, and average individual size (mass) across a coast-offshore transect in relation to the stratification conditions prevailing in the NW Mediterranean during summer. The vertical distribution of phytoplankton biomass displayed a clear deep maximum at 60 m depth except at the coastal station. This maximum exists during most of the year and is especially important during the density stratification period. It was accompanied during daylight hours by a coherent zooplankton maximum. At sunset mesozooplankton ascended and dispersed, with larger organisms from deeper layers joining the migrating community and increasing the average individual mass. The highest variability of mesozooplankton biomass, individual mass and chlorophyll a concentration occurred at the front station due to the coupling between the vertical migration of zooplankton and the particular characteristics of the front. According to the data shown, the highest variability was observed at the lowest scales.

  19. Mixed convection and stratification phenomena in a heavy liquid metal pool

    Energy Technology Data Exchange (ETDEWEB)

    Tarantino, Mariano, E-mail: mariano.tarantino@enea.it [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone (Italy); Martelli, Daniele; Barone, Gianluca [Dipartimento di Ingegneria Civile e Industriale, University of Pisa, Largo Lucio Lazzarino, 1-56100 Pisa Italy (Italy); Di Piazza, Ivan [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone (Italy); Forgione, Nicola [Dipartimento di Ingegneria Civile e Industriale, University of Pisa, Largo Lucio Lazzarino, 1-56100 Pisa Italy (Italy)

    2015-05-15

    Highlights: • Results related to experiments reproducing PLOHS + LOF accident in CIRCE pool facility. • Vertical thermal stratification in large HLM pool. • Transition from forced to natural circulation in HLM pool under DHR conditions. • Heat transfer coefficient measurement in HLM pin bundle. • Nusselt number calculations and comparison with correlations. - Abstract: This work deals with an analysis of the first experimental series of tests performed to investigate mixed convection and stratification phenomena in CIRCE HLM large pool. In particular, the tests concern the transition from nominal flow to natural circulation regime, typical of decay heat removal (DHR) regime. To this purpose the CIRCE pool facility has been updated to host a suitable test section in order to reproduce the thermal-hydraulic behaviour of a HLM pool-type reactor. The test section basically consists of an electrical bundle (FPS) made up of 37 pins arranged in a hexagonal wrapped lattice with a pitch-to-diameter ratio of 1.8. Along the FPS active length, three sections were instrumented to monitor the heat transfer coefficient along the bundle as well as the cladding temperatures at different ranks of the sub-channels. This paper reports the experimental data as well as a preliminary analysis and discussion of the results, focusing on the most relevant tests of the campaign, namely Test I (48 h) and Test II (97 h). Temperatures along three sections of the FPS and at inlet and outlet sections of the main components were reported and the Nusselt number in the FPS sub-channels was investigated together with the void fraction in the riser. Concerning the investigation of in-pool thermal stratification phenomena, the temperatures in the whole LBE pool were monitored at different elevations and radial locations. The analysis of experimental data obtained from Tests I and II underlines the occurrence of thermal stratification phenomena in the region placed between the outlet sections of

  20. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    Science.gov (United States)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management
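
    As a toy illustration of the comparison described above (synthetic data and an assumed stratification variable, not the study's weather grid or crop models), the sketch below contrasts stratified and simple random sampling of grid points when estimating a regional mean:

    ```python
    # Hedged sketch: stratified vs. simple random sampling of grid points for a
    # regional mean. All quantities (strata, yields, gradient) are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n_points, n_strata = 34_078, 10
    strata = rng.integers(0, n_strata, size=n_points)              # e.g. climate zones
    stratum_means = np.linspace(4.0, 9.0, n_strata)                # assumed yield gradient (t/ha)
    yields = stratum_means[strata] + rng.normal(0, 0.8, n_points)  # "full coverage" values
    true_mean = yields.mean()

    def random_sample(n):
        idx = rng.choice(n_points, size=n, replace=False)
        return yields[idx].mean()

    def stratified_sample(n):
        est, w = 0.0, 0
        for s in range(n_strata):
            members = np.flatnonzero(strata == s)
            k = max(1, round(n * len(members) / n_points))         # proportional allocation
            est += yields[rng.choice(members, size=k, replace=False)].mean() * len(members)
            w += len(members)
        return est / w

    for n in (10, 30, 100):
        err_r = np.mean([abs(random_sample(n) - true_mean) for _ in range(200)])
        err_s = np.mean([abs(stratified_sample(n) - true_mean) for _ in range(200)])
        print(n, round(err_r, 3), round(err_s, 3))
    ```

    With a clear stratification gradient, the stratified estimate typically shows the smaller average error at the same sample size, mirroring the conclusion that fewer points suffice under stratified sampling.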

  1. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Vol. 49, No. 4 (2015), pp. 239-249, ISSN 0278-6826. R&D Projects: GA ČR (CZ) GBP503/12/G147. Institutional support: RVO:67985858. Keywords: filters; size-resolved penetration; atmospheric aerosol sampling. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 1.953, year: 2015

  2. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  3. Numerical Investigation on Effects of Assigned EGR Stratification on a Heavy Duty Diesel Engine with Two-Stage Fuel Injection

    Directory of Open Access Journals (Sweden)

    Zhaojie Shen

    2018-02-01

    Full Text Available External exhaust gas recirculation (EGR) stratification in diesel engines contributes to reduction of toxic emissions. EGR stratification is weak because current EGR introduction strategies produce strong turbulence and mixing between EGR and intake air. To understand ideal EGR-stratified combustion, EGR was assigned radially at −30 °CA after top dead center (ATDC) to organize strong EGR stratification using computational fluid dynamics (CFD). The effects of assigned EGR stratification on diesel performance and emissions are discussed in this paper. Although nitric oxides (NOx) and soot emissions are both reduced by means of EGR stratification compared to uniform EGR, the trade-off between NOx and soot still exists under the condition of arranged EGR stratification with different fuel injection strategies. A deterioration of soot emissions was observed when the interval between main and post fuel injection increased, while NO emissions increased first and then decreased. The case with a 4 °CA interval between main and post fuel injection is suitable for acceptable NO and soot emissions. Starting the main fuel injection too early or too late is not acceptable, resulting in high NO emissions and high soot emissions, respectively. A main fuel injection start of −10 °CA ATDC is suitable.

  4. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining the type I error probability under all conditions except for Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
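
    The pooled-resampling idea can be illustrated with a short sketch (an assumed implementation of a pooled bootstrap t-test, not the authors' code; the example data are hypothetical):

    ```python
    # Minimal sketch of a pooled-resampling bootstrap test for a difference in
    # means: both groups are resampled from the pooled data to build the null
    # distribution of the t statistic.
    import numpy as np

    def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=0):
        """Two-sided bootstrap p-value for H0: mean(x) == mean(y)."""
        rng = np.random.default_rng(seed)
        x, y = np.asarray(x, float), np.asarray(y, float)

        def t_stat(a, b):
            va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
            return (a.mean() - b.mean()) / np.sqrt(va + vb)

        t_obs = t_stat(x, y)
        pooled = np.concatenate([x, y])                 # pool both groups under H0
        t_null = np.empty(n_boot)
        for b in range(n_boot):
            xb = rng.choice(pooled, size=len(x), replace=True)
            yb = rng.choice(pooled, size=len(y), replace=True)
            t_null[b] = t_stat(xb, yb)
        return np.mean(np.abs(t_null) >= abs(t_obs))

    # small, skewed samples (hypothetical)
    x = np.random.default_rng(1).lognormal(0.0, 1.0, size=8)
    y = np.random.default_rng(2).lognormal(0.5, 1.0, size=8)
    print(pooled_bootstrap_t_test(x, y))
    ```

    Resampling both groups from the pooled data builds the null distribution of the statistic, which is what stabilises the test when each group is small.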

  5. Analysis of stratification effects on mechanical integrity of pressurizer surge line

    International Nuclear Information System (INIS)

    Thomas-Solgadi, E.; Taupin, P.; Ensel, C.

    1992-01-01

    Unexpected thermal movements in pressurizer surge lines have been reported by several PWR operating utilities. Sometimes gaps between pipe and pipe whip restraints can become closed and plastic deformations could result. Moreover, these movements, which were not considered at the design stage, can induce additional stresses, and design limits on fatigue and stresses may be exceeded. These piping movements are caused by the thermal stratification phenomenon in the horizontal part of the surge line (temperature difference between the hot leg and the pressurizer varying from 30 °C to above 160 °C). To assess the mechanical consequences of this 3-dimensional phenomenon, FRAMATOME has developed a computer program using simplified models (1 and 2-dimensional). This method integrates past investigations on thermal-hydraulic variation of the stratification based on plant monitoring programs carried out by FRAMATOME since 1981, and based also on thermal-hydraulic tests and thermal-hydraulic computer code results. The methodology developed by FRAMATOME permits the following calculations: movements of the line in the elastic and plastic domains; stresses (Mises criterion -- calculations in compliance with ASME or RCC-M codes); usage factors in different components (elbows, welds, ...); crack propagation taking into account stratification and plastic shakedown

  6. Effects of cold stratification, sulphuric acid, submersion in hot and ...

    African Journals Online (AJOL)

    Effects of cold stratification, sulphuric acid, submersion in hot and tap water pretreatments in the greenhouse and open field conditions on germination of bladder-Senna ( Colutea armena Boiss. and Huet.) seeds.

  7. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  8. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    Science.gov (United States)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) fractions were examined in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum as measured during 24 h.

  9. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  10. Stratification-mixing cycles and plankton dynamics in a shallow estuary (Limfjord, Denmark)

    DEFF Research Database (Denmark)

    Teixeira, Isabel G.; Crespo, Bibiani G.; Nielsen, Torkel Gissel

    2014-01-01

    The biomass, production and consumption of phytoplankton, bacteria and zooplankton in a shallow Danish estuary (Limfjord) were analysed during a 9-day period. The water column changed between stratified and mixed conditions which influenced the dominant processes in the pelagic system. During strong stratification, phytoplankton was mainly controlled by microzooplankton grazing. A mixing event, which homogenized the water column, possibly provided food to a mussel-dominated benthic community. Concomitantly, zooplankton feeding and reproduction decreased. However, the nutrient input to the upper part of the water column during mixing and the subsequent stabilization provided the ideal conditions for the recovery of phytoplankton from the loss processes from previous days. Microzooplankton, which was also a significant consumer of bacteria throughout the sampling period, was not the only consumer...

  11. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  12. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  13. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
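
    The r-power can also be approximated by straightforward Monte Carlo; the sketch below is not the rPowerSampleSize package, and the endpoint count, effect size, correlation, and single-step Bonferroni rule are all assumptions made for illustration:

    ```python
    # Hedged sketch: Monte Carlo estimate of the r-power (probability of rejecting
    # at least r of m false nulls) for correlated normal endpoints, with a crude
    # per-group sample-size scan.
    import numpy as np
    from scipy import stats

    def r_power(n, m=3, r=2, delta=0.4, rho=0.3, alpha=0.05, n_sim=2000, seed=0):
        rng = np.random.default_rng(seed)
        cov = np.full((m, m), rho) + (1 - rho) * np.eye(m)      # endpoint correlation
        hits = 0
        for _ in range(n_sim):
            x = rng.multivariate_normal(np.zeros(m), cov, size=n)        # control
            y = rng.multivariate_normal(np.full(m, delta), cov, size=n)  # treated
            _, p = stats.ttest_ind(y, x, axis=0)
            if np.sum(p < alpha / m) >= r:                      # Bonferroni-adjusted rejections
                hits += 1
        return hits / n_sim

    for n in range(40, 201, 20):                                # scan for ~80% r-power
        print(n, round(r_power(n), 3))
    ```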

  14. An efficient modeling method for thermal stratification simulation in a BWR suppression pool

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Ling Zou; Hongbin Zhang; Hua Li; Walter Villanueva; Pavel Kudinov

    2012-09-01

    The suppression pool in a BWR plant is not only the major heat sink within the containment system, but also provides major emergency cooling water for the reactor core. In several accident scenarios, such as LOCA and extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; and the pool temperature distribution also affects the NPSHa (Available Net Positive Suction Head) and therefore the performance of the pump which draws cooling water back to the core. Current safety analysis codes use 0-D lumped parameter methods to calculate the energy and mass balance in the pool and therefore have large uncertainty in prediction of scenarios in which stratification and mixing are important. While 3-D CFD methods can be used to analyze realistic 3D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, and therefore long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code has been developed to implement a highly efficient analysis method for stratification where the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. The POOLEX experiments in Finland, which were designed to study phenomena relevant to Nordic-design BWR suppression pools, including thermal stratification and mixing, are used for validation. GOTHIC lumped parameter models are used to obtain boundary conditions for the BMIX++ code and CFD simulations. Comparison of the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data is discussed in detail.

  15. The Social Stratification of the German VET System

    Science.gov (United States)

    Protsch, Paula; Solga, Heike

    2016-01-01

    Germany is widely known for its vocational education and training (VET) system and its dual apprenticeship system in particular. What is often overlooked, however, is the vertical stratification within the German VET system. This is the focus of this study. Our analysis shows that the VET system, like the German school system, is highly…

  16. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal level), and some trials were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
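
    The mechanism is easy to reproduce with the standard two-sample normal approximation (a minimal sketch, not the paper's simulation code; the effect size and standard deviations below are hypothetical): compute the planned sample size from the assumed SD, then the power actually achieved if the true SD differs.

    ```python
    # Hedged sketch: planned n per group for a continuous outcome, and the power
    # actually delivered when the true SD exceeds the SD assumed at design.
    from math import ceil, sqrt
    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    def real_power(n, delta, true_sd, alpha=0.05):
        z_alpha = norm.ppf(1 - alpha / 2)
        ncp = delta / (true_sd * sqrt(2 / n))   # noncentrality of the two-sample z-test
        return norm.cdf(ncp - z_alpha)

    n = n_per_group(delta=5, sd=10)                # planned assuming SD = 10
    print(n, real_power(n, delta=5, true_sd=12))   # power if the SD is really 12
    ```

    In this toy case the nominal 80% power drops to roughly 65% when the true SD is 12 instead of the assumed 10, which is the kind of shortfall the study quantifies.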

  17. Three-tiered risk stratification model to predict progression in Barrett's esophagus using epigenetic and clinical features.

    Directory of Open Access Journals (Sweden)

    Fumiaki Sato

    2008-04-01

    Full Text Available Barrett's esophagus predisposes to esophageal adenocarcinoma. However, the value of endoscopic surveillance in Barrett's esophagus has been debated because of the low incidence of esophageal adenocarcinoma in Barrett's esophagus. Moreover, high inter-observer and sampling-dependent variation in the histologic staging of dysplasia makes clinical risk assessment problematic. In this study, we developed a 3-tiered risk stratification strategy, based on systematically selected epigenetic and clinical parameters, to improve Barrett's esophagus surveillance efficiency. We defined high-grade dysplasia as the endpoint of progression, and Barrett's esophagus progressor patients as Barrett's esophagus patients with either no dysplasia or low-grade dysplasia who later developed high-grade dysplasia or esophageal adenocarcinoma. We analyzed 4 epigenetic and 3 clinical parameters in 118 Barrett's esophagus tissues obtained from 35 progressor and 27 non-progressor Barrett's esophagus patients from Baltimore Veterans Affairs Maryland Health Care Systems and Mayo Clinic. Based on 2-year and 4-year prediction models using linear discriminant analysis (area under the receiver-operator characteristic (ROC) curve: 0.8386 and 0.7910, respectively), Barrett's esophagus specimens were stratified into high-risk (HR), intermediate-risk (IR), or low-risk (LR) groups. This 3-tiered stratification method retained both the high specificity of the 2-year model and the high sensitivity of the 4-year model. Progression-free survivals differed significantly among the 3 risk groups, with p = 0.0022 (HR vs. IR) and p < 0.0001 (HR or IR vs. LR). Incremental value analyses demonstrated that the number of methylated genes contributed most influentially to prediction accuracy. This 3-tiered risk stratification strategy has the potential to exert a profound impact on Barrett's esophagus surveillance accuracy and efficiency.

  18. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  19. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    Reer, B.

    2004-01-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  20. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  1. Evaluation and verification of thermal stratification models for was

    African Journals Online (AJOL)

    prediction of the condition of thermal stratification in WSPs under different hydraulic conditions and ... off coefficient. The models are verified with data collected from the full scale waste .... comparing two mathematical models based ..... 2 Comparison of measured and predicted effluent coliform bacteria (N) against depth.

  2. The 1995 Georges Bank Stratification Study and Moored Array Measurements

    National Research Council Canada - National Science Library

    Alessi, C

    2001-01-01

    .... GLOBEC Northwest Atlantic/Georges Bank field program. The GBSS was designed to investigate the physical processes which control the seasonal development of stratification along the southern flank of Georges Bank during spring and summer...

  3. Evolution of South Atlantic density and chemical stratification across the last deglaciation.

    Science.gov (United States)

    Roberts, Jenny; Gottschalk, Julia; Skinner, Luke C; Peck, Victoria L; Kender, Sev; Elderfield, Henry; Waelbroeck, Claire; Vázquez Riveiros, Natalia; Hodell, David A

    2016-01-19

    Explanations of the glacial-interglacial variations in atmospheric pCO2 invoke a significant role for the deep ocean in the storage of CO2. Deep-ocean density stratification has been proposed as a mechanism to promote the storage of CO2 in the deep ocean during glacial times. A wealth of proxy data supports the presence of a "chemical divide" between intermediate and deep water in the glacial Atlantic Ocean, which indirectly points to an increase in deep-ocean density stratification. However, direct observational evidence of changes in the primary controls of ocean density stratification, i.e., temperature and salinity, remains scarce. Here, we use Mg/Ca-derived seawater temperature and salinity estimates determined from temperature-corrected δ¹⁸O measurements on the benthic foraminifer Uvigerina spp. from deep and intermediate water-depth marine sediment cores to reconstruct the changes in density of sub-Antarctic South Atlantic water masses over the last deglaciation (i.e., 22-2 ka before present). We find that a major breakdown in the physical density stratification significantly lags the breakdown of the deep-intermediate chemical divide, as indicated by the chemical tracers of benthic foraminifer δ¹³C and foraminifer/coral ¹⁴C. Our results indicate that chemical destratification likely resulted in the first rise in atmospheric pCO2, whereas the density destratification of the deep South Atlantic lags the second rise in atmospheric pCO2 during the late deglacial period. Our findings emphasize that the physical and chemical destratification of the ocean are not as tightly coupled as generally assumed.

  4. Numerical analysis on hydrogen stratification and post-inerting of hydrogen risk

    International Nuclear Information System (INIS)

    Peng, Cheng; Tong, Lili; Cao, Xuewu

    2016-01-01

    Highlights: • A three-dimensional computational model was built and the applicability was discussed. • The formation of helium stratification was further studied. • Three influencing factors on the post-inerting of hydrogen risk were analyzed. - Abstract: In the case of severe accidents, the risk of hydrogen explosion threatens the integrity of the nuclear reactor containment. According to nuclear regulations, hydrogen control is required to ensure the safe operation of the nuclear reactor. In this study, the method of Computational Fluid Dynamics (CFD) has been applied to analyze the process of hydrogen stratification and the post-inerting of hydrogen risk in the Large-Scale Gas Mixing Facility. A three-dimensional computational model was built and the applicability of different turbulence models was discussed. The result shows that the helium concentration calculated by the standard k–ε turbulence model is closest to the experimental data. Through analyzing the formation of helium stratification at different injection velocities, it is found that when the injection mass flow is constant and the injection velocity of helium increases, the mixing of helium and air is enhanced while there is little influence on the formation of helium stratification. In addition, the influences of mass flow rate, injection location and direction, and inert gas on the post-inerting of hydrogen risk have been analyzed and the results are as follows: as the mass flow rate increases, the mitigation effect of nitrogen on hydrogen risk is further improved; there is an obvious local difference between the mitigation effects of nitrogen on hydrogen risk for different injection directions and locations; when the inert gas is injected at the same mass flow rate, the mitigation effect of steam on hydrogen risk is better than that of nitrogen. This study can provide technical support for the mitigation of hydrogen risk in small LWR containments.

  5. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points we use capture-mark-recapture methods both to allow us to account for missing individuals, and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years and size-dependent at early life stages and size independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time replicated datasets with small sample sizes and missing individuals without any loss of sample size, and including explanatory covariates.

  6. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
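
    The simulation logic can be sketched as follows; the population sizes and density distributions below are entirely hypothetical placeholders, not the NCSP registry data:

    ```python
    # Hedged sketch: repeated proportional stratified sampling of n = 4,000 from a
    # synthetic population (metropolitan/urban/rural), checking how closely the
    # sample reproduces the population distribution of a 4-category density variable.
    import numpy as np

    rng = np.random.default_rng(0)
    strata_sizes = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_000}
    density_probs = {"metropolitan": [0.10, 0.30, 0.40, 0.20],   # assumed BI-RADS a-d mix
                     "urban":        [0.15, 0.35, 0.35, 0.15],
                     "rural":        [0.20, 0.40, 0.30, 0.10]}

    population = {s: rng.choice(4, size=n, p=density_probs[s]) for s, n in strata_sizes.items()}
    total = sum(strata_sizes.values())
    pop_dist = np.bincount(np.concatenate(list(population.values())), minlength=4) / total

    n_total, n_rep, errors = 4_000, 1_000, []
    for _ in range(n_rep):
        draws = [rng.choice(population[s],
                            size=round(n_total * len(population[s]) / total),
                            replace=False)
                 for s in population]                            # proportional allocation
        sample = np.concatenate(draws)
        est = np.bincount(sample, minlength=4) / len(sample)
        errors.append(np.max(np.abs(est - pop_dist)))

    print("95th percentile of max absolute error:", np.percentile(errors, 95))
    ```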

  7. Fundamental Interactions in Gasoline Compression Ignition Engines with Fuel Stratification

    Science.gov (United States)

    Wolk, Benjamin Matthew

    Transportation accounted for 28% of the total U.S. energy demand in 2011, with 93% of U.S. transportation energy coming from petroleum. The large impact of the transportation sector on global climate change necessitates more-efficient, cleaner-burning internal combustion engine operating strategies. One such strategy that has received substantial research attention in the last decade is Homogeneous Charge Compression Ignition (HCCI). Although the efficiency and emissions benefits of HCCI are well established, practical limits on the operating range of HCCI engines have inhibited their application in consumer vehicles. One such limit is at high load, where the pressure rise rate in the combustion chamber becomes excessively large. Fuel stratification is a potential strategy for reducing the maximum pressure rise rate in HCCI engines. The aim is to introduce reactivity gradients through fuel stratification to promote sequential auto-ignition rather than a bulk-ignition, as in the homogeneous case. A gasoline-fueled compression ignition engine with fuel stratification is termed a Gasoline Compression Ignition (GCI) engine. Although a reasonable amount of experimental research has been performed for fuel stratification in GCI engines, a clear understanding of how the fundamental in-cylinder processes of fuel spray evaporation, mixing, and heat release contribute to the observed phenomena is lacking. Of particular interest is gasoline's pressure sensitive low-temperature chemistry and how it impacts the sequential auto-ignition of the stratified charge. In order to computationally study GCI with fuel stratification using three-dimensional computational fluid dynamics (CFD) and chemical kinetics, two reduced mechanisms have been developed. The reduced mechanisms were developed from a large, detailed mechanism with about 1400 species for a 4-component gasoline surrogate. The two versions of the reduced mechanism developed in this work are: (1) a 96-species version and (2

  8. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g., log) which may sometimes be difficult to obtain. In contrast, estimates of median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two-group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test) in a variety of scenarios and the method is applied to a real example in neurosurgery. Results The method attained a nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two-group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
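
    A rough sketch of the idea under the stated log-normal assumption (the conversion and example numbers below are illustrative, not the authors' exact formula): map the target medians to log-scale means, back out a log-scale variance from the median and the untransformed variance, and apply the usual two-sample formula.

    ```python
    # Hedged sketch: sample size per group for a two-sample t-test on the log scale,
    # specified via a difference in medians and a variance on the untransformed scale.
    from math import ceil, log, sqrt
    from scipy.stats import norm

    def sigma2_log(median, raw_variance):
        # For X ~ LogNormal(mu, s2): median = exp(mu), Var = (exp(s2) - 1) exp(s2) median^2
        u = (1 + sqrt(1 + 4 * raw_variance / median**2)) / 2     # u = exp(s2)
        return log(u)

    def n_per_group(median1, median2, raw_variance, alpha=0.05, power=0.80):
        s2 = sigma2_log((median1 + median2) / 2, raw_variance)   # common log-scale variance (assumption)
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * s2 * z**2 / (log(median1) - log(median2)) ** 2)

    # hypothetical example: medians 10 vs. 14, variance 60 on the raw scale
    print(n_per_group(10, 14, 60))
    ```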

  9. Segmented Poincaré plot analysis for risk stratification in patients with dilated cardiomyopathy.

    Science.gov (United States)

    Voss, A; Fischer, C; Schroeder, R; Figulla, H R; Goernig, M

    2010-01-01

    The prognostic value of heart rate variability in patients with dilated cardiomyopathy (DCM) is limited and does not contribute to risk stratification, although the dynamics of ventricular repolarization differs considerably between DCM patients and healthy subjects. Neither linear nor nonlinear methods of heart rate variability analysis could discriminate between patients at high and low risk for sudden cardiac death. The aim of this study was to analyze the suitability of the newly developed segmented Poincaré plot analysis (SPPA) to enhance risk stratification in DCM. In contrast to the usually applied Poincaré plot analysis, the SPPA retains nonlinear features from the investigated beat-to-beat interval time series. Main features of SPPA are the rotation of the cloud of points and its subsequent variability-dependent segmentation. Significant row and column probabilities were calculated from the segments and discriminated between the risk groups. In conclusion, segmented Poincaré plot analysis of heart rate variability was able to contribute to risk stratification in patients suffering from DCM.
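
    A rough illustration of the described construction (not the authors' exact algorithm; the rotation angle, grid width, and segment count are assumptions) rotates successive RR-interval pairs, segments the rotated cloud into an SD-scaled grid, and reports row and column occupation probabilities:

    ```python
    # Hedged sketch of a segmented Poincaré-plot-style analysis on an RR-interval series.
    import numpy as np

    def segmented_poincare(rr, n_seg=12):
        x, y = rr[:-1], rr[1:]                        # Poincaré plot coordinates
        pts = np.column_stack([x, y]) - [x.mean(), y.mean()]
        theta = np.pi / 4                             # rotate the cloud by 45 degrees
        rot = np.array([[np.cos(theta), np.sin(theta)],
                        [-np.sin(theta), np.cos(theta)]])
        u, v = (pts @ rot.T).T                        # axes along the SD2/SD1 directions
        edges_u = np.linspace(-3 * u.std(), 3 * u.std(), n_seg + 1)
        edges_v = np.linspace(-3 * v.std(), 3 * v.std(), n_seg + 1)
        hist, _, _ = np.histogram2d(u, v, bins=[edges_u, edges_v])
        hist /= hist.sum()
        return hist.sum(axis=1), hist.sum(axis=0)     # row and column probabilities

    rr = 0.8 + 0.05 * np.random.default_rng(0).standard_normal(1000)  # synthetic RR series (s)
    rows, cols = segmented_poincare(rr)
    print(rows.round(3), cols.round(3))
    ```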

  10. Multi-Layered Stratification in the Baltic Sea: Insight from a Modeling Study with Reference to Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Bijan Dargahi

    2017-01-01

    Full Text Available The hydrodynamic and transport characteristics of the Baltic Sea in the period 2000–2009 were studied using a fully calibrated and validated 3D hydrodynamic model with a horizontal resolution of 4.8 km. This study provided new insight into the type and dynamics of vertical structure in the Baltic Sea, not considered in previous studies. Thermal and salinity stratification are both addressed, with a focus on the structural properties of the layers. The detection of cooler regions (dicothermal) within the layer structure is an important finding. The detailed investigation of thermal stratification for a 10-year period (i.e., 2000–2009) revealed some new features. A multilayered structure that contains several thermocline and dicothermal layers was identified from this study. Statistical analysis of the simulation results made it possible to derive the mean thermal stratification properties, expressed as mean temperatures and the normalized layer thicknesses. The three-layered model proposed by previous investigators appears to be valid only during the winter periods; for other periods, a multi-layered structure with more than five layers has been identified during this investigation. This study provides detailed insight into thermal and salinity stratification in the Baltic Sea during a recent decade that can be used as a basis for diverse environmental assessments. It extends previous studies on stratification in the Baltic Sea regarding both the extent and the nature of stratification.

  11. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
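
    The style of approximation discussed above can be written down compactly; the constants below follow one common reading of Wan et al. (2014) for the min/median/max and quartile scenarios and should be checked against the paper (and its spreadsheet) before use:

    ```python
    # Hedged sketch: estimate mean and SD from (min, median, max, n) or from
    # (q1, median, q3, n), assuming approximately normal data.
    from scipy.stats import norm

    def from_min_med_max(a, m, b, n):
        mean = (a + 2 * m + b) / 4
        sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
        return mean, sd

    def from_quartiles(q1, m, q3, n):
        mean = (q1 + m + q3) / 3
        sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
        return mean, sd

    print(from_min_med_max(2.0, 5.0, 11.0, 40))   # hypothetical trial summary
    print(from_quartiles(3.5, 5.0, 7.0, 40))
    ```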

  12. In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size

    Directory of Open Access Journals (Sweden)

    Stefano Schiavon

    2010-01-01

    Full Text Available In vitro degradability with DaisyII (D) equipment is commonly performed with 0.5 g of feed sample in each filter bag. Literature reported that a reduction of the ratio of sample size to bag surface could facilitate the release of soluble or fine particulates. A reduction of sample size to 0.25 g could improve the correlation between the measurements provided by D and the conventional batch culture (BC). This hypothesis was screened by analysing the results of 2 trials. In trial 1, 7 feeds were incubated for 48 h with rumen fluid (3 runs × 4 replications) both with D (0.5 g/bag) and BC; the regressions between the mean values provided for the various feeds in each run by the 2 methods, either for NDF (NDFd) or in vitro true DM (IVTDMD) degradability, had R2 of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs × 8 replications) with D (0.25 g/bag) and BC; the corresponding regressions for NDFd and IVTDMD showed R2 of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.

  13. Authorial and institutional stratification in open access publishing: the case of global health research.

    Science.gov (United States)

    Siler, Kyle; Haustein, Stefanie; Smith, Elise; Larivière, Vincent; Alperin, Juan Pablo

    2018-01-01

    Using a database of recent articles published in the field of Global Health research, we examine institutional sources of stratification in publishing access outcomes. Traditionally, research on inequality in scientific publishing has focused on prestige hierarchies in established print journals. This project examines stratification in contemporary publishing with a particular focus on subscription vs. various Open Access (OA) publishing options. Findings show that authors working at lower-ranked universities are more likely to publish in closed/paywalled outlets, and less likely to choose outlets that involve some sort of Article Processing Charge (APCs; gold or hybrid OA). We also analyze institutional differences and stratification in the APC costs paid in various journals. Authors affiliated with higher-ranked institutions, as well as hospitals and non-profit organizations, pay relatively higher APCs for gold and hybrid OA publications. Results suggest that authors affiliated with high-ranked universities and well-funded institutions tend to have more resources to choose paid publishing options. Our research suggests new professional hierarchies developing in contemporary publishing, where various OA publishing options are becoming increasingly prominent. Just as there is stratification in institutional representation between different types of publishing access, there is also inequality within access types.

  14. Optimising preoperative risk stratification tools for prostate cancer using mpMRI

    Energy Technology Data Exchange (ETDEWEB)

    Reisaeter, Lars A.R.; Losnegaard, Are; Biermann, Martin; Roervik, Jarle [Haukeland University Hospital, Department of Radiology, Bergen (Norway); University of Bergen, Department of Clinical Medicine, Bergen (Norway); Fuetterer, Jurgen J. [Radboud University Nijmegen Medical Centre, Department of Radiology, Nijmegen (Netherlands); Nygaard, Yngve [Haukeland University Hospital, Department of Urology, Bergen (Norway); Monssen, Jan [Haukeland University Hospital, Department of Radiology, Bergen (Norway); Gravdal, Karsten [Haukeland University Hospital, Department of Pathology, Bergen (Norway); Halvorsen, Ole J.; Akslen, Lars A. [Haukeland University Hospital, Department of Pathology, Bergen (Norway); Centre for Cancer Biomarkers CCBIO, Department of Clinical Medicine, University of Bergen (Norway); Haukaas, Svein; Beisland, Christian [University of Bergen, Department of Clinical Medicine, Bergen (Norway); Haukeland University Hospital, Department of Urology, Bergen (Norway)

    2018-03-15

    To improve preoperative risk stratification for prostate cancer (PCa) by incorporating multiparametric MRI (mpMRI) features into risk stratification tools for PCa, CAPRA and D'Amico. 807 consecutive patients operated on by robot-assisted radical prostatectomy at our institution during the period 2010-2015 were followed to identify biochemical recurrence (BCR). 591 patients were eligible for final analysis. We employed stepwise backward likelihood methodology and penalised Cox cross-validation to identify the most significant predictors of BCR including mpMRI features. mpMRI features were then integrated into image-adjusted (IA) risk prediction models and the two risk prediction tools were then evaluated both with and without image adjustment using receiver operating characteristics, survival and decision curve analyses. 37 patients suffered BCR. Apparent diffusion coefficient (ADC) and radiological extraprostatic extension (rEPE) from mpMRI were both significant predictors of BCR. Both IA prediction models reallocated more than 20% of intermediate-risk patients to the low-risk group, reducing their estimated cumulative BCR risk from approximately 5% to 1.1%. Both IA models showed improved prognostic performance with a better separation of the survival curves. Integrating ADC and rEPE from mpMRI of the prostate into risk stratification tools improves preoperative risk estimation for BCR. (orig.)

  15. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly split the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in the binary probit regression model between the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are carried out by simulation under different sample sizes. The results show that, for small sample sizes, the chance of separation occurring is higher with the MLE method than with Firth's approach; for larger sample sizes, the probability decreases and is roughly the same for both methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
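
    As an illustrative sketch of the kind of comparison described above, the snippet below simulates a small dataset with complete separation and fits a probit model with a Firth-type (Jeffreys-prior) penalty. The simulation settings and function names are hypothetical and not taken from the paper; plain MLE, whose coefficients would drift towards infinity under separation, is only noted in a comment.

```python
# Sketch (assumption): Firth-type penalized probit under complete separation.
# Penalized log-likelihood: l*(b) = l(b) + 0.5 * log det(X' W X),
# with probit Fisher weights w_i = phi(eta_i)^2 / (Phi(eta_i) * (1 - Phi(eta_i))).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = (x > 0).astype(float)          # x separates y perfectly -> unpenalized MLE diverges
X = np.column_stack([np.ones(n), x])

def neg_penalized_loglik(beta, X, y, eps=1e-10):
    eta = X @ beta
    p = np.clip(norm.cdf(eta), eps, 1 - eps)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    w = norm.pdf(eta) ** 2 / (p * (1 - p))          # probit Fisher weights
    info = X.T @ (w[:, None] * X)                   # Fisher information X'WX
    penalty = 0.5 * np.linalg.slogdet(info)[1]      # Jeffreys-prior penalty
    return -(loglik + penalty)

fit = minimize(neg_penalized_loglik, x0=np.zeros(2), args=(X, y), method="BFGS")
print("Firth-type probit estimates:", fit.x)        # finite despite separation;
                                                    # the plain MLE would not converge
```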

  16. Numerical analysis of stratification and destratification processes in a tidally energetic inlet with an ebb tidal delta

    NARCIS (Netherlands)

    Purkiani, K.; Becherer, J.; Flöser, G.; Gräwe, U.; Mohrholz, V.; Schuttelaars, H.M.; Burchard, H.

    2015-01-01

    Stratification and destratification processes in a tidally energetic, weakly stratified inlet in the Wadden Sea (south eastern North Sea) are investigated in this modeling study. Observations of current velocity and vertical density structure show strain-induced periodic stratification for the

  17. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...
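
    The record above is truncated, but the variance partition coefficient it builds on has a standard latent-scale form for clustered binary data under a random-intercept logistic model; the sketch below uses that conventional form, which is an assumption here rather than a quotation of the paper's exact model.

```python
# Sketch (assumption): latent-scale VPC for a random-intercept logistic model,
# VPC = sigma_u^2 / (sigma_u^2 + pi^2 / 3).
import math

def vpc_latent_logistic(sigma_u2: float) -> float:
    """Share of the total latent variance attributable to the cluster (group) level."""
    return sigma_u2 / (sigma_u2 + math.pi ** 2 / 3)

# Groups with markedly different heterogeneity would receive separate sample size estimates.
for sigma_u2 in (0.2, 1.0, 3.0):
    print(f"sigma_u^2 = {sigma_u2:>3}: VPC = {vpc_latent_logistic(sigma_u2):.2f}")
```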

  18. Improving Clinical Risk Stratification at Diagnosis in Primary Prostate Cancer: A Prognostic Modelling Study.

    Directory of Open Access Journals (Sweden)

    Vincent J Gnanapragasam

    2016-08-01

    Full Text Available Over 80% of the nearly 1 million men diagnosed with prostate cancer annually worldwide present with localised or locally advanced non-metastatic disease. Risk stratification is the cornerstone for clinical decision making and treatment selection for these men. The most widely applied stratification systems use presenting prostate-specific antigen (PSA concentration, biopsy Gleason grade, and clinical stage to classify patients as low, intermediate, or high risk. There is, however, significant heterogeneity in outcomes within these standard groupings. The International Society of Urological Pathology (ISUP has recently adopted a prognosis-based pathological classification that has yet to be included within a risk stratification system. Here we developed and tested a new stratification system based on the number of individual risk factors and incorporating the new ISUP prognostic score.Diagnostic clinicopathological data from 10,139 men with non-metastatic prostate cancer were available for this study from the Public Health England National Cancer Registration Service Eastern Office. This cohort was divided into a training set (n = 6,026; 1,557 total deaths, with 462 from prostate cancer and a testing set (n = 4,113; 1,053 total deaths, with 327 from prostate cancer. The median follow-up was 6.9 y, and the primary outcome measure was prostate-cancer-specific mortality (PCSM. An external validation cohort (n = 1,706 was also used. Patients were first categorised as low, intermediate, or high risk using the current three-stratum stratification system endorsed by the National Institute for Health and Care Excellence (NICE guidelines. The variables used to define the groups (PSA concentration, Gleason grading, and clinical stage were then used to sub-stratify within each risk category by testing the individual and then combined number of risk factors. In addition, we incorporated the new ISUP prognostic score as a discriminator. Using this approach, a

  19. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure within a data series. Moreover, the article examines the alternative of adjusting the variance of a series while keeping its temporal structure in mind, as well as an adjustment to the covariance of two time series. Two examples are presented: the first corresponds to seven simulated series with a first-order autoregressive structure, and the second to seven meteorological series of anomalies of surface air temperature in two Colombian regions
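
    A standard formula for this idea, for a first-order autoregressive process, is n_eff = n(1 - ρ)/(1 + ρ), where ρ is the lag-1 autocorrelation; the sketch below applies it to a simulated AR(1) series. The formula and the simulation settings are illustrative assumptions, not quoted from the article.

```python
# Sketch (assumption): "equivalent sample size" of an AR(1) series,
# n_eff = n * (1 - rho) / (1 + rho), with rho the lag-1 autocorrelation.
import numpy as np

def equivalent_sample_size(series: np.ndarray) -> float:
    x = series - series.mean()
    rho = np.dot(x[:-1], x[1:]) / np.dot(x, x)   # lag-1 autocorrelation estimate
    return len(series) * (1 - rho) / (1 + rho)

rng = np.random.default_rng(1)
n, phi = 500, 0.6
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):                            # simulate a first-order autoregressive series
    x[t] = phi * x[t - 1] + noise[t]
print(f"n = {n}, equivalent sample size ≈ {equivalent_sample_size(x):.0f}")
```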

  20. Improving risk-stratification of Diabetes complications using temporal data mining.

    Science.gov (United States)

    Sacchi, Lucia; Dagliati, Arianna; Segagni, Daniele; Leporati, Paola; Chiovato, Luca; Bellazzi, Riccardo

    2015-01-01

    Understanding which factors trigger worsened disease control is a crucial step in Type 2 Diabetes (T2D) patient management. The MOSAIC project, funded by the European Commission under the FP7 program, has been designed to integrate heterogeneous data sources and provide decision support in chronic T2D management through patients' continuous stratification. In this work we show how temporal data mining can be fruitfully exploited to improve risk stratification. In particular, we exploit administrative data on drug purchases to divide patients into meaningful groups. The detection of drug consumption patterns allows stratifying the population on the basis of subjects' purchasing attitude. Merging these findings with clinical values indicates the relevance of the applied methods while showing significant differences between the identified groups. This extensive approach emphasizes the value of administrative data for identifying patterns able to explain clinical conditions.

  1. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Science.gov (United States)

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately
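
    On the transformed scale, the per-arm sample size for comparing two group means follows the usual normal-theory formula; the sketch below applies it with purely hypothetical values of the residual SD and of the detectable difference, so the numbers are illustrative rather than drawn from the TrialNet data.

```python
# Sketch (assumption): per-arm sample size for comparing mean transformed C-peptide AUC
# between treatment and control, using the standard normal-theory formula
#   n = 2 * (sigma * (z_{1-alpha/2} + z_{1-beta}) / delta)^2,
# where sigma and delta are the residual SD and detectable difference on the
# transformed (e.g. log(x+1)) scale. The numeric inputs below are hypothetical.
import math
from scipy.stats import norm

def n_per_arm(sigma: float, delta: float, alpha: float = 0.05, power: float = 0.90) -> int:
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sigma * z / delta) ** 2)

print(n_per_arm(sigma=0.45, delta=0.25))   # -> 69 per arm with these illustrative inputs
```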

  2. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to

  3. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
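
    A common large-sample approximation for this setting treats the variance of a log rate estimate under negative binomial variation as roughly (1/(λt) + κ)/n per group, with κ the dispersion and t the follow-up time. The sketch below uses that approximation with hypothetical inputs; it is illustrative only and is not the paper's exact formula, which additionally accounts for between-subject variation in follow-up time.

```python
# Sketch (assumption): approximate per-group sample size for a noninferiority comparison
# of two negative binomial event rates on the log rate-ratio scale,
#   n = (z_{1-alpha} + z_{1-beta})^2 * [ (1/(lam1*t)+kappa1) + (1/(lam0*t)+kappa0) ] / delta^2,
# where delta is the distance between the true log rate ratio and the NI margin.
import math
from scipy.stats import norm

def nb_noninferiority_n(lam0, lam1, kappa0, kappa1, t, margin, alpha=0.025, power=0.90):
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var_unit = (1 / (lam0 * t) + kappa0) + (1 / (lam1 * t) + kappa1)
    delta = math.log(lam1 / lam0) - math.log(margin)
    return math.ceil(z ** 2 * var_unit / delta ** 2)

# Hypothetical inputs: equal true rates 0.8/yr, dispersions 0.6, mean follow-up 1.5 years,
# noninferiority margin of 1.25 on the rate ratio.
print(nb_noninferiority_n(0.8, 0.8, 0.6, 0.6, 1.5, 1.25))
```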

  4. Analysis of agreement between cardiac risk stratification protocols applied to participants of a center for cardiac rehabilitation

    Directory of Open Access Journals (Sweden)

    Ana A. S. Santos

    2016-01-01

    Full Text Available ABSTRACT Background Cardiac risk stratification is related to the risk of the occurrence of events induced by exercise. Although several protocols exist to calculate risk stratification, studies indicating whether these protocols agree with one another are still lacking. Objective To evaluate the agreement between existing protocols for cardiac risk rating in cardiac patients. Method The records of 50 patients from a cardiac rehabilitation program were analyzed, from which the following information was extracted: age, sex, weight, height, clinical diagnosis, medical history, risk factors, associated diseases, and the results from the most recent laboratory and complementary tests performed. This information was used for risk stratification of the patients according to the protocols of the American College of Sports Medicine, the Brazilian Society of Cardiology, the American Heart Association, the protocol designed by Frederic J. Pashkow, the American Association of Cardiovascular and Pulmonary Rehabilitation, the Société Française de Cardiologie, and the Sociedad Española de Cardiología. Descriptive statistics were used to characterize the sample, and the agreement between the protocols was assessed using the Kappa coefficient, with a significance level of 5%. Results Of the 21 analyses of agreement between the protocols used for risk classification, 12 were significant, with nine classified as moderate and three as low; no agreements were classified as excellent. Different proportions were observed in each risk category, with significant differences between the protocols for all risk categories. Conclusion The agreements between the protocols were low to moderate, and the risk proportions differed between protocols.
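
    The pairwise agreement analysis described above can be reproduced with Cohen's kappa; the labels below are hypothetical example data, not the study's records, and the protocol pairing is only meant to show that seven protocols yield the 21 pairwise comparisons mentioned in the abstract.

```python
# Sketch: agreement between two risk-stratification protocols via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Hypothetical risk categories assigned to the same 10 patients by two protocols.
protocol_a = ["low", "moderate", "high", "low", "high", "moderate", "low", "high", "moderate", "low"]
protocol_b = ["low", "moderate", "moderate", "low", "high", "high", "low", "high", "moderate", "moderate"]

kappa = cohen_kappa_score(protocol_a, protocol_b)
print(f"kappa = {kappa:.2f}")   # repeated over all pairs of 7 protocols -> 21 agreement analyses
```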

  5. Noninvasive risk stratification for sudden death in asymptomatic patients with Wolff-Parkinson-White syndrome.

    Science.gov (United States)

    Novella, John; DeBiasi, Ralph M; Coplan, Neil L; Suri, Ranji; Keller, Seth

    2014-01-01

    Sudden cardiac death (SCD) as the first clinical manifestation of Wolff-Parkinson-White (WPW) syndrome is a well-documented, although rare occurrence. The incidence of SCD in patients with WPW ranges from 0% to 0.39% annually. Controversy exists regarding risk stratification for patients with preexcitation on surface electrocardiogram (ECG), particularly in those who are asymptomatic. This article focuses on the role of risk stratification using exercise and pharmacologic testing in patients with WPW pattern on ECG.

  6. Simulation of Thermal Stratification in BWR Suppression Pools with One Dimensional Modeling Method

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Ling Zou; Hongbin Zhang

    2014-01-01

    The suppression pool in a boiling water reactor (BWR) plant not only is the major heat sink within the containment system, but also provides the major emergency cooling water for the reactor core. In several accident scenarios, such as a loss-of-coolant accident and extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the Emergency Core Cooling System and Reactor Core Isolation Cooling System pumps that draw cooling water back to the core. Current safety analysis codes use zero dimensional (0-D) lumped parameter models to calculate the energy and mass balance in the pool; therefore, they have large uncertainties in the prediction of scenarios in which stratification and mixing are important. While three-dimensional (3-D) computational fluid dynamics (CFD) methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, resulting in a long simulation time. For mixing in stably stratified large enclosures, the BMIX++ code (Berkeley mechanistic MIXing code in C++) has been developed to implement a highly efficient analysis method for stratification where the ambient fluid volume is represented by one-dimensional (1-D) transient partial differential equations and substructures (such as free or wall jets) are modeled with 1-D integral models. This allows very large reductions in computational effort compared to multi-dimensional CFD modeling. One heat-up experiment performed at the Finland POOLEX facility, which was designed to study phenomena relevant to Nordic design BWR suppression pool including thermal stratification and mixing, is used for

  7. Thermal stratification in a hot water tank established by heat loss from the tank

    DEFF Research Database (Denmark)

    Fan, Jianhua; Furbo, Simon

    2012-01-01

    This paper presents numerical investigations of thermal stratification in a vertical cylindrical hot water tank established by standby heat loss from the tank. The transient fluid flow and heat transfer in the tank during cooling caused by standby heat loss are calculated by means of validated...... computational fluid dynamics (CFD) models. The measured heat loss coefficient for the different parts of the tank is used as input to the CFD model. Parametric studies are carried out using the validated models to investigate the influence on thermal stratification of the tank by the downward flow...... the heat loss from the tank sides will be distributed at different levels of the tank at different thermal conditions. The results show that 20–55% of the side heat loss drops to layers below in the part of the tank without the presence of thermal stratification. A heat loss removal factor is introduced...

  8. Thermal stratification in a hot water tank established by heat loss from the tank

    DEFF Research Database (Denmark)

    Fan, Jianhua; Furbo, Simon

    2009-01-01

    Results of experimental and numerical investigations of thermal stratification and natural convection in a vertical cylindrical hot water tank during standby periods are presented. The transient fluid flow and heat transfer in the tank during cooling caused by heat loss are investigated...... on the natural buoyancy resulting in downward flow along the tank side walls due to heat loss of the tank and the influence on thermal stratification of the tank by the downward flow and the corresponding upward flow in the central parts of the tank. Water temperatures at different levels of the tank...... by computational fluid dynamics (CFD) calculations and by thermal measurements. A tank with uniform temperatures and thermal stratification is studied. The distribution of the heat loss coefficient for the different parts of the tank is measured by tests and used as input to the CFD model. The investigations focus...

  9. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    Science.gov (United States)

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I) was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD_sampling in the same region as the RSD_analysis for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit

  10. Educational stratification in cultural participation: Cognitive competence or status motivation?

    NARCIS (Netherlands)

    Notten, N.; Bol, Th.; van de Werfhorst, H.G.; Ganzeboom, H.B.G.

    2015-01-01

    This article examines educational stratification in highbrow cultural participation. There are two contrasting explanations of why cultural participation is stratified. The status hypothesis predicts that people come to appreciate particular forms of art because it expresses their belonging to a

  11. Educational stratification in cultural participation: cognitive competence or status motivation?

    NARCIS (Netherlands)

    Notten, N.; Lancee, B.; van de Werfhorst, H.G.; Ganzeboom, H.B.G.

    2015-01-01

    This article examines educational stratification in highbrow cultural participation. There are two contrasting explanations of why cultural participation is stratified. The status hypothesis predicts that people come to appreciate particular forms of art because it expresses their belonging to a

  12. Viral lysis of marine microbes in relation to vertical stratification

    NARCIS (Netherlands)

    Mojica, K.D.A.

    2015-01-01

    Marine microorganisms represent the largest reservoir of living organic carbon in the ocean and collectively manage the pools and fluxes of nutrients and energy. Climate-induced increases in sea surface temperature and associated modifications to vertical stratification are affecting the structure

  13. Characterizing Urban Household Waste Generation and Metabolism Considering Community Stratification in a Rapid Urbanizing Area of China.

    Science.gov (United States)

    Xiao, Lishan; Lin, Tao; Chen, Shaohua; Zhang, Guoqin; Ye, Zhilong; Yu, Zhaowu

    2015-01-01

    The relationship between social stratification and municipal solid waste generation remains uncertain under current rapid urbanization. Based on a multi-object spatial sampling technique, we selected 191 households in a rapidly urbanizing area of Xiamen, China. The selected communities were classified into three types: work-unit, transitional, and commercial communities in the context of housing policy reform in China. Field survey data were used to characterize household waste generation patterns considering community stratification. Our results revealed a disparity in waste generation profiles among different households. The three community types differed with respect to family income, living area, religious affiliation, and homeowner occupation. Income, family structure, and lifestyle caused significant differences in waste generation among work-unit, transitional, and commercial communities, respectively. Urban waste generation patterns are expected to evolve due to accelerating urbanization and associated community transition. A multi-scale integrated analysis of societal and ecosystem metabolism approach was applied to waste metabolism linking it to particular socioeconomic conditions that influence material flows and their evolution. Waste metabolism, both pace and density, was highest for family structure driven patterns, followed by lifestyle and income driven. The results will guide community-specific management policies in rapidly urbanizing areas.

  14. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
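
    For reference, two of the estimators named above have simple closed forms; the sketch below implements Chao1 (from abundance counts, in its bias-corrected form) and the first-order jackknife (from a plots-by-species incidence matrix). These are the standard textbook formulas and are assumed here; the study may use other variants.

```python
# Sketch: standard forms of two nonparametric species richness estimators.
import numpy as np

def chao1(abundances):
    """Bias-corrected Chao1 = S_obs + f1*(f1-1) / (2*(f2+1)), from species abundance counts."""
    a = np.asarray(abundances)
    s_obs = np.sum(a > 0)
    f1 = np.sum(a == 1)            # singletons
    f2 = np.sum(a == 2)            # doubletons
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))

def jackknife1(incidence):
    """First-order jackknife = S_obs + q1*(m-1)/m, from a plots-x-species incidence matrix."""
    inc = np.asarray(incidence, dtype=bool)
    m = inc.shape[0]                                   # number of plots
    s_obs = np.sum(inc.any(axis=0))
    q1 = np.sum(inc.sum(axis=0) == 1)                  # species found in exactly one plot
    return s_obs + q1 * (m - 1) / m

print(chao1([5, 3, 1, 1, 2, 1, 8]))                    # hypothetical abundance data
print(jackknife1([[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1]]))
```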

  15. Inferences about Variance Components and Reliability-Generalizability Coefficients in the Absence of Random Sampling.

    Science.gov (United States)

    Kane, Michael

    2002-01-01

    Reviews the criticisms of sampling assumptions in generalizability theory (and in reliability theory) and examines the feasibility of using representative sampling, stratification, homogeneity assumptions, and replications to address these criticisms. Suggests some general outlines for the conduct of generalizability theory studies. (SLD)

  16. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
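
    The scalar ESS that the study generalises to tree topologies is conventionally computed from the chain's autocorrelation, ESS = N / (1 + 2 Σ ρ_k); the sketch below implements that standard scalar estimator on a simulated autocorrelated trace. It illustrates only the baseline quantity, not the authors' new topological method.

```python
# Sketch: conventional autocorrelation-based ESS for a scalar MCMC trace,
# ESS = N / (1 + 2 * sum_k rho_k), truncating the sum at the first non-positive rho.
import numpy as np

def effective_sample_size(trace):
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    rho_sum = 0.0
    for rho in acf[1:]:
        if rho <= 0:               # simple truncation rule
            break
        rho_sum += rho
    return n / (1 + 2 * rho_sum)

rng = np.random.default_rng(2)
n, phi = 5000, 0.9
noise = rng.normal(size=n)
chain = np.empty(n)
chain[0] = noise[0]
for t in range(1, n):              # simulate an autocorrelated (AR(1)) trace
    chain[t] = phi * chain[t - 1] + noise[t]
print(round(effective_sample_size(chain)))   # roughly n*(1-phi)/(1+phi) ≈ 263
```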

  17. Quantitative stratification of diffuse parenchymal lung diseases.

    Directory of Open Access Journals (Sweden)

    Sushravya Raghunath

    Full Text Available Diffuse parenchymal lung diseases (DPLDs) are characterized by widespread pathological changes within the pulmonary tissue that impair the elasticity and gas exchange properties of the lungs. Clinical-radiological diagnosis of these diseases remains challenging and their clinical course is characterized by variable disease progression. These challenges have hindered the introduction of robust objective biomarkers for patient-specific prediction based on specific phenotypes in clinical practice for patients with DPLD. Therefore, strategies facilitating individualized clinical management, staging and identification of specific phenotypes linked to clinical disease outcomes or therapeutic responses are urgently needed. A classification schema consistently reflecting the radiological, clinical (lung function and clinical outcomes) and pathological features of a disease represents a critical need in modern pulmonary medicine. Herein, we report a quantitative stratification paradigm to identify subsets of DPLD patients with characteristic radiologic patterns in an unsupervised manner and demonstrate significant correlation of these self-organized disease groups with clinically accepted surrogate endpoints. The proposed consistent and reproducible technique could potentially transform diagnostic staging, clinical management and prognostication of DPLD patients as well as facilitate patient selection for clinical trials beyond the ability of current radiological tools. In addition, the sequential quantitative stratification of the type and extent of parenchymal process may allow standardized and objective monitoring of disease, early assessment of treatment response and mortality prediction for DPLD patients.

  18. Quantitative Stratification of Diffuse Parenchymal Lung Diseases

    Science.gov (United States)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Maldonado, Fabien; Peikert, Tobias; Moua, Teng; Ryu, Jay H.; Bartholmai, Brian J.; Robb, Richard A.

    2014-01-01

    Diffuse parenchymal lung diseases (DPLDs) are characterized by widespread pathological changes within the pulmonary tissue that impair the elasticity and gas exchange properties of the lungs. Clinical-radiological diagnosis of these diseases remains challenging and their clinical course is characterized by variable disease progression. These challenges have hindered the introduction of robust objective biomarkers for patient-specific prediction based on specific phenotypes in clinical practice for patients with DPLD. Therefore, strategies facilitating individualized clinical management, staging and identification of specific phenotypes linked to clinical disease outcomes or therapeutic responses are urgently needed. A classification schema consistently reflecting the radiological, clinical (lung function and clinical outcomes) and pathological features of a disease represents a critical need in modern pulmonary medicine. Herein, we report a quantitative stratification paradigm to identify subsets of DPLD patients with characteristic radiologic patterns in an unsupervised manner and demonstrate significant correlation of these self-organized disease groups with clinically accepted surrogate endpoints. The proposed consistent and reproducible technique could potentially transform diagnostic staging, clinical management and prognostication of DPLD patients as well as facilitate patient selection for clinical trials beyond the ability of current radiological tools. In addition, the sequential quantitative stratification of the type and extent of parenchymal process may allow standardized and objective monitoring of disease, early assessment of treatment response and mortality prediction for DPLD patients. PMID:24676019

  19. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
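
    Two of the effect sizes compared above are easy to compute directly: Cohen's d and a nonparametric common-language effect size (the probability of superiority, P(X > Y) + 0.5 P(X = Y)). The sketch below is illustrative; the A_w estimator used in the paper may differ in details such as tie handling, and the simulated data are hypothetical.

```python
# Sketch: Cohen's d (pooled SD) and a nonparametric probability-of-superiority effect size.
import numpy as np

def cohens_d(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2)        # pooled-SD standardised mean difference

def prob_superiority(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    return greater + 0.5 * ties                        # P(X > Y) + 0.5 * P(X = Y)

rng = np.random.default_rng(3)
g1 = rng.exponential(scale=1.5, size=40)               # skewed, unequal-variance example data
g2 = rng.exponential(scale=1.0, size=60)
print(cohens_d(g1, g2), prob_superiority(g1, g2))
```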

  20. A Study on an Evaluation of PWR Piping Thermal Stratification

    Energy Technology Data Exchange (ETDEWEB)

    Lee, H.; Kim, B.N.; Lee, S.K.; Jeong, I.S.; Chjung, B.S.; Lee, S.H. [Korea Electric Power Research Institute, Taejeon (Korea, Republic of)

    1997-12-31

    This report presents the determination of the thermal stratification phenomenon in the pressurizer surge line of Kori unit No. 4. In this regard, the integrity of the related piping was evaluated by various stress analyses and a fatigue analysis. (author). 23 refs., 61 figs., 22 tabs.

  1. Clinical Studies in Risk Stratification & Therapy of Thoracic Aortic Disease

    NARCIS (Netherlands)

    Kamman, AV

    2017-01-01

    For this thesis we aimed to summarize outcomes and optimal treatment modality for thoracic aortic disease, discuss new imaging techniques and improve the use of current imaging modalities. Furthermore, we aimed to improve risk stratification for uncomplicated type B aortic dissection (TBAD) and

  2. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.

  3. Global cardiovascular risk stratification among hypertensive patients treated in a Family Health Unit of Parnaíba, Piauí - doi: 10.5020/18061230.2012.p287

    Directory of Open Access Journals (Sweden)

    Elce de Seixas Nascimento

    2012-11-01

    Full Text Available Objective: To stratify the global cardiovascular risk among hypertensive patients attended in a Family Health Unit (FHU). Methods: A quantitative, cross-sectional and descriptive study with a population of hypertensive patients undergoing treatment in a FHU, module 34, in Parnaíba, Piauí, Brazil, in the period from July to August 2011. The sample consisted of 45 volunteers, selected by free demand conglomerate, who filled a form with questions that support the analysis and Global Cardiovascular Risk stratification (GCR), according to the VI Brazilian Guidelines on Hypertension (VI BGH - 2010), The European Society of Cardiology (ESC) and European Society of Hypertension (ESH - 2007). The subjects were then submitted to measurement of blood pressure (BP), waist circumference (WC) and body mass index (BMI). Results: The most evident risk factor in the sample was overweight/obesity in 75.5% (n=34), followed by sedentary lifestyle in 73.3% (n=33) and hypercholesterolemia in 55.5% (n=25). The data collected resulted in a stratification in which 84.4% (n=38) presented high added risk and 15.5% (n=7) a very high added risk of presenting cardiovascular events in the next 10 years. Conclusion: The stratification in the population studied indicated a high incidence of such factors, pointing to the need of intervening in this population segment, in order to promote changes in lifestyle that generate prevention and control of cardiovascular diseases.

  4. Control Carbon to Prevent corium Stratification In-Vessel Retention

    Energy Technology Data Exchange (ETDEWEB)

    Go, A Ra; Hong, Seung Hyun; Kim, Sang Nyung [Kyung Hee Univ., Yongin (Korea, Republic of)

    2013-10-15

    As a result, the thermal margin decreases and the nuclear reactor vessel may fail. To control carbon, which is the major cause of stratification, ruthenium and hafnium are introduced into the lower reactor head, where they initiate a chemical reaction with carbon. The SPARTAN program is used to evaluate the reaction probability, expressed in terms of bond energy and bond strength. To analyse the possibility of bonding with carbon, the initial properties of ruthenium and carbon are evaluated for the calculated absorption process; on this basis, the SPARTAN program can determine whether the metal can be inserted. After verifying the ruthenium-carbon combination, the SPARTAN program analyses the effect of carbon on preventing corium stratification and determines the chance of success when the in-vessel retention (IVR) concept is introduced. Ruthenium is suitable for the carbon-bonding process, reducing the effect of carbon on corium behaviour so that stratification does not form. A metal that is to combine with carbon must withstand temperatures as high as 2800 deg. C. Further work will therefore use the SPARTAN program to calculate the carbon-ruthenium bonding energy and to check other bonding results; after checking the results, the theory of inserting ruthenium into the reactor vessel will be reviewed. For the APR1400 and OPR1000 Korea Hydro and Nuclear Power plants, a core meltdown accident is rated as a high-level severe accident. When the reactor core melts down, it stratifies into a metal layer and a ceramic layer. As the heat conductivity of the metal layer is higher than that of the ceramic layer, heat concentration occurs in the upper part of the bottom hemisphere which comes into contact with the metal layer.

  5. Water-Column Stratification Observed along an AUV-Tracked Isotherm

    Science.gov (United States)

    Zhang, Y.; Messié, M.; Ryan, J. P.; Kieft, B.; Stanway, M. J.; Hobson, B.; O'Reilly, T. C.; Raanan, B. Y.; Smith, J. M.; Chavez, F.

    2016-02-01

    Studies of marine physical, chemical and microbiological processes benefit from observing in a Lagrangian frame of reference, i.e. drifting with ambient water. Because these processes can be organized relative to specific density or temperature ranges, maintaining observing platforms within targeted environmental ranges is an important observing strategy. We have developed a novel method to enable a Tethys-class long-range autonomous underwater vehicle (AUV) (which has a propeller and a buoyancy engine) to track a target isotherm in buoyancy-controlled drift mode. In this mode, the vehicle shuts off its propeller and autonomously detects the isotherm and stays with it by actively controlling the vehicle's buoyancy. In the June 2015 CANON (Controlled, Agile, and Novel Observing Network) Experiment in Monterey Bay, California, AUV Makai tracked a target isotherm for 13 hours to study the coastal upwelling system. The tracked isotherm started from 33 m depth, shoaled to 10 m, and then deepened to 29 m. The thickness of the tracked isotherm layer (within 0.3°C error from the target temperature) increased over this duration, reflecting weakened stratification around the isotherm. During Makai's isotherm tracking, another long-range AUV, Daphne, acoustically tracked Makai on a circular yo-yo trajectory, measuring water-column profiles in Makai's vicinity. A wave glider also acoustically tracked Makai, providing sea surface measurements on the track. The presented method is a new approach for studying water-column stratification, but requires careful analysis of the temporal and spatial variations mingled in the vehicles' measurements. We will present a synthesis of the water column's stratification in relation to the upwelling conditions, based on the in situ measurements by the mobile platforms, as well as remote sensing and mooring data.

  6. Effect of stable stratification on dispersion within urban street canyons: A large-eddy simulation

    Science.gov (United States)

    Li, Xian-Xiang; Britter, Rex; Norford, Leslie K.

    2016-11-01

    This study employs a validated large-eddy simulation (LES) code with high tempo-spatial resolution to investigate the effect of a stably stratified roughness sublayer (RSL) on scalar transport within an urban street canyon. The major effect of stable stratification on the flow and turbulence inside the street canyon is that the flow slows down in both streamwise and vertical directions, a stagnant area near the street level emerges, and the vertical transport of momentum is weakened. Consequently, the transfer of heat between the street canyon and overlying atmosphere also gets weaker. The pollutant emitted from the street level 'pools' within the lower street canyon, and more pollutant accumulates within the street canyon with increasing stability. Under stable stratification, the dominant mechanism for pollutant transport within the street canyon has changed from ejections (flow carries high-concentration pollutant upward) to unorganized motions (flow carries high-concentration pollutant downward), which is responsible for the much lower dispersion efficiency under stable stratifications.

  7. Noninvasive risk stratification of lethal ventricular arrhythmias and sudden cardiac death after myocardial infarction

    Directory of Open Access Journals (Sweden)

    Kenji Yodogawa, MD

    2014-08-01

    Full Text Available Prediction of lethal ventricular arrhythmias leading to sudden cardiac death is one of the most important and challenging problems after myocardial infarction (MI). Identification of MI patients who are prone to ventricular tachyarrhythmias allows for an indication of implantable cardioverter-defibrillator placement. To date, noninvasive techniques such as microvolt T-wave alternans (MTWA), signal-averaged electrocardiography (SAECG), heart rate variability (HRV), and heart rate turbulence (HRT) have been developed for this purpose. MTWA is an indicator of repolarization abnormality and is currently the most promising risk-stratification tool for predicting malignant ventricular arrhythmias. Similarly, late potentials detected by SAECG are indices of depolarization abnormality and are useful in risk stratification. However, the role of SAECG is limited because of its low predictive accuracy. Abnormal HRV and HRT patterns reflect autonomic disturbances, which may increase the risk of lethal ventricular arrhythmias, but the existing evidence is insufficient. Further studies of noninvasive assessment may provide a new insight into risk stratification in post-MI patients.

  8. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
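
    The decision rule underlying an SPRT is Wald's pair of boundaries on the accumulated log-likelihood ratio; the sketch below shows that generic rule, including the "keep sampling" third group described above. The per-observation log-likelihood ratios are left as inputs rather than reproducing the authors' SNP-specific statistic, and the example values are hypothetical.

```python
# Sketch: generic Wald sequential probability ratio test (SPRT).
import math

def sprt(loglik_ratios, alpha=0.05, beta=0.05):
    """Accumulate log-likelihood-ratio terms until one of Wald's boundaries is crossed."""
    upper = math.log((1 - beta) / alpha)     # cross -> accept H1 (e.g. SNP associated)
    lower = math.log(beta / (1 - alpha))     # cross -> accept H0 (not associated)
    total = 0.0
    for i, llr in enumerate(loglik_ratios, start=1):
        total += llr
        if total >= upper:
            return "accept H1", i
        if total <= lower:
            return "accept H0", i
    return "keep sampling", len(loglik_ratios)   # the third group: not enough evidence yet

print(sprt([0.4, 0.3, 0.5, 0.6, 0.8, 0.7]))      # hypothetical log-LR contributions
```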

  9. Quantitative analysis of biogeochemically controlled density stratification in an iron-meromictic lake

    Science.gov (United States)

    Nixdorf, E.; Boehrer, B.

    2015-11-01

    Lake stratification controls the cycling of dissolved matter within the water body. This is of particular interest in the case of meromictic lakes, where permanent density stratification of the deep water limits vertical transport, and a chemically different (reducing) milieu can be established. As a consequence, the geochemical setting and the mixing regime of a lake can stabilize each other mutually. We attempt a quantitative approach to the contribution of chemical reactions sustaining the density stratification. As an example, we chose the prominent case of iron meromixis in Waldsee near Doebern, a small lake that originated from near-surface underground mining of lignite. From a data set covering 4 years of monthly measured electrical conductivity profiles, we calculated summed conductivity as a quantitative variable reflecting the amount of electro-active substances in the entire lake. Seasonal variations followed the changing of the chemocline height. Coinciding changes of electrical conductivities in the monimolimnion indicated that a considerable share of substances, precipitated by the advancing oxygenated epilimnion, re-dissolved in the remaining anoxic deep waters and contributed considerably to the density stratification. In addition, we designed a lab experiment, in which we removed iron compounds and organic material from monimolimnetic waters by introducing air bubbles. Precipitates could be identified by visual inspection. Eventually, the remaining solutes in the aerated water layer looked similar to mixolimnetic Waldsee water. Due to its reduced concentration of solutes, this water became less dense and remained floating on nearly unchanged monimolimnetic water. In conclusion, iron meromixis as seen in Waldsee did not require two different sources of incoming waters, but the inflow of iron-rich deep groundwater and the aeration through the lake surface were fully sufficient for the formation of iron meromixis.

  10. MULTIGENERATIONAL ASPECTS OF SOCIAL STRATIFICATION: ISSUES FOR FURTHER RESEARCH.

    Science.gov (United States)

    Mare, Robert D

    2014-03-01

    The articles in this special issue show the vitality and progress of research on multigenerational aspects of social mobility, stratification, and inequality. The effects of the characteristics and behavior of grandparents and other kin on the statuses, resources, and positions of their descendants are best viewed in a demographic context. Intergenerational effects work through both the intergenerational associations of socioeconomic characteristics and also differential fertility and mortality. A combined socioeconomic and demographic framework informs a research agenda which addresses the following issues: how generational effects combine with variation in age, period, and cohort within each generation; distinguishing causal relationships across generations from statistical associations; how multigenerational effects vary across socioeconomic hierarchies, including the possibility of stronger effects at the extreme top and bottom; distinguishing between endowments and investments in intergenerational effects; multigenerational effects on associated demographic behaviors and outcomes (especially fertility and mortality); optimal tradeoffs among diverse types of data on multigenerational processes; and the variability across time and place in how kin, education, and other institutions affect stratification.

  11. Stratification-induced order--disorder phase transitions in molecularly thin confined films

    International Nuclear Information System (INIS)

    Schoen, M.; Diestler, D.J.; Cushman, J.H.

    1994-01-01

    By means of grand canonical ensemble Monte Carlo simulations of a monatomic film confined between unstructured (i.e., molecularly smooth) rigidly fixed solid surfaces (i.e., walls), we investigate the mechanism of molecular stratification, i.e., the tendency of atoms to arrange themselves in layers parallel with the walls. Stratification is accompanied by a heretofore unnoticed order--disorder phase transition manifested as a maximum in density fluctuations at the transition point. The transition involves phases with different transverse packing characteristics, although the number of layers accommodated between the walls remains unchanged during the transition, which occurs periodically as the film thickens. However, with increasing thickness, an increasingly smaller proportion of the film is structurally affected by the transition. Thus, the associated maximum in density fluctuations diminishes rapidly with film thickness

  12. Viral lysis of marine microbes in relation to vertical stratification

    NARCIS (Netherlands)

    Mojica, K.D.A.

    2015-01-01

    The overall aim of this thesis is to investigate how changes in vertical stratification affect autotrophic and heterotrophic microbial communities along a meridional gradient in the Atlantic Ocean. The Northeast Atlantic Ocean is a key area in global ocean circulation and an important sink for

  13. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  14. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
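
    The core idea, choosing the sample size that maximises expected profit with success probability averaged over a prior on the true effect, can be sketched with a toy model. Everything below (the simple two-arm normal model, the prior, the revenue and cost figures) is hypothetical and is not the authors' model; it only illustrates the shape of the optimisation.

```python
# Sketch (toy model, all numbers hypothetical): choose the per-arm Phase III sample size n
# maximising expected profit = assurance(n) * net_revenue - cost_per_patient * 2n,
# where assurance is the prior-averaged probability of a significant trial.
import numpy as np
from scipy.stats import norm

def expected_profit(n, prior_mean=0.3, prior_sd=0.15, sigma=1.0,
                    alpha=0.025, net_revenue=500e6, cost_per_patient=50_000):
    effects = np.linspace(prior_mean - 4 * prior_sd, prior_mean + 4 * prior_sd, 400)
    weights = norm.pdf(effects, prior_mean, prior_sd)
    weights /= weights.sum()                           # discretised prior on the true effect
    se = sigma * np.sqrt(2 / n)                        # SE of the two-arm mean difference
    power = 1 - norm.cdf(norm.ppf(1 - alpha) - effects / se)
    p_success = np.sum(weights * power)                # Bayesian "assurance"
    return p_success * net_revenue - cost_per_patient * 2 * n

candidates = np.arange(50, 2001, 50)
best = max(candidates, key=expected_profit)
print("profit-maximising n per arm ≈", best)
```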

  15. Thermodynamic analysis for molten stratification test MASCA with ionic liquid U-Zr-Fe-O-B-C-FPs database

    International Nuclear Information System (INIS)

    Fukasawa, Masanori; Tamura, Shigeyuki

    2007-01-01

    The molten corium stratification tested in the OECD MASCA project was analyzed with our thermodynamic database and the database was verified to be effective for the stratification analysis. The MASCA test shows that the molten corium can be stratified with the metal layer under the oxide when sub-oxidized corium including iron was retained in the lower head of the reactor vessel. This stratification is caused by the increased density of the metal layer attributed to a transfer of uranium metal that was reduced from uranium oxide by zirconium. Thermodynamic equilibrium calculations with the database, which was developed for the corium U-Zr-Fe-O-B-C-FPs system using the ionic two-sublattice model for liquid, show quantitative agreements with the MASCA test, such as the composition of each layer, fission product (FP) partitioning between the layers and the B4C effect on the stratification. (author)

  16. Thermal stratification of sodium in the BN 600 reactor

    International Nuclear Information System (INIS)

    Obmelukhin, J.A.; Obukhov, P.I.; Rinejskij, A.A.; Sobolev, V.A.; Sherbakov, S.I.

    1983-01-01

    The signs of thermal stratification of sodium in the BN 600 reactor upper plenum, revealed by analysis of the standard temperature sensors' readings, are defined. The initial conditions for the existence of sodium layers at different temperatures are given. Two approaches to implementing on a computer the equations describing sodium motion in the upper plenum of the reactor are presented. (author)

  17. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

  18. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    International Nuclear Information System (INIS)

    Haugboel, Steven; Pinborg, Lars H.; Arfan, Haroon M.; Froekjaer, Vibe M.; Svarer, Claus; Knudsen, Gitte M.; Madsen, Jacob; Dyrby, Tim B.

    2007-01-01

    To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. In cortical brain regions with a high density of 5-HT2A receptors, the outcome parameter (binding potential, BP1) showed high reproducibility, with a median difference between the two group measurements of 6% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high receptor density is approximately 27, whereas for low receptor binding regions the required sample size is substantially higher. This study demonstrates that [18F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT2A receptor density. Moreover, partial volume correction considerably reduces the sample size required to detect regional changes between groups. (orig.)
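    The sample size quoted above is consistent with the standard two-sample formula; a hedged sketch follows, in which the variability figure is an assumed placeholder rather than a value taken from the study.

```python
import math
from scipy.stats import norm

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group sample size to detect a mean difference `delta` between two
    groups with outcome standard deviation `sd` (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Detecting a 20% difference in binding potential: with an assumed
# between-subject SD of about 26% of the mean, the formula gives ~27 per group.
print(n_per_group(delta=0.20, sd=0.26))
```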

  19. Calculation of Local Stress and Fatigue Resistance due to Thermal Stratification on Pressurized Surge Line Pipe

    Science.gov (United States)

    Bandriyana, B.; Utaja

    2010-06-01

    Thermal stratification introduces a thermal shock effect which results in local stress and fatigue problems that must be considered in the design of nuclear power plant components. Local stress and fatigue calculations were performed on the Pressurizer Surge Line piping system of the Pressurized Water Reactor of the Nuclear Power Plant. The analysis was done for operating temperatures between 177 and 343 °C and an operating pressure of 16 MPa (160 bar). The stagnant and transient conditions with two kinds of stratification model were evaluated by the two-dimensional finite element method using the ANSYS program. Evaluation of fatigue resistance is based on the maximum local stress using the ASME Code formula. A maximum stress of 427 MPa occurred at the upper side of the top half of the hot-fluid-pipe stratification model in the transient case. The fatigue resistance is evaluated for 500 operating cycles over a lifetime of 40 years, giving a usage factor of 0.64, which meets the design requirement for class 1 nuclear components. The out-surge transient was the most significant case for the localized effects due to thermal stratification.
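    For reference, the cumulative usage factor referred to above is, in ASME-style fatigue evaluation, a Miner's-rule sum of applied cycles over allowable cycles for each load case; a minimal sketch follows, where the allowable-cycle number is an assumption rather than a value from the design fatigue curve used in the paper.

```python
def cumulative_usage(applied: dict[float, int], allowable: dict[float, int]) -> float:
    """Miner's rule: usage factor U = sum over load cases of n_i / N_i."""
    return sum(n / allowable[s] for s, n in applied.items())

# Single governing load case: 500 design-life cycles at the peak stress level,
# with an assumed allowable cycle count of ~780 from the fatigue design curve.
print(cumulative_usage({427.0: 500}, {427.0: 780}))   # ~0.64 < 1.0, acceptable
```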

  20. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
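    The contrast drawn above, empirical quantiles bounded by the sample size versus parametric GPD quantiles, can be sketched as follows; scipy's maximum-likelihood fit stands in for the L-moment fit used in the paper, and the synthetic data are an assumption.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
# Synthetic rainfall intensities for one temperature bin (mm/h, assumed values).
sample = genpareto.rvs(c=0.1, scale=5.0, size=60, random_state=rng)

p = 0.999  # target non-exceedance probability, return period >> sample size

# Empirical estimate from the Weibull plotting position i/(n+1): with n = 60 the
# largest attainable position is 60/61 ~= 0.984, so the 0.999 quantile collapses
# onto (roughly) the sample maximum and is underestimated.
empirical = np.quantile(sample, p, method="weibull")

# Parametric estimate from a fitted GPD (MLE here; the study used L-moments).
c_hat, loc_hat, scale_hat = genpareto.fit(sample, floc=0.0)
parametric = genpareto.ppf(p, c_hat, loc=loc_hat, scale=scale_hat)

print(f"empirical: {empirical:.1f} mm/h, GPD: {parametric:.1f} mm/h")
```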

  1. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  2. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  3. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    International Nuclear Information System (INIS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-01-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers

  4. Absence of internal tidal beams due to non-uniform stratification

    NARCIS (Netherlands)

    Gerkema, T.; van Haren, H.

    2012-01-01

    A linear internal-tide generation model is applied to the Faeroe–Shetland Channel, using observed profiles of stratification. Several degrees of simplification are considered: 1) uniform, i.e. constant N; 2) vertically varying N(z); 3) the full N(x, z) and associated geostrophic background flows.

  5. Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions

    International Nuclear Information System (INIS)

    John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.

    2000-01-01

    Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM10) and ≤ 2.5 μm (PM2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will most likely be exceeded at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection x-ray fluorescence analysis (TXRF) was applied beside other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built where the particle size classes 10-2.5 μm and 2.5-1.0 μm were directly impacted on TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles, but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so-called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM10/PM2.5 cascade impactor was

  6. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
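    A rough Monte Carlo rendering of the first method's goal, choosing N so that the expected confidence-interval width for a reliability coefficient falls below a target, is sketched below. It uses coefficient alpha with a percentile-bootstrap interval as a stand-in for the paper's analytic composite-reliability procedure; the one-factor population model, loading, and target width are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def cronbach_alpha(x: np.ndarray) -> float:
    """Coefficient alpha for an n-by-k matrix of item scores."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def simulate_items(n: int, k: int, loading: float) -> np.ndarray:
    """One-factor data: item = loading * factor + noise (assumed population model)."""
    factor = rng.normal(size=(n, 1))
    noise = rng.normal(scale=np.sqrt(1 - loading ** 2), size=(n, k))
    return loading * factor + noise

def expected_ci_width(n: int, k: int = 8, loading: float = 0.6,
                      reps: int = 100, boots: int = 200) -> float:
    """Average width of a 95% percentile-bootstrap CI for alpha at sample size n."""
    widths = []
    for _ in range(reps):
        data = simulate_items(n, k, loading)
        stats = [cronbach_alpha(data[rng.integers(0, n, n)]) for _ in range(boots)]
        lo, hi = np.percentile(stats, [2.5, 97.5])
        widths.append(hi - lo)
    return float(np.mean(widths))

# Smallest N on a coarse grid with expected CI width no wider than 0.10.
for n in range(50, 501, 50):
    if expected_ci_width(n) <= 0.10:
        print("planned sample size:", n)
        break
```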

  7. Study of thermal stratification and mixing using PIV

    International Nuclear Information System (INIS)

    Yamaji, B.; Szijarto, R.; Aszodi, A.

    2010-01-01

    Paks Nuclear Power Plant uses the REMIX code for calculating coolant mixing when the high-pressure injection system is used while stagnating flow is present. Use of the code for Russian-type WWER-440 reactors requires a strictly conservative approach, and in several cases the accuracy and the reserves to safety margins cannot currently be determined. In order to quantify and improve these characteristics, experimental validation of the code is needed. An experimental program has been launched at the Institute of Nuclear Techniques with the aim of investigating thermal stratification processes and the mixing of plumes in simple geometries. By comparing and evaluating measurement and computational fluid dynamics results, computational models can be validated. For the experiments a simple hexahedral plexiglas tank (250 x 500 x 100 mm - H x L x D) was fabricated with five nozzles attached, which can be set up as inlets or outlets. With different inlet and outlet setups and temperature differences, thermal stratification and plume mixing may be investigated using Particle Image Velocimetry. In the paper, a comparison of Particle Image Velocimetry measurements carried out on the plexiglas tank and the results of simulations is presented. For the calculations the ANSYS CFX three-dimensional computational fluid dynamics code was used. (Authors)

  8. The influence of temperature stratification on the thermal performance of a dry cooling tower with natural draught

    International Nuclear Information System (INIS)

    Buxmann, J.

    1977-01-01

    The cooling effect of a cooling tower changes noticeably if the temperature stratification in its surroundings differs from the adiabatic temperature stratification. The design data that influence the heat rating and the total temperature difference at various temperature gradients in the air are investigated. (orig.) [de

  9. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  10. Qinshan phase II extension nuclear power project thermal stratification and fatigue stress analysis for pressurizer surge line

    International Nuclear Information System (INIS)

    Yu Xiaofei; Zhang Yixiong; Ai Honglei

    2010-01-01

    Thermal stratification in the pressurizer surge line, induced by the fluid inside, gives rise to global bending moments, local thermal stresses, unexpected displacements and support loadings of the pipe system. In order to avoid a costly three-dimensional computation, a combined 1D/2D technique has been developed and implemented in this paper to analyze the thermal stratification and fatigue stress of the pressurizer surge line of the Qinshan Phase II Extension Nuclear Power Project, using the computer codes SYSTUS and ROCOCO. From the mechanical analysis of stratification, the maximum stress and cumulative usage factor, the loads at the connections of the surge line to the main pipe and the RCP, and the displacements of the surge line at its supports are obtained. (authors)

  11. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
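    The reported link between sample size, power and positive predictive value follows the standard PPV relation for statistical findings (the formulation popularized by Button et al., 2013); a small illustration, in which the prior odds value is an assumption:

```python
def ppv(power: float, alpha: float = 0.05, prior_odds: float = 0.25) -> float:
    """PPV = power * R / (power * R + alpha), where R is the assumed prior
    odds that a tested effect is real."""
    return power * prior_odds / (power * prior_odds + alpha)

print(ppv(power=0.02))   # ~0.09: with ~2% sensitivity, most significant results are false
print(ppv(power=0.80))   # ~0.80: adequate power makes significant results trustworthy
```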

  12. Coherent structure in geostrophic flow under density stratification; Mippei seisoka ni aru chikoryu no soshiki kozo

    Energy Technology Data Exchange (ETDEWEB)

    Tsujimura, S.; Iida, O.; Nagano, Y. [Nagoya Institute of Technology, Nagoya (Japan)

    1998-10-25

    The coherent structure and relevant heat transport in geostrophic flows under various density stratification has been studied by using both direct numerical simulation and rapid distortion theory. It is found that in a neutrally stratified flow under system rotation, the temperature fluctuations become very close to two-dimensional and their variation is very small in the direction parallel to the axis of rotation. Under the stable stratification, the velocity and temperature fluctuations tend to oscillate with the Brunt-Vaisala frequency. Under the unstable stratification, on the other hand, vortex columns are formed in the direction parallel to the axis of rotation. However, the generation of the elongated vortex columns cannot be predicted by the rapid distortion theory. The non-linear term is required to generate these characteristic vortex columns. 11 refs., 18 figs., 1 tab.
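    For reference, the buoyancy (Brunt-Väisälä) frequency that sets the oscillation of the fluctuations under stable stratification is given below in the usual Boussinesq form:

```latex
% Brunt-Väisälä (buoyancy) frequency for a background density profile rho(z)
% (or, under the Boussinesq approximation, a background temperature gradient):
\[
N^2 = -\frac{g}{\rho_0}\frac{d\bar{\rho}}{dz}
    \;=\; g\,\beta\,\frac{d\bar{T}}{dz},
\]
% with g gravity, rho_0 a reference density and beta the thermal expansion
% coefficient; N^2 > 0 corresponds to the stably stratified case above.
```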

  13. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
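    The power calculation described above, a non-centrality parameter for the Pearson chi-square statistic computed from misclassification-perturbed genotype frequencies, can be sketched generically as follows; the genotype frequencies, error rates and the simple mixing model are assumptions for illustration, not the paper's exact derivation.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def misclassified_freqs(f_case, f_ctrl, theta, phi):
    """Genotype frequencies observed in the recorded case/control groups when a
    true affected is recorded as a control with probability theta and a true
    unaffected as a case with probability phi (equal-sized pools assumed)."""
    obs_case = (1 - theta) * f_case + phi * f_ctrl
    obs_ctrl = theta * f_case + (1 - phi) * f_ctrl
    return obs_case / obs_case.sum(), obs_ctrl / obs_ctrl.sum()

def chi2_power(f_case, f_ctrl, n_case, n_ctrl, alpha=0.05):
    """Asymptotic power of the Pearson test of independence for a case/control
    by genotype table, via the non-central chi-square distribution."""
    counts = np.vstack([n_case * f_case, n_ctrl * f_ctrl])
    expected = counts.sum(axis=1, keepdims=True) * counts.sum(axis=0) / counts.sum()
    ncp = ((counts - expected) ** 2 / expected).sum()
    df = (counts.shape[0] - 1) * (counts.shape[1] - 1)
    return ncx2.sf(chi2.ppf(1 - alpha, df), df, ncp)

f_case = np.array([0.30, 0.50, 0.20])   # assumed genotype frequencies in true cases
f_ctrl = np.array([0.40, 0.45, 0.15])   # assumed genotype frequencies in true controls
obs_case, obs_ctrl = misclassified_freqs(f_case, f_ctrl, theta=0.05, phi=0.05)
print(chi2_power(f_case, f_ctrl, 500, 500))      # power with perfect phenotyping
print(chi2_power(obs_case, obs_ctrl, 500, 500))  # power eroded by phenotype error
```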

  14. Perceptions of Risk Stratification Workflows in Primary Care

    Directory of Open Access Journals (Sweden)

    Rachel L. Ross

    2017-10-01

    Full Text Available Risk stratification (RS) in primary care is frequently used by policy-makers, payers, and health systems; the process requires risk assessment for adverse health outcomes across a population to assign patients into risk tiers and allow care management (CM) resources to be targeted effectively. Our objective was to understand the approach to and perception of RS in primary care practices. An online survey was developed, tested, and administered to 148 representatives of 37 primary care practices engaged in RS varying in size, location and ownership. The survey assessed practices’ approach to, perception of, and confidence in RS, and its effect on subsequent CM activities. We examined psychometric properties of the survey to determine validity and conducted chi-square analyses to determine the association between practice characteristics and confidence and agreement with risk scores. The survey yielded a 68% response rate (100 respondents). Overall, participants felt moderately confident in their risk scores (range 41–53.8%), and moderately to highly confident in their subsequent CM workflows (range 46–68%). Respondents from small and independent practices were more likely to have higher confidence and agreement with their RS approaches and scores (p < 0.01). Confidence levels were highest, however, when practices incorporated human review into their RS processes (p < 0.05). This trend was not affected by respondents’ professional roles. Additional work from a broad mixed-methods effort will add to our understanding of RS implementation processes and outcomes.

  15. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    Science.gov (United States)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in reinforced polymer composite samples of varying size was investigated. Square samples with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples in relation to their sizes and impact energy were analyzed. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, accounting for impact energy and sample size.
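    For reference, the plate-absorption solution of Fick's second law that such sorption data are commonly fitted to is given below; the paper's modified form, which additionally accounts for impact energy and sample size, is not reproduced here.

```latex
% Relative moisture uptake of a plate of thickness h exposed on both faces,
% after time t, for diffusivity D (Crank's solution of Fick's second law):
\[
\frac{M_t}{M_\infty}
  = 1 - \frac{8}{\pi^{2}}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^{2}}
    \exp\!\left(-\frac{(2n+1)^{2}\pi^{2} D\, t}{h^{2}}\right),
\]
% and the apparent diffusion coefficient is commonly estimated from the initial
% linear portion of the uptake curve plotted against sqrt(t):
\[
D \approx \pi \left(\frac{h}{4 M_\infty}\right)^{2}
          \left(\frac{M_{2}-M_{1}}{\sqrt{t_{2}}-\sqrt{t_{1}}}\right)^{2}.
\]
```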

  16. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms can be modified in an interim analysis. It is thereby assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing an increase in sample size only in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  17. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    Science.gov (United States)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges and, subsequently, how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5, PM2.5-1, PM1-0.2 and PM0.2) with a high-volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples at four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. Chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic, and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than during the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in differing toxicity of the PM

  18. 205_WS: Improving the Delivery of Primary Care Through Risk Stratification

    DEFF Research Database (Denmark)

    Kinder, Karen; Kristensen, Troels; Abrams, Chad

    Content: The workshop will open with an introductory presentation on the numerous applications of risk stratification within the integrated and primary care sectors. The workshop will then focus on individual sessions based on three applications: – Case Management. – Improving Coordination...

  19. Preconditioning of Antarctic maximum sea-ice extent by upper-ocean stratification on a seasonal timescale

    OpenAIRE

    Su, Zhan

    2017-01-01

    This study uses an observationally constrained and dynamically consistent ocean and sea-ice state estimate. The author presents a remarkable agreement between the location of the edge of the Antarctic maximum sea-ice extent, reached in September, and the narrow transition band in upper-ocean (0–100 m depth) stratification, as early as April to June. To the south of this edge, the upper ocean has high stratification, which prevents convective fluxes from crossing through; consequently, the ocean h...

  20. Simulation of containment atmosphere stratification experiment using local instantaneous description

    International Nuclear Information System (INIS)

    Babic, M.; Kljenak, I.

    2004-01-01

    An experiment on mixing and stratification in the atmosphere of a nuclear power plant containment at accident conditions was simulated with the CFD code CFX4.4. The original experiment was performed in the TOSQAN experimental facility. Simulated nonhomogeneous temperature, species concentration and velocity fields are compared to experimental results. (author)