WorldWideScience

Sample records for sample size selection

  1. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
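
    For concreteness, the factors listed above map onto the standard formula for estimating a proportion, n = z²p(1−p)/d², with a finite-population correction for small study populations. The following Python sketch is illustrative only; the function name and defaults are ours, not the article's:

    ```python
    import math
    from scipy.stats import norm

    def sample_size_proportion(p, d, confidence=0.95, population=None):
        """n needed to estimate a proportion p to within +/- d (margin of accuracy)."""
        z = norm.ppf(1 - (1 - confidence) / 2)     # e.g. 1.96 at 95% confidence
        n = (z ** 2) * p * (1 - p) / d ** 2
        if population is not None:                 # finite-population correction
            n = n / (1 + (n - 1) / population)
        return math.ceil(n)

    print(sample_size_proportion(0.5, 0.05))                    # 385: worst case p = 0.5
    print(sample_size_proportion(0.5, 0.05, population=2000))   # smaller population, smaller n
    ```

    Greater precision (smaller d) drives n up quadratically, matching the article's point that more precision requires a larger sample.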

  2. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  3. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF) selected according to its porosity both as a sampling substrate and as a particle size separator. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  4. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the

  5. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
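
    The core of DS is easy to see in code. The sketch below is our own illustration, not the authors' implementation: it contrasts crude Monte Carlo's i.i.d. uniforms with DS's deterministic, evenly spaced quantile levels taken in random order:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def crude_mc(dist, n):
        return dist.ppf(rng.uniform(size=n))      # i.i.d. uniforms -> sample values

    def descriptive_sampling(dist, n):
        u = (np.arange(n) + 0.5) / n              # deterministic stratum midpoints
        return dist.ppf(rng.permutation(u))       # random permutation of the levels

    # The DS sample reproduces the target distribution with far less run-to-run noise:
    print(crude_mc(norm, 100).mean(), descriptive_sampling(norm, 100).mean())
    ```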

  6. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies reflecting the uncertainty in our study ES^ estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
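
    As a rough reconstruction of the calculation described (with illustrative effect sizes, not the study's data), the post hoc n for each of ES^, ES^_L and ES^_U can be obtained with a standard power solver:

    ```python
    import math
    from statsmodels.stats.power import TTestPower

    solver = TTestPower()                 # one-sample t-test, as in the abstract
    for es in (0.62, 0.20, 0.95):         # stand-ins for ES^, ES^_L, ES^_U
        n = solver.solve_power(effect_size=es, alpha=0.05, power=0.80,
                               alternative='two-sided')
        print(f"ES = {es:.2f} -> n = {math.ceil(n)}")
    ```

    A small ES^_L blows the upper sample-size bound up quickly, which is why the interval around the 12-joint estimate (10-245) is so wide.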

  7. Uniform deposition of size-selected clusters using Lissajous scanning

    International Nuclear Information System (INIS)

    Beniya, Atsushi; Watanabe, Yoshihide; Hirata, Hirohito

    2016-01-01

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size found on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.

  8. Uniform deposition of size-selected clusters using Lissajous scanning

    Energy Technology Data Exchange (ETDEWEB)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp [Toyota Central R&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192 (Japan); Hirata, Hirohito [Toyota Motor Corporation, 1200 Mishuku, Susono, Shizuoka 410-1193 (Japan)

    2016-05-15

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size found on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.

  9. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required among the endpoints. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
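
    The correlation adjustment can be appreciated with a small simulation (ours; the paper derives the exact power function instead). For two endpoints that must jointly pass TOST, joint power under correlation ρ exceeds the naive product of marginal powers:

    ```python
    import numpy as np
    from scipy import stats

    def power_joint_tost(n, delta=0.02, sd=0.25, rho=0.6,
                         margin=np.log(1.25), alpha=0.05, nsim=5000, seed=1):
        """P(both endpoints pass TOST) for paired log-scale differences."""
        rng = np.random.default_rng(seed)
        cov = sd ** 2 * np.array([[1.0, rho], [rho, 1.0]])
        crit = stats.t.ppf(1 - alpha, n - 1)
        wins = 0
        for _ in range(nsim):
            x = rng.multivariate_normal([delta, delta], cov, size=n)
            ok = True
            for j in range(2):
                m = x[:, j].mean()
                se = x[:, j].std(ddof=1) / np.sqrt(n)
                ok &= (m + margin) / se > crit and (m - margin) / se < -crit
            wins += ok
        return wins / nsim

    print(power_joint_tost(n=24, rho=0.0))   # near the product of marginal powers
    print(power_joint_tost(n=24, rho=0.9))   # higher joint power: a smaller n suffices
    ```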

  10. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Perspective: Size selected clusters for catalysis and electrochemistry

    Science.gov (United States)

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; Vajda, Stefan

    2018-03-01

    Size-selected clusters containing a handful of atoms may possess novel catalytic properties different from those of nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where the addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization, and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions, and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modeling based on density functional theory sampling of local minima and energy barriers or ab initio molecular dynamics simulations is an integral part of this research, providing fundamental understanding of the catalytic processes at the atomic level and predicting new materials compositions which can be validated in experiments. Finally, we discuss approaches which aim at scaling up the production of well-defined clusters for use in real-world applications.

  12. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95% confidence interval as a percentage of the median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set the 95% CI lower than 10% of the median (CI95/med ≤ 10%). The sample size ensuring CI95/med ≤ 10% ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
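
    The survey's precision-versus-n question can be explored with a simple resampling sketch (ours; synthetic lognormal doses stand in for the real CTDIvol data):

    ```python
    import numpy as np

    def ci95_over_median(doses, n, nboot=2000, seed=0):
        """Width of the 95% CI of the sample mean, as a percentage of the median."""
        rng = np.random.default_rng(seed)
        means = [rng.choice(doses, size=n, replace=True).mean() for _ in range(nboot)]
        lo, hi = np.percentile(means, [2.5, 97.5])
        return (hi - lo) / np.median(doses) * 100

    doses = np.random.default_rng(1).lognormal(mean=2.0, sigma=0.5, size=5000)
    for n in (10, 20, 200, 900):
        print(n, round(ci95_over_median(doses, n), 1))   # precision improves ~ 1/sqrt(n)
    ```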

  13. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95% confidence interval as a percentage of the median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set the 95% CI lower than 10% of the median (CI95/med ≤ 10%). The sample size ensuring CI95/med ≤ 10% ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  14. Model catalysis by size-selected cluster deposition

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  15. Compressors selection and sizing

    CERN Document Server

    Brown, Royce N

    2005-01-01

    This practical reference provides in-depth information required to understand and properly estimate compressor capabilities and to select the proper designs. Engineers and students will gain a thorough understanding of compression principles, equipment, applications, selection, sizing, installation, and maintenance. The many examples clearly illustrate key aspects to help readers understand the "real world" of compressor technology. Compressors: Selection and Sizing, third edition is completely updated with new API standards. Additions requested by readers include a new section on di

  16. Synthesis and magnetic properties of size-selected CoPt nanoparticles

    International Nuclear Information System (INIS)

    Tournus, F.; Blanc, N.; Tamion, A.; Hillenkamp, M.; Dupuis, V.

    2011-01-01

    CoPt nanoparticles are widely studied, in particular for their potentially very high magnetic anisotropy. However, their magnetic properties can differ from the bulk ones and are expected to vary with particle size. In this paper, we report the synthesis and characterization of well-defined CoPt nanoparticle samples produced in ultrahigh vacuum conditions following a physical route: the mass-selected low energy cluster beam deposition technique. This approach relies on an electrostatic deviation of ionized clusters which allows us to easily adjust the particle size, independently of the deposited equivalent thickness (i.e. the surface or volume particle density in a sample). Diluted samples made of CoPt particles with different diameters, embedded in amorphous carbon, are studied by transmission electron microscopy and superconducting quantum interference device (SQUID) magnetometry, which gives access to the magnetic anisotropy energy distribution. We then compare the magnetic properties of two different particle sizes. The results are found to be consistent with an anisotropy constant (including its distribution) which does not evolve with particle size in the range considered. - Highlights: → Samples of mass-selected CoPt nanoparticles are synthesized by an original physical method. → The magnetic properties of two different particle sizes are compared. → The anisotropy constant (including its dispersion) does not evolve in the range considered. → These results illustrate some invariance properties of ZFC curves.

  17. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
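
    A hedged sketch of the calculation, using standard delta-method and design-effect approximations rather than necessarily the authors' exact formulas: the population size is N = M / P, and the survey n is chosen so the CI around N is acceptably narrow.

    ```python
    import math
    from scipy.stats import norm

    def multiplier_n(P, rel_error=0.25, deff=2.0, conf=0.95):
        """Survey n so the CI half-width of log(N), N = M/P, is ~log(1 + rel_error)."""
        z = norm.ppf(1 - (1 - conf) / 2)
        var_per_obs = (1 - P) / P            # delta method: Var(log N) ~ (1 - P)/(n P)
        n = z ** 2 * var_per_obs / math.log(1 + rel_error) ** 2
        return math.ceil(deff * n)

    print(multiplier_n(P=0.10))   # small P -> much larger n, as the abstract warns
    print(multiplier_n(P=0.40))
    ```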

  18. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
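
    The scheme is simple to implement. A minimal sketch (ours), with block sizes drawn at random from multiples of the number of arms:

    ```python
    import random

    def blocked_randomization(n, arms=("T", "C"), block_sizes=(2, 4, 6), seed=42):
        """Permuted-block allocation; each block's size is chosen at random."""
        rng = random.Random(seed)
        sequence = []
        while len(sequence) < n:
            size = rng.choice(block_sizes)             # unpredictable block boundary
            block = list(arms) * (size // len(arms))   # equal counts within the block
            rng.shuffle(block)
            sequence.extend(block)
        return sequence[:n]

    print(blocked_randomization(12))   # balanced arms, hard-to-guess sequence
    ```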

  19. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  20. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
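
    The fallacy is easy to reproduce (our illustration): with a large enough n, a trivial true effect of d = 0.05 yields an extreme p-value.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200_000
    a = rng.normal(0.00, 1, n)
    b = rng.normal(0.05, 1, n)   # true Cohen's d = 0.05: practically negligible
    t, p = stats.ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"p = {p:.1e}, observed d = {d:.3f}")   # extreme p, trivial effect
    ```

    Reporting d alongside p, as the article recommends, makes the triviality of the effect visible.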

  1. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....

  2. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the size of bi-weekly samples of records reviewed with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to determine whether the optimal sample size should be adjusted to hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  3. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
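
    For a two-group comparison of means, the concepts above reduce to the standard formula n per group = 2(z_{1−α/2} + z_{1−β})²σ²/Δ². A short sketch with our own illustrative numbers:

    ```python
    import math
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        """Per-group n to detect a mean difference delta with given power."""
        z_a = norm.ppf(1 - alpha / 2)      # type I error criterion
        z_b = norm.ppf(power)              # type II error criterion (power)
        return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

    print(n_per_group(delta=5, sigma=10))   # 63 per group for these assumptions
    ```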

  4. 40 CFR 205.57-2 - Test vehicle sample selection.

    Science.gov (United States)

    2010-07-01

    ... pursuant to a test request in accordance with this subpart will be selected in the manner specified in the... then using a table of random numbers to select the number of vehicles as specified in paragraph (c) of... with the designated AQL are contained in Appendix I, Table II. (c) The appropriate batch sample size...

  5. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  6. The effect of size-selective samplers (cyclones) on XRD response

    CSIR Research Space (South Africa)

    Pretorius, CJ

    2011-07-01

    The study evaluated five size-selective samplers used in the South African mining industry to determine how their performance affects the X-ray powder diffraction (XRD) response when respirable dust samples are analysed for quartz using direct...

  7. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  8. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect of many applications in the process industry. Size distribution is often related to final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as of reaction rates that depend on the interfacial area between the different phases, or to the assessment of yield stresses of polycrystalline metal/alloy samples. The experimental determination of such distributions often involves laborious sampling procedures, and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)

  9. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
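
    The PPQ sample sizes quoted above follow from the success-run theorem, n = ln(1 − confidence)/ln(reliability), at 95% confidence with zero allowed failures. A short check (ours) reproduces the 299/59/29 figures:

    ```python
    import math

    def success_run_n(reliability, confidence=0.95):
        """Zero-failure n giving the stated confidence in the reliability level."""
        return math.ceil(math.log(1 - confidence) / math.log(reliability))

    for risk, r in (("high", 0.99), ("medium", 0.95), ("low", 0.90)):
        print(f"{risk}-risk (reliability {r:.2f}): n = {success_run_n(r)}")
    # -> 299, 59, 29, matching the paper
    ```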

  10. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  11. Size-selective separation of polydisperse gold nanoparticles in supercritical ethane.

    Science.gov (United States)

    Williams, Dylan P; Satherley, John

    2009-04-09

    The aim of this study was to use supercritical ethane to selectively disperse alkanethiol-stabilized gold nanoparticles of one size from a polydisperse sample in order to recover a monodisperse fraction of the nanoparticles. A disperse sample of metal nanoparticles with diameters in the range of 1-5 nm was prepared using established techniques and then further purified by Soxhlet extraction. The purified sample was subjected to supercritical ethane at a temperature of 318 K in the pressure range 50-276 bar. Particles were characterized by UV-vis absorption spectroscopy, TEM, and MALDI-TOF mass spectrometry. The results show that the dispersibility of the nanoparticles increases with increasing pressure; this effect is most pronounced for smaller nanoparticles. At the highest pressure investigated, a sample of the particles was effectively stripped of all the smaller particles, leaving a monodisperse sample. The relationship between dispersibility and supercritical fluid density for two different size samples of alkanethiol-stabilized gold nanoparticles was considered using the Chrastil chemical equilibrium model.

  12. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  13. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative, and a useful complement, to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
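
    A commonly used approximation for this adjustment (not necessarily the authors' exact noncentrality-based measure) inflates the equal-cluster sample size by a design effect that grows with the coefficient of variation (CV) of cluster sizes:

    ```python
    import math

    def deff_unequal(m_bar, cv, icc):
        """Design effect for mean cluster size m_bar with cluster-size CV and ICC."""
        return 1 + ((cv ** 2 + 1) * m_bar - 1) * icc

    def total_n(n_independent, m_bar, cv, icc):
        return math.ceil(n_independent * deff_unequal(m_bar, cv, icc))

    print(total_n(128, m_bar=20, cv=0.0, icc=0.05))   # equal cluster sizes: 250
    print(total_n(128, m_bar=20, cv=0.6, icc=0.05))   # variable sizes: 296
    ```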

  14. U-Mo Alloy Powder Obtained Through Selective Hydriding. Particle Size Control

    International Nuclear Information System (INIS)

    Balart, S.N.; Bruzzoni, P.; Granovsky, M.S.

    2002-01-01

    Hydride-dehydride methods to obtain U-Mo alloy powder for high-density fuel elements have been successfully tested by different authors. One of these methods is the selective hydriding of the α phase (HSα). In the HSα method, a key step is the partial decomposition of the γ phase (retained by quenching) into α phase and an enriched γ phase or U2Mo. This transformation starts mainly at grain boundaries. Subsequent hydrogenation of this material leads to selective hydriding of the α phase, embrittlement and intergranular fracture. According to this picture, the particle size of the final product should be related to the γ grain size of the starting alloy. The feasibility of controlling the particle size of the product by changing the γ grain size of the starting alloy is currently being investigated. In this work, a U-7 wt% Mo alloy was subjected to various heat treatments in order to obtain different grain sizes. The results on the powder particle size distribution after applying the HSα method to these samples show that there is a strong correlation between the original γ grain size and the particle size distribution of the powder. (author)

  15. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
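
    Under our reading of the method, the allowable error divided by the COV acts as a one-sample effect size, and n follows from a two-tailed t-test power solver; with COV = 25% and ±15% error this reproduces the paper's n = 24:

    ```python
    import math
    from statsmodels.stats.power import TTestPower

    def n_for_ed50(cov=0.25, allowable_error=0.15, power=0.80, alpha=0.05):
        d = allowable_error / cov                      # standardized effect size
        n = TTestPower().solve_power(effect_size=d, alpha=alpha, power=power,
                                     alternative='two-sided')
        return math.ceil(n)

    print(n_for_ed50(allowable_error=0.15))   # 24
    print(n_for_ed50(allowable_error=0.12))   # larger n for tighter error (~37)
    ```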

  16. Scale-Dependent Habitat Selection and Size-Based Dominance in Adult Male American Alligators.

    Directory of Open Access Journals (Sweden)

    Bradley A Strickland

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their

  17. Scale-dependent habitat selection and size-based dominance in adult male American alligators

    Science.gov (United States)

    Strickland, Bradley A.; Vilella, Francisco; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance

  18. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
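
    The rule of thumb rests on the discovery model P(detected) = 1 − (1 − p)^n, where p is the per-user probability of encountering a problem; the 5-user claim corresponds to the commonly cited p ≈ 0.31. A short sketch (ours) shows how quickly the required n grows as p falls:

    ```python
    def detection_probability(p, n):
        """Chance a problem with per-user detection rate p is seen at least once."""
        return 1 - (1 - p) ** n

    for p in (0.31, 0.15, 0.05):
        users = next(n for n in range(1, 500) if detection_probability(p, n) >= 0.80)
        print(f"p = {p:.2f}: {users} users for 80% detection")
    # p = 0.31 gives 5 users; rarer problems need far larger samples
    ```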

  19. Evaluation of pump pulsation in respirable size-selective sampling: Part III. Investigation of European standard methods.

    Science.gov (United States)

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin

    2014-10-01

    Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min(-1) for the medium- and 4.4, 10, and 11.2 l min(-1) for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods

  20. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  1. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended in practice because of its lower computational load compared with the bootstrapping method. An R package has been developed for sample size determination using the distribution of the product method in longitudinal mediation study design.
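
    In the same spirit, a minimal single-level version of the simulation approach (ours; the article treats the multilevel model) estimates Sobel-test power at a given n and scans upward until 80% is reached:

    ```python
    import numpy as np

    def ols(X, y):
        """Slopes and standard errors for y ~ X (intercept included)."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        sigma2 = resid @ resid / (len(y) - X1.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X1.T @ X1)))
        return beta[1:], se[1:]

    def sobel_power(n, a=0.3, b=0.3, nsim=1000, seed=0):
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(nsim):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)                # mediator model
            y = b * m + 0.1 * x + rng.normal(size=n)      # outcome model
            (a_hat,), (sa,) = ols(x[:, None], m)
            (b_hat, _), (sb, _) = ols(np.column_stack([m, x]), y)
            z = a_hat * b_hat / np.sqrt(a_hat**2 * sb**2 + b_hat**2 * sa**2)
            hits += abs(z) > 1.96
        return hits / nsim

    for n in (100, 200, 400):
        print(n, sobel_power(n))    # find the smallest n with power >= 0.80
    ```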

  2. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  3. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    40 CFR (Protection of Environment), vol. 16, revised as of 2010-07-01: § 80.127 Sample size guidelines. Environmental Protection Agency (continued), Air Programs (continued), Regulation of Fuels and Fuel Additives, Attest Engagements. In performing the...

  4. Effect of field view size and lighting on unique-hue selection using Natural Color System object colors.

    Science.gov (United States)

    Shamey, Renzo; Zubair, Muhammad; Cheema, Hammad

    2015-08-01

    The aim of this study was twofold: first, to determine the effect of field of view size, and second, the effect of illumination conditions on the selection of unique hue samples (UHs: R, Y, G, and B) from two rotatable trays, each containing forty highly chromatic Natural Color System (NCS) samples, corresponding on one tray to a 1.4° and on the other to a 5.7° field of view. UH selections were made by 25 color-normal observers who repeated assessments three times with a gap of at least 24 h between trials. Observers separately assessed UHs under four illumination conditions simulating illuminants D65, A, F2, and F11. An apparent hue shift (statistically significant for UR) was noted for UH selections at the 5.7° field of view compared to those at 1.4°. Observers' overall variability was found to be higher for UH stimuli selections at the larger field of view. Intra-observer variability was approximately 18.7% of inter-observer variability in the selection of samples for both sample sizes. The highest intra-observer variability was under simulated illuminant D65, followed by A, F11, and F2. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Estimation of the ancestral effective population sizes of African great apes under different selection regimes.

    Science.gov (United States)

    Schrago, Carlos G

    2014-08-01

    Reliable estimates of ancestral effective population sizes are necessary to unveil the population-level phenomena that shaped the phylogeny and molecular evolution of the African great apes. Although several methods have previously been applied to infer ancestral effective population sizes, an analysis of the influence of the selective regime on the estimates of ancestral demography has not been thoroughly conducted. In this study, three independent data sets under different selective regimes were composed to tackle this issue. The results showed that selection had a significant impact on the estimates of ancestral effective population sizes of the African great apes. The effects, however, were not homogeneous along the ancestral populations of great apes: the effective population size of the ancestor of humans and chimpanzees was more affected by the selection regime than the same parameter in the ancestor of humans, chimpanzees, and gorillas. Because the selection regime influenced the estimates of ancestral effective population size, it is reasonable to assume that a portion of the discrepancy found in previous studies that inferred the ancestral effective population size may be attributable to the differential action of selection on the genes sampled.

  6. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Fluctuating survival selection explains variation in avian group size.

    Science.gov (United States)

    Brown, Charles R; Brown, Mary Bomberger; Roche, Erin A; O'Brien, Valerie A; Page, Catherine E

    2016-05-03

    Most animal groups vary extensively in size. Because individuals in certain sizes of groups often have higher apparent fitness than those in other groups, why wide group size variation persists in most populations remains unexplained. We used a 30-y mark-recapture study of colonially breeding cliff swallows (Petrochelidon pyrrhonota) to show that the survival advantages of different colony sizes fluctuated among years. Colony size was under both stabilizing and directional selection in different years, and reversals in the sign of directional selection regularly occurred. Directional selection was predicted in part by drought conditions: birds in larger colonies tended to be favored in cooler and wetter years, and birds in smaller colonies in hotter and drier years. Oscillating selection on colony size likely reflected annual differences in food availability and the consequent importance of information transfer, and/or the level of ectoparasitism, with the net benefit of sociality varying under these different conditions. Averaged across years, there was no net directional change in selection on colony size. The wide range in cliff swallow group size is probably maintained by fluctuating survival selection and represents the first case, to our knowledge, in which fitness advantages of different group sizes regularly oscillate over time in a natural vertebrate population.

  8. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  9. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size lets us judge whether the results published in medical papers come from a suitable design and support the stated conclusions according to the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which mathematical formula to use, we must define the type of study: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic statistical topics and describe four simple examples of sample size estimation.
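
    For the prevalence-study case mentioned above, a minimal sketch of the usual normal-approximation formula (our illustration, not the paper's own code):

        from math import ceil
        from scipy.stats import norm

        def n_for_proportion(p: float, d: float, alpha: float = 0.05) -> int:
            """Normal-approximation n for estimating a proportion p within +/- d."""
            z = norm.ppf(1 - alpha / 2)
            return ceil(z**2 * p * (1 - p) / d**2)

        # e.g., expected prevalence 20%, margin of error +/- 5 points, 95% confidence:
        print(n_for_proportion(p=0.20, d=0.05))  # -> 246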

  10. Random selection of items. Selection of n1 samples among N items composing a stratum

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1987-02-01

    STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
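
    A minimal sketch of the selection step described, i.e., drawing n1 items at random (without replacement) from the N items of a stratum; the item labels, sizes, and seed are illustrative, not taken from the report:

        import random

        def select_items(stratum_items, n1, seed=None):
            """Randomly select n1 of the N items in a stratum for verification."""
            rng = random.Random(seed)
            return rng.sample(stratum_items, n1)  # sampling without replacement

        stratum = [f"item-{i:03d}" for i in range(1, 101)]  # N = 100
        print(select_items(stratum, n1=12, seed=42))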

  11. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental designs, costs, and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step, particularly because of the multiple hypothesis-testing framework and the top-down, hypothesis-free approach with no a priori known metabolic target. Until now, no standard procedure was available for this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is implemented in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
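
    As an illustration of the general recipe only (not the published DSD implementation, which uses the statistical recoupling of variables and kernel or log-normal fits), a simplified simulation in the same spirit for the biomarker-discovery criterion might look like this; all pilot parameters and effect patterns below are synthetic:

        import numpy as np
        from scipy.stats import ttest_ind
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)
        n_vars, shifted = 50, 5              # 50 metabolic variables, 5 truly shifted
        mu = rng.uniform(5, 10, n_vars)      # pilot-estimated means
        sd = rng.uniform(0.5, 1.0, n_vars)   # pilot-estimated SDs
        effect = np.zeros(n_vars)
        effect[:shifted] = 0.8 * sd[:shifted]

        def power_at_n(n, n_sim=200):
            """Fraction of simulations with >= 1 BY-corrected discovery at size n."""
            hits = 0
            for _ in range(n_sim):
                g1 = rng.normal(mu, sd, (n, n_vars))
                g2 = rng.normal(mu + effect, sd, (n, n_vars))
                p = ttest_ind(g1, g2, axis=0).pvalue
                reject = multipletests(p, alpha=0.05, method="fdr_by")[0]
                hits += reject[:shifted].any()   # biomarker-discovery criterion
            return hits / n_sim

        for n in (10, 20, 40, 80):
            print(n, power_at_n(n))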

  12. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  13. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clinical…

  14. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
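
    The simplest Bayesian special case alluded to (all n sampled items acceptable, a uniform prior on the unacceptable fraction, and no risk grouping) has a closed form: the posterior is Beta(1, n + 1), so P(unacceptable fraction <= q | data) = 1 − (1 − q)^(n + 1). A small sketch of the resulting sample-size rule (our reconstruction of the special case, not the paper's grouped judgmental model):

        from math import ceil, log

        def min_n_all_acceptable(q: float, confidence: float) -> int:
            """Smallest n with P(unacceptable fraction <= q) >= confidence,
            given a uniform prior and n acceptable observations."""
            return ceil(log(1 - confidence) / log(1 - q) - 1)

        # e.g., 95% confidence that at most 5% of items are unacceptable:
        print(min_n_all_acceptable(q=0.05, confidence=0.95))  # -> 58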

  15. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
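
    A minimal sketch of the core idea: fit an inverse power law learning curve by weighted nonlinear least squares on early points, then extrapolate performance to larger sample sizes. The data, weights, and parameterization y(n) = a − b·n^(−c) are illustrative; the paper's exact model details may differ:

        import numpy as np
        from scipy.optimize import curve_fit

        def inv_power(n, a, b, c):
            return a - b * n**(-c)          # asymptote a, decaying gap b * n^-c

        n_obs = np.array([50, 100, 200, 400, 800], dtype=float)
        acc = np.array([0.71, 0.78, 0.83, 0.86, 0.88])   # observed accuracies
        sigma = 1.0 / np.sqrt(n_obs)        # weight larger-n points more heavily

        popt, _ = curve_fit(inv_power, n_obs, acc, p0=(0.95, 1.0, 0.5),
                            sigma=sigma, maxfev=10000)
        for n in (1600, 3200):              # extrapolate to larger annotation sizes
            print(n, round(inv_power(n, *popt), 3))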

  16. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research, and an appropriate estimate based on known information and statistical knowledge is of great significance. This article introduces methods for estimating the sample size of difference tests for quantitative and qualitative data in designs with one factor at two levels, including the sample size estimation formulas, their realization based on the formulas and on the POWER procedure of SAS software. In addition, this article presents examples for analysis, which will help researchers implement the repetition principle during the research design phase.
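
    For the quantitative-data case, the familiar normal-approximation formula underlying such calculations, n per group = 2(z_{1−α/2} + z_{1−β})²σ²/δ², can be sketched as follows (illustrative values; the article itself works through SAS's POWER procedure):

        from math import ceil
        from scipy.stats import norm

        def n_per_group(delta: float, sigma: float, alpha=0.05, power=0.80) -> int:
            """Two-group mean comparison: n per group for detecting difference delta."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return ceil(2 * (z_a + z_b)**2 * sigma**2 / delta**2)

        print(n_per_group(delta=5.0, sigma=10.0))  # -> 63 per group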

  17. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as the example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) in an initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determining the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important for the cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002, and III-43007]

  18. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  19. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.

  20. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre (Petten, The Netherlands) supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive, so there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is thus the appropriate sample size…
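
    A minimal sketch of one standard way such POD curves are estimated, hit/miss logistic regression on log flaw size, from which quantities like the 90%-POD size a90 can be read off. The data are synthetic; the ENIQ/JRC study addresses how many observations such a fit needs, not this particular model:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        a = rng.uniform(0.5, 8.0, 60)                  # flaw sizes, mm
        true_pod = 1 / (1 + np.exp(-(np.log(a) - np.log(2.0)) / 0.3))
        hit = rng.binomial(1, true_pod)                # hit/miss inspection outcomes

        X = sm.add_constant(np.log(a))
        fit = sm.Logit(hit, X).fit(disp=0)             # logit(POD) = b0 + b1 * ln(a)
        b0, b1 = fit.params
        a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)    # size with estimated POD = 90%
        print(f"estimated a90 = {a90:.2f} mm")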

  1. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted over two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m², and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height, and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed by the Kolmogorov-Smirnov test. Randomness was confirmed using the run test, and descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1 − α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods, and between seasons. Therefore, to assess the traits with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle, covering the different evaluation periods and both seasons.

  2. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscope (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form must first be broken down to nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering, and zeta potential measurements have been utilized to optimize the sample preparation procedure and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information, and save considerable time and effort, in selecting a suitable substrate on which particles of different properties can be attracted and kept without further aggregation. This simple, low-cost methodology can be generally applied to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  3. Unravelling anisogamy: egg size and ejaculate size mediate selection on morphology in free-swimming sperm.

    Science.gov (United States)

    Monro, Keyne; Marshall, Dustin J

    2016-07-13

    Gamete dimorphism (anisogamy) defines the sexes in most multicellular organisms. Theoretical explanations for its maintenance usually emphasize the size-related selection pressures of sperm competition and zygote survival, assuming that fertilization of all eggs precludes selection for phenotypes that enhance fertility. In external fertilizers, however, fertilization is often incomplete due to sperm limitation, and the risk of polyspermy weakens the advantage of high sperm numbers that is predicted to limit sperm size, allowing alternative selection pressures to target free-swimming sperm. We asked whether egg size and ejaculate size mediate selection on the free-swimming sperm of Galeolaria caespitosa, a marine tubeworm with external fertilization, by comparing relationships between sperm morphology and male fertility across manipulations of egg size and sperm density. Our results suggest that selection pressures exerted by these factors may aid the maintenance of anisogamy in external fertilizers by limiting the adaptive value of larger sperm in the absence of competition. In doing so, our study offers a more complete explanation for the stability of anisogamy across the range of sperm environments typical of this mating system and identifies new potential for the sexes to coevolve via mutual selection pressures exerted by gametes at fertilization. © 2016 The Author(s).

  4. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  5. Selection dramatically reduces effective population size in HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Mittler John E

    2008-05-01

    Background: In HIV-1 evolution, a 100–100,000 fold discrepancy between census size and effective population size (Ne) has been noted. Although it is well known that selection can reduce Ne, high in vivo mutation and recombination rates complicate attempts to quantify the effects of selection on HIV-1 effective size. Results: We use the inbreeding coefficient and the variance in allele frequency at a linked neutral locus to estimate the reduction in Ne due to selection in the presence of mutation and recombination. With biologically realistic mutation rates, the reduction in Ne due to selection is determined by the strength of selection, i.e., the stronger the selection, the greater the reduction. However, the dependence of Ne on selection can break down if recombination rates are very high (e.g., r ≥ 0.1). With biologically likely recombination rates, our model suggests that recurrent selective sweeps similar to those observed in vivo can reduce within-host HIV-1 effective population sizes by a factor of 300 or more. Conclusion: Although other factors, such as unequal viral reproduction rates and limited migration between tissue compartments, contribute to reductions in Ne, our model suggests that recurrent selection plays a significant role in reducing HIV-1 effective population sizes in vivo.

  6. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data does not rest on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  7. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO2-powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  8. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    Science.gov (United States)

    Lackey, Leila G; Stein, Eric D

    2014-04-01

    Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed that smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.

  9. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
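
    A small sketch of how these components interlock in the simplest case (two independent groups, continuous outcome): fixing the effect size, alpha, and power determines the sample size. The numbers are illustrative:

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        n = analysis.solve_power(effect_size=0.5,   # Cohen's d (magnitude of effect)
                                 alpha=0.05,        # significance criterion
                                 power=0.80,        # desired statistical power
                                 alternative="two-sided")
        print(f"required n per group: {n:.1f}")     # about 64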

  10. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
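
    A sketch of the described device for the logistic regression case, under our simplifying assumptions: the logits of the two pseudo-groups differ by the slope times twice the SD of the covariate, the centre is chosen so the overall event rate is unchanged, and a standard two-proportion formula is then applied (the paper's exact variance choices may differ):

        from math import ceil, exp
        from scipy.optimize import brentq
        from scipy.stats import norm

        def expit(x):
            return 1.0 / (1.0 + exp(-x))

        def total_n_logistic(beta, sd_x, p_bar, alpha=0.05, power=0.80):
            """Approximate total n for testing slope beta in logistic regression,
            via the approximately equivalent two-sample problem."""
            # centre the logit so the two pseudo-groups average to the overall rate
            f = lambda eta0: 0.5 * (expit(eta0 - beta * sd_x)
                                    + expit(eta0 + beta * sd_x)) - p_bar
            eta0 = brentq(f, -20.0, 20.0)
            p1, p2 = expit(eta0 - beta * sd_x), expit(eta0 + beta * sd_x)
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            n_per = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)**2
            return 2 * ceil(n_per)

        # slope 0.4 per SD of the covariate, overall event probability 0.3:
        print(total_n_logistic(beta=0.4, sd_x=1.0, p_bar=0.3))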

  11. 40 CFR 89.507 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing § 89.507 Sample selection. (a) Engines comprising a test sample will be selected at the location...). However, once the manufacturer ships any test engine, it relinquishes the prerogative to conduct retests...

  12. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high-diversity than in low-diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N0 to N∞ ...

  13. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model in cases of overdispersed count data, which are commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for this purpose. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimating the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
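
    A sketch of the kind of explicit formula involved, using the variance of the log rate ratio under the alternative, Var ≈ [1/(t·r1) + 1/(t·r2) + 2k]/n, with rates r1 and r2, common exposure t, and dispersion k. This is our reconstruction of one of the variance choices such formulas use, with illustrative inputs:

        from math import ceil, log
        from scipy.stats import norm

        def n_per_arm_nb(r1, r2, t, k, alpha=0.05, power=0.80):
            """Sample size per arm for a two-sided test of the NB rate ratio."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            v = 1 / (t * r1) + 1 / (t * r2) + 2 * k
            return ceil(z**2 * v / log(r2 / r1)**2)

        # e.g., event rates 1.0 vs 0.7 per year, 1-year exposure, dispersion 0.8:
        print(n_per_arm_nb(r1=1.0, r2=0.7, t=1.0, k=0.8))  # -> 249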

  14. Size dependent magnetism of mass selected deposited transition metal clusters

    International Nuclear Information System (INIS)

    Lau, T.

    2002-05-01

    The size dependent magnetic properties of small iron clusters deposited on ultrathin Ni/Cu(100) films have been studied with circularly polarised synchrotron radiation. For X-ray magnetic circular dichroism studies, the magnetic moments of size selected clusters were aligned perpendicular to the sample surface. Exchange coupling of the clusters to the ultrathin Ni/Cu(100) film determines the orientation of their magnetic moments. All clusters are coupled ferromagnetically to the underlayer. With the use of sum rules, orbital and spin magnetic moments as well as their ratios have been extracted from X-ray magnetic circular dichroism spectra. The ratio of orbital to spin magnetic moments varies considerably as a function of cluster size, reflecting the dependence of magnetic properties on cluster size and geometry. These variations can be explained in terms of a strongly size dependent orbital moment. Both orbital and spin magnetic moments are significantly enhanced in small clusters as compared to bulk iron, although this effect is more pronounced for the spin moment. Magnetic properties of deposited clusters are governed by the interplay of cluster specific properties on the one hand and cluster-substrate interactions on the other hand. Size dependent variations of magnetic moments are modified upon contact with the substrate. (orig.)

  15. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the 'worst case' scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when constraints are put on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically, mainly as a result of rising consumption in the residential sector. The main objective of this study is to estimate households' demand function for natural gas by applying a sample selection model to data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for it distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas contribute negatively to the demand for natural gas. (author)
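
    A sketch of the two-step sample-selection (Heckman-type) estimator commonly used for such corrections: a probit for whether a household is observed consuming gas, then OLS on the consumers with the inverse Mills ratio as an extra regressor. The data and covariates below are synthetic stand-ins, not the Seoul survey variables:

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        n = 2000
        income = rng.normal(0, 1, n)
        house_size = rng.normal(0, 1, n)
        u = rng.normal(0, 1, n)
        # selection: who consumes gas at all (depends on income and house size)
        observed = (0.3 + 0.8 * income + 0.5 * house_size + u) > 0
        # outcome: demand with error correlated with the selection error -> bias
        e = 0.6 * u + rng.normal(0, 0.8, n)
        gas = 1.0 + 0.5 * income + 0.7 * house_size + e

        Z = sm.add_constant(np.column_stack([income, house_size]))
        probit = sm.Probit(observed.astype(int), Z).fit(disp=0)
        xb = Z @ probit.params
        imr = norm.pdf(xb) / norm.cdf(xb)        # inverse Mills ratio

        X = sm.add_constant(np.column_stack([income[observed],
                                             house_size[observed],
                                             imr[observed]]))
        ols = sm.OLS(gas[observed], X).fit()
        # last coefficient estimates rho*sigma; the others are bias-corrected
        print(ols.params)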

  17. 40 CFR 90.507 - Sample selection.

    Science.gov (United States)

    2010-07-01

    ... Auditing § 90.507 Sample selection. (a) Engines comprising a test sample will be selected at the location... manufacturer ships any test engine, it relinquishes the prerogative to conduct retests as provided in § 90.508...

  18. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduces methods for estimating the sample size and testing power of difference tests for quantitative and qualitative data in the single-group, paired, and crossover designs. Specifically, it presents the corresponding formulas, their realization based on the formulas and on the POWER procedure of SAS software, and elaborates on them with examples, which will help researchers implement the repetition principle.

  19. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    The frictional behaviour of rocks, from the initial stage of loading to final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention, mainly because of limitations in rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis, with particular interest in sample-size dependency. Several important phenomena were observed: (a) the rate of transition from brittleness to ductility is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; (b) the sample size influences the angle of the formed shear band; and (c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as the result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  20. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    Science.gov (United States)

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

    The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (e.g., sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various game scores reported on WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players, or players more involved in the game, may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted.

  1. Equal Susceptibility and Size-selective Mobility in Aeolian Saltation

    Science.gov (United States)

    Martin, R. L.; Kok, J. F.

    2017-12-01

    Natural wind-eroded soils generally contain a mixture of particle sizes. However, models for aeolian saltation are typically derived for sediment bed surfaces containing only a single particle size. To treat natural mixed beds, models for saltation and associated dust aerosol emission have typically simplified aeolian transport either as a series of non-interacting single particle size beds or as a bed containing only the median or mean particle size. Here, we test these common assumptions underpinning aeolian transport models using measurements of size-resolved saltation fluxes at three natural field sites. We find that a wide range of sand size classes experience "equal susceptibility" to saltation at a single common threshold wind shear stress, contrary to the "selective susceptibility" expected for treatment of a mixed bed as multiple single particle size beds. Furthermore, we observe strong size-selectivity in the mobility of different particle sizes, which is not adequately accounted for in current models. At all field sites, mobility is enhanced for particles that are 0.4-0.8 times the median bed particle diameter, while mobility declines rapidly with increasing particle size above this range. We further observe that the most mobile particles also experience the largest saltation heights, which helps to explain variations in size-selective mobility. These observations refute the common simplification of saltation as a series of non-interacting single particle sizes. Sand transport and dust emission models that use this incorrect assumption can be both simplified and improved by instead using a single particle size representative of the mixed bed.

  2. AHP-Based Optimal Selection of Garment Sizes for Online Shopping

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Garment online shopping has been accepted by more and more consumers in recent years. In online shopping, a buyer chooses a garment size based only on his or her own experience, without trying the garment on, so the selected garment may not be the best fit for the buyer given the variety of body figures. We therefore propose a method for the optimal selection of garment sizes for online shopping based on the Analytic Hierarchy Process (AHP). The hierarchical structure model for optimal selection of garment sizes is constructed, and the best-fitting garment for a buyer is found by calculating the matching degrees between an individual's measurements and the corresponding key-part values of ready-to-wear clothing sizes. To demonstrate its feasibility, we provide an example of selecting the best-fitting sizes of men's bottoms. The result shows that the proposed method is useful in online clothing sales applications.
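
    A sketch of the standard AHP machinery implied here: derive priority weights from a pairwise comparison matrix via its principal eigenvector and check the consistency ratio. The 3×3 matrix (comparing, say, waist, hip, and length fit) is illustrative, not taken from the article:

        import numpy as np

        RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random indices

        def ahp_weights(A):
            """Priority weights and consistency ratio for comparison matrix A."""
            vals, vecs = np.linalg.eig(A)
            i = np.argmax(vals.real)
            w = np.abs(vecs[:, i].real)
            w /= w.sum()
            n = A.shape[0]
            ci = (vals[i].real - n) / (n - 1)              # consistency index
            cr = ci / RI[n] if RI[n] > 0 else 0.0
            return w, cr

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w, cr = ahp_weights(A)
        print("weights:", w.round(3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable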

  3. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    …the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  4. Portion distortion: typical portion sizes selected by young adults.

    Science.gov (United States)

    Schwartz, Jaime; Byrd-Bredbenner, Carol

    2006-09-01

    The incidence of obesity has increased in parallel with increasing portion sizes of individually packaged and ready-to-eat prepared foods as well as foods served at restaurants. Portion distortion (perceiving large portion sizes as appropriate amounts to eat at a single eating occasion) may contribute to increasing energy intakes and expanding waistlines. The purpose of this study was to determine the typical portion sizes that young adults select, how typical portion sizes compare with reference portion sizes (based in this study on the Nutrition Labeling and Education Act's quantities of food customarily eaten per eating occasion), and whether the size of typical portions has changed over time. Young adults (n=177, 75% female, age range 16 to 26 years) at a major northeastern university served themselves typical portion sizes of eight foods at breakfast (n=63) or six foods at lunch or dinner (n=62, n=52, respectively). Typical portion-size selections were unobtrusively weighed. A unit score was calculated by awarding 1 point for each food with a typical portion size within 25% (larger or smaller) of the reference portion; larger or smaller portions were given 0 points. Thus, each participant's unit score could range from 0 to 8 at breakfast or 0 to 6 at lunch and dinner. Analysis of variance or t tests were used to determine whether typical and reference portion sizes differed, and whether typical portion sizes changed over time. Mean unit scores (± standard deviation) were 3.63 ± 1.27 and 1.89 ± 1.14 for breakfast and lunch/dinner, respectively, indicating little agreement between typical and reference portion sizes. Typical portion sizes in this study tended to be significantly different from those selected by young adults in a similar study conducted two decades ago. Portion distortion seems to affect the portion sizes selected by young adults for some foods. This phenomenon has the potential to hinder weight loss, weight maintenance, and…

  5. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3}, and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained, so inherently a smaller sample size is needed. This article discusses this reduction in sample size as an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain the tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., β1 > β2) results in higher power than assigning a positive or a negative sign to the parameters (e.g., β1 > 0).

  6. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75), and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test using generalized estimating equations (GEE), and the F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
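
    The design-1 comparison lends itself to a quick simulation. A minimal sketch, assuming bivariate normal eye measurements with unit variance; it covers only the paired and two-sample t-tests, not the GEE or LMM analyses evaluated in the article, and the parameter values are illustrative.

      # Design 1: fellow eyes of each subject fall in different treatment groups.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      def simulate(n_subjects=20, rho=0.5, effect=0.0, reps=5000, alpha=0.05):
          cov = np.array([[1.0, rho], [rho, 1.0]])
          rej_paired = rej_twosample = 0
          for _ in range(reps):
              eyes = rng.multivariate_normal([0.0, effect], cov, size=n_subjects)
              rej_paired += stats.ttest_rel(eyes[:, 0], eyes[:, 1]).pvalue < alpha
              rej_twosample += stats.ttest_ind(eyes[:, 0], eyes[:, 1]).pvalue < alpha
          return rej_paired / reps, rej_twosample / reps

      print("type I error:", simulate(effect=0.0))  # paired test stays near 0.05
      print("power:      ", simulate(effect=0.8))  # paired test gains power when rho > 0

    With positive inter-eye correlation the two-sample test overestimates the variance of the difference, so it is conservative and less powerful, consistent with the conclusions above.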

  7. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  8. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
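
    The quoted reductions can be reproduced from a standard variance formula. Under compound symmetry with common correlation rho, ANCOVA on the mean of k follow-up measures with baseline adjustment has relative variance f(rho, k) = (1 + (k-1)*rho)/k - rho^2 compared with a single unadjusted measurement, and the conservative design maximizes f over rho. This derivation is a reconstruction consistent with the 44%, 56%, and 61% figures above, not code taken from the paper.

      # Reproduce the reported worst-case sample-size reductions under compound symmetry.
      import numpy as np

      for k in (2, 3, 4):
          rho = np.linspace(0.0, 1.0, 100001)
          f = (1.0 + (k - 1) * rho) / k - rho**2   # relative variance of ANCOVA estimate
          i = f.argmax()                           # most conservative correlation
          print(f"k={k}: worst-case rho={rho[i]:.3f}, "
                f"sample-size reduction={1.0 - f[i]:.0%}")
      # -> roughly 44%, 56%, and 61%, matching the figures quoted above.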

  9. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate and precise. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.

  10. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials, and the reporting of sample size calculations, may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.

  11. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for testing both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible sib pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (theta) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2theta)^4.
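
    The two scaling relations quoted in the record combine into a simple relative-information calculation. The sketch below is illustrative only, with constants of proportionality dropped.

      # Relative information from a sibship of size n at recombination distance theta,
      # per the scaling quoted above: n*(n-1)/2 sib pairs, attenuated by (1-2*theta)**4.
      def relative_information(n_sibs: int, theta: float) -> float:
          pairs = n_sibs * (n_sibs - 1) / 2
          return pairs * (1.0 - 2.0 * theta) ** 4

      for n in (2, 3, 4, 5, 6):
          print(n, [round(relative_information(n, t), 2) for t in (0.0, 0.05, 0.1)])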

  12. When bigger is not better: selection against large size, high condition and fast growth in juvenile lemon sharks.

    Science.gov (United States)

    Dibattista, J D; Feldheim, K A; Gruber, S H; Hendry, A P

    2007-01-01

    Selection acting on large marine vertebrates may be qualitatively different from that acting on terrestrial or freshwater organisms, but logistical constraints have thus far precluded selection estimates for the former. We overcame these constraints by exhaustively sampling and repeatedly recapturing individuals in six cohorts of juvenile lemon sharks (450 age-0 and 255 age-1 fish) at an enclosed nursery site (Bimini, Bahamas). Data on individual size, condition factor, growth rate and inter-annual survival were used to test the 'bigger is better', 'fatter is better' and 'faster is better' hypotheses of life-history theory. For age-0 sharks, selection on all measured traits was weak, and generally acted against large size and high condition. For age-1 sharks, selection was much stronger, and consistently acted against large size and fast growth. These results suggest that selective pressures at Bimini may be constraining the evolution of large size and fast growth, an observation that fits well with the observed small size and low growth rate of juveniles at this site. Our results support those of some other recent studies in suggesting that bigger/fatter/faster is not always better, and may often be worse.

  13. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Full Text Available Highly unsaturated soil mixtures (clay, sand, and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for the scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  14. Persistent directional selection on body size and a resolution to the paradox of stasis.

    Science.gov (United States)

    Rollinson, Njal; Rowe, Locke

    2015-09-01

    Directional selection on size is common but often fails to result in microevolution in the wild. Similarly, macroevolutionary rates in size are low relative to the observed strength of selection in nature. We show that many estimates of selection on size have been measured on juveniles, not adults. Further, parents influence juvenile size by adjusting investment per offspring. In light of these observations, we help resolve this paradox by suggesting that the observed upward selection on size is balanced by selection against investment per offspring, resulting in little or no net selection gradient on size. We find that trade-offs between fecundity and juvenile size are common, consistent with the notion of selection against investment per offspring. We also find that median directional selection on size is positive for juveniles but no net directional selection exists for adult size. This is expected because parent-offspring conflict exists over size, and juvenile size is more strongly affected by investment per offspring than adult size. These findings provide qualitative support for the hypothesis that upward selection on size is balanced by selection against investment per offspring, where parent-offspring conflict over size is embodied in the opposing signs of the two selection gradients. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.

  15. BODY SIZE AND HAREM SIZE IN MALE RED-WINGED BLACKBIRDS: MANIPULATING SELECTION WITH SEX-SPECIFIC FEEDERS.

    Science.gov (United States)

    Rohwer, Sievert; Langston, Nancy; Gori, Dave

    1996-10-01

    We experimentally manipulated the strength of selection in the field on red-winged blackbirds (Agelaius phoeniceus) to test hypotheses about contrasting selective forces that favor either large or small males in sexually size dimorphic birds. Selander (1972) argued that sexual selection favors larger males, while survival selection eventually stabilizes male size because larger males do not survive as well as smaller males during harsh winters. Searcy (1979a) proposed instead that sexual selection may be self limiting: male size might be stabilized not by overwinter mortality, but by breeding-season sexual selection that favors smaller males. Under conditions of energetic stress, smaller males should be able to display more and thus achieve higher reproductive success. Using feeders that provisioned males or females but not both, we produced conditions that mimicked the extremes of natural conditions. We found experimental support for the hypothesis that when food is abundant, sexual selection favors larger males. But even under conditions of severe energetic stress, smaller males did not gain larger harems, as the self-limiting hypothesis predicted. Larger males were more energetically stressed than smaller males, but in ways that affected their future reproductive output rather than their current reproductive performance. Stressed males that returned had smaller wings and tails than those that did not return; among returning stressed males, relative harem sizes were inversely related to wing and tail length. Thus, male body size may be stabilized not by survival costs during the non-breeding season, nor by energetic costs during the breeding season, but by costs of future reproduction that larger males pay for their increased breeding-season effort. © 1996 The Society for the Study of Evolution.

  16. Evolution of brain region volumes during artificial selection for relative brain size.

    Science.gov (United States)

    Kotrschal, Alexander; Zeng, Hong-Li; van der Bijl, Wouter; Öhman-Mägi, Caroline; Kotrschal, Kurt; Pelckmans, Kristiaan; Kolm, Niclas

    2017-12-01

    The vertebrate brain shows an extremely conserved layout across taxa. Still, the relative sizes of separate brain regions vary markedly between species. One interesting pattern is that larger brains seem associated with increased relative sizes only of certain brain regions, for instance the telencephalon and cerebellum. Until now, the evolutionary association between separate brain regions and overall brain size has been based on comparative evidence and remains experimentally untested. Here, we test the evolutionary response of brain regions to directional selection on brain size in guppies (Poecilia reticulata) selected for large and small relative brain size. In these animals, artificial selection led to a fast response in relative brain size, while body size remained unchanged. We use microcomputer tomography to investigate how the volumes of 11 main brain regions respond to selection for larger versus smaller brains. We found no differences in relative brain region volumes between large- and small-brained animals and only minor sex-specific variation. Also, selection did not change the allometric scaling between brain and brain region sizes. Our results suggest that brain regions respond similarly to strong directional selection on relative brain size, which indicates that brain anatomy variation in contemporary species most likely stems from direct selection on key regions. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  17. Sample Selection for Training Cascade Detectors.

    Science.gov (United States)

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
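
    The stage-wise negative selection loop can be sketched briefly. The sketch below uses synthetic Gaussian data and scikit-learn LogisticRegression stages as hypothetical stand-ins for the cascade's actual features and weak classifiers; only the hard-false-positive bootstrapping pattern mirrors the method described.

      # Each stage trains on the positives plus the false positives that earlier
      # stages let through, keeping the training set balanced despite a huge pool.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      pos = rng.normal(1.0, 1.0, size=(300, 5))            # small positive set (assumed)
      neg_pool = rng.normal(-1.0, 1.5, size=(30000, 5))    # huge negative pool (assumed)

      negatives = neg_pool[rng.choice(len(neg_pool), size=300, replace=False)]
      stages = []
      for stage in range(3):
          X = np.vstack([pos, negatives])
          y = np.r_[np.ones(len(pos)), np.zeros(len(negatives))]
          stages.append(LogisticRegression(max_iter=1000).fit(X, y))
          survivors = neg_pool
          for s in stages:                                 # run the current cascade
              if len(survivors) == 0:
                  break
              survivors = survivors[s.predict(survivors) == 1]
          print(f"stage {stage}: {len(survivors)} negatives still pass the cascade")
          if len(survivors) == 0:
              break
          # the most informative negatives for the next stage are these false positives
          take = min(300, len(survivors))
          negatives = survivors[rng.choice(len(survivors), size=take, replace=False)]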

  18. Prey size selection and cannibalistic behaviour of juvenile barramundi Lates calcarifer.

    Science.gov (United States)

    Ribeiro, F F; Qin, J G

    2015-05-01

    This study assessed the cannibalistic behaviour of juvenile barramundi Lates calcarifer and examined the relationship between prey size selection and the energy gain of cannibals. Prey handling time and capture success by cannibals were used to estimate the ratio of energy gain to energy cost in prey selection. Cannibals selected smaller prey despite being capable of ingesting larger prey individuals. In the behavioural analysis, prey handling time significantly increased with prey size, but was not significantly affected by cannibal size. Conversely, capture success significantly decreased with the increase of both prey and cannibal sizes. The profitability indices showed that smaller prey provide the most energy return for cannibals of all size classes. These results indicate that L. calcarifer cannibals select smaller prey for a more profitable return. The behavioural analysis, however, indicates that L. calcarifer cannibals attack prey of all sizes at a similar rate but ingest smaller prey more often, suggesting that prey size selection is passively oriented rather than an active choice by the predator. The increase in prey escape ability and morphological constraints contribute to the reduction of intracohort cannibalism as fish grow larger. This study contributes to the understanding of intracohort cannibalism and the development of strategies to reduce cannibalistic mortalities in fish. © 2015 The Fisheries Society of the British Isles.

  19. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying nonparametric methods (or parametric methods after Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
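
    The simulation design described above is easy to reproduce in outline. A sketch assuming scipy's Shapiro-Wilk and D'Agostino-Pearson tests at alpha = 0.05, with parameter values chosen for illustration rather than taken from the study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      def rejection_rate(draw, n=30, reps=1000, alpha=0.05):
          sw = dp = 0
          for _ in range(reps):
              x = draw(n)
              sw += stats.shapiro(x).pvalue < alpha       # Shapiro-Wilk
              dp += stats.normaltest(x).pvalue < alpha    # D'Agostino-Pearson
          return sw / reps, dp / reps

      # specificity: 1 - rejection rate under a true Gaussian parent population
      print("Gaussian  :", rejection_rate(lambda n: rng.normal(0, 1, n)))
      # sensitivity: rejection rate under a lognormal parent population
      print("lognormal :", rejection_rate(lambda n: rng.lognormal(0, 0.5, n)))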

  20. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique was developed for constructing adequate mathematical models from small passive samples, under conditions where classical probabilistic-statistical methods do not allow valid conclusions to be drawn.

  1. Deposition of Size-Selected Cu Nanoparticles by Inert Gas Condensation

    Directory of Open Access Journals (Sweden)

    Martínez E

    2009-01-01

    Full Text Available Nanometer size-selected Cu clusters in the size range of 1–5 nm have been produced by a plasma-gas-condensation-type cluster deposition apparatus, which combines glow-discharge sputtering with an inert gas condensation technique. With this method, by controlling the experimental conditions, it was possible to produce nanoparticles with strict control over size. The structure and size of the Cu nanoparticles were determined by mass spectrometry and confirmed by atomic force microscopy (AFM) and scanning transmission electron microscopy (STEM) measurements. In order to preserve the structural and morphological properties, the energy of cluster impact was controlled; the acceleration energy of the nanoparticles was kept near 0.1 eV/atom to remain in the soft-landing regime. From measurements performed in STEM-HAADF mode, we found that the nanoparticle sizes are close to the values fixed experimentally, as also confirmed by AFM observations. The results are relevant, since they demonstrate that proper optimization of the operating conditions can lead to desired cluster sizes as well as desired cluster size distributions. The efficiency of the method in obtaining size-selected Cu cluster films, as a random stacking of nanometer-size crystallites, was also demonstrated. The deposition of size-selected metal clusters represents a novel method of preparing Cu nanostructures, with high potential in optical and catalytic applications.

  2. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Species selectivity in different sized topless trawl designs: Does size matter?

    DEFF Research Database (Denmark)

    Krag, Ludvig Ahm; Herrmann, Bent; Karlsen, Junita Diana

    2015-01-01

    -specific quotas. The topless trawl design was developed to improve species-specific selectivity in such fisheries. In a topless trawl, the foot rope is located more forward than the headline to allow fish to escape upwards, whereas the headline is located in front in traditional trawl designs. In this study we...... Atlantic, topless trawls have been introduced as legal cod-selective trawl designs. However, this study demonstrates that identical gear modifications made to similar trawls of different sizes and used in the same fishery can lead to different results....

  4. Sample Selection for Training Cascade Detectors.

    Directory of Open Access Journals (Sweden)

    Noelia Vállez

    Full Text Available Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.

  5. Seed predators exert selection on the subindividual variation of seed size.

    Science.gov (United States)

    Sobral, M; Guitián, J; Guitián, P; Larrinaga, A R

    2014-07-01

    Subindividual variation among repeated organs in plants constitutes an overlooked level of variation in phenotypic selection studies, despite being a major component of phenotypic variation. Animals that interact with plants could be selective agents on subindividual variation. This study examines selective pressures exerted during post-dispersal seed predation and germination on the subindividual variation of seed size in hawthorn (Crataegus monogyna). With a seed offering experiment and a germination test, we estimated phenotypic selection differentials for average and subindividual variation of seed size due to seed predation and germination. Seed size affects germination, growth rate and the probability of an individual seed of escaping predation. Longer seeds showed higher germination rates, but this did not result in significant selection on phenotypes of the maternal trees. On the other hand, seed predators avoided wider seeds, and by doing so exerted phenotypic selection on adult average and subindividual variation of seed size. The detected selection on subindividual variation suggests that the levels of phenotypic variation within individual plants may be, at least partly, the adaptive consequence of animal-mediated selection. © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands.

  6. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
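
    The conical geometry can be seen in a few lines of NumPy: simulate a single-spike covariance model and measure the angle between the leading sample and population eigenvectors as the ratio d/(n·spike) grows. The single-spike setup and the parameter values below are illustrative simplifications of the multi-spike models treated in the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      def angle_deg(d, n, lam):
          u = np.zeros(d); u[0] = 1.0                    # population spike direction
          # rows: sqrt(lam) * z_i * u + isotropic noise, i.e. covariance lam*u*u' + I
          X = np.sqrt(lam) * rng.normal(size=(n, 1)) * u + rng.normal(size=(n, d))
          Xc = X - X.mean(axis=0)
          v = np.linalg.svd(Xc, full_matrices=False)[2][0]   # leading sample eigenvector
          return np.degrees(np.arccos(min(1.0, abs(v @ u))))

      for d in (100, 1000, 10000):
          n, lam = 50, 50.0
          print(f"d={d:6d}  d/(n*lam)={d/(n*lam):5.2f}  angle ~ {angle_deg(d, n, lam):5.1f} deg")

    As d/(n·lam) grows the sample eigenvector drifts to a roughly constant angle from its population counterpart, the conical structure described above.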

  7. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  8. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  9. A change in competitive context reverses sexual selection on male size.

    Science.gov (United States)

    Kasumovic, M M; Andrade, M C B

    2009-02-01

    In studies of sexual selection, larger size is often argued to increase male fitness, and relatively smaller males are explained by genetic and/or environmental variation. We demonstrate that a size-development life-history trade-off could underlie the maintenance of a broad, unimodal distribution of size in male redback spiders (Latrodectus hasselti). Larger males are superior in direct competition, but redback males mature rapidly at small size in the presence of females. In field enclosures, we simulated two competitive contexts favouring development of divergent male sizes. Relatively smaller males lost when competing directly, but had 10 times higher fitness than relatively larger males when given the temporal advantage of rapid development. Linear selection gradients confirmed the reversal of selection on size, showing that it is critical to consider life-history decisions underlying the development of traits related to fitness.

  10. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
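
    The 50% conditional-power bound can be checked empirically. A Monte Carlo sketch for a one-sample, one-sided z-test with known unit variance: the second stage is enlarged only when the interim conditional power, computed under the current trend, reaches 50%, and the naive pooled z-statistic is used at the end. The specific sizes are hypothetical, and the paper's criterion is more general than this rule.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      alpha, n1, n2_plan, n2_big, reps = 0.025, 50, 50, 200, 50000
      z_crit = stats.norm.ppf(1 - alpha)

      def cond_power(m1, n2):
          # conditional power assuming the true mean equals the interim estimate m1
          N = n1 + n2
          return stats.norm.sf((z_crit - m1 * np.sqrt(N)) / np.sqrt(n2 / N))

      rejections = 0
      for _ in range(reps):
          x1 = rng.normal(0.0, 1.0, n1)            # H0 true: mean 0, sd 1 (known)
          m1 = x1.mean()
          # raise the second stage only when the interim looks promising
          n2 = n2_big if cond_power(m1, n2_plan) >= 0.5 else n2_plan
          x2 = rng.normal(0.0, 1.0, n2)
          z = (n1 * m1 + n2 * x2.mean()) / np.sqrt(n1 + n2)   # naive pooled statistic
          rejections += z > z_crit
      print(f"empirical type I error: {rejections / reps:.4f} (nominal {alpha})")

    The empirical rate stays at or below the nominal level, matching the protection result quoted above.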

  11. The influence of sampling unit size and spatial arrangement patterns on neighborhood-based spatial structure analyses of forest stands

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.; Zhang, G.; Hui, G.; Li, Y.; Hu, Y.; Zhao, Z.

    2016-07-01

    Aim of study: Neighborhood-based stand spatial structure parameters can quantify and characterize forest spatial structure effectively. How these neighborhood-based structure parameters are influenced by the selection of different numbers of nearest-neighbor trees is unclear, and there is some disagreement in the literature regarding the appropriate number of nearest-neighbor trees to sample around reference trees. Understanding how to efficiently characterize forest structure is critical for forest management. Area of study: Multi-species uneven-aged forests of Northern China. Material and methods: We simulated stands with different spatial structural characteristics and systematically compared their structure parameters when two to eight neighboring trees were selected. Main results: Results showed that values of uniform angle index calculated in the same stand were different with different sizes of structure unit. When tree species and sizes were completely randomly interspersed, different numbers of neighbors had little influence on mingling and dominance indices. Changes of mingling or dominance indices caused by different numbers of neighbors occurred when the tree species or size classes were not randomly interspersed and their changing characteristics can be detected according to the spatial arrangement patterns of tree species and sizes. Research highlights: The number of neighboring trees selected for analyzing stand spatial structure parameters should be fixed. We proposed that the four-tree structure unit is the best compromise between sampling accuracy and costs for practical forest management. (Author)

  12. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED. Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  13. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in the classification of hyperspectral images. Using unlabeled samples, which are often available in unlimited quantities, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification, and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  14. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, with qualitative interviews as the method of data collection, was taken from theses.com and content-analysed for sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a premeditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  15. Antarctic krill; assessment of mesh size selectivity and escape mortality from trawls

    DEFF Research Database (Denmark)

    Krafft, Bjørn A.; Krag, Ludvig Ahm; Herrmann, Bent

    2015-01-01

    This working paper presents the aims and methodology for a three-year project (commenced in 2015) assessing size selectivity and escape mortality of Antarctic krill from trawl nets. The project is widely based on experience acquired from a completed study, Net Escapement of Antarctic krill...... Marine AS. The project will examine krill escape mortality from the codend during a full-scale field experiment, model size selectivity and escape mortality in codends of different designs, and assess the size selectivity in the trawl body forward of the codend. Based on end results from the preceding...... examinations we will be able to predict size selectivity and escape mortality from the entire trawl body, with the appurtenant mortality, for different trawl designs

  16. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many challenges, both operational and scientific, in conducting drug development globally. One of many important questions to answer in the design of a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  18. Evidence of size-selective evolution in the fighting conch from prehistoric subsistence harvesting.

    Science.gov (United States)

    O'Dea, Aaron; Shaffer, Marian Lynne; Doughty, Douglas R; Wake, Thomas A; Rodriguez, Felix A

    2014-05-07

    Intensive size-selective harvesting can drive evolution of sexual maturity at smaller body size. Conversely, prehistoric, low-intensity subsistence harvesting is not considered an effective agent of size-selective evolution. Uniting archaeological, palaeontological and contemporary material, we show that size at sexual maturity in the edible conch Strombus pugilis declined significantly from pre-human (approx. 7 ka) to prehistoric times (approx. 1 ka) and again to the present day. Size at maturity also fell from early- to late-prehistoric periods, synchronous with an increase in harvesting intensity as other resources became depleted. A consequence of declining size at maturity is that early prehistoric harvesters would have received two-thirds more meat per conch than contemporary harvesters. After exploring the potential effects of selection biases, demographic shifts, environmental change and habitat alteration, these observations collectively implicate prehistoric subsistence harvesting as an agent of size-selective evolution with long-term detrimental consequences. We observe that contemporary populations that are protected from harvesting are slightly larger at maturity, suggesting that halting or even reversing thousands of years of size-selective evolution may be possible.

  19. Sexual selection and the evolution of brain size in primates.

    Science.gov (United States)

    Schillaci, Michael A

    2006-12-20

    Reproductive competition among males has long been considered a powerful force in the evolution of primates. The evolution of brain size and complexity in the Order Primates has been widely regarded as the hallmark of primate evolutionary history. Despite their importance to our understanding of primate evolution, the relationship between sexual selection and the evolutionary development of brain size is not well studied. The present research examines the evolutionary relationship between brain size and two components of primate sexual selection, sperm competition and male competition for mates. Results indicate that there is not a significant relationship between relative brain size and sperm competition as measured by relative testis size in primates, suggesting sperm competition has not played an important role in the evolution of brain size in the primate order. There is, however, a significant negative evolutionary relationship between relative brain size and the level of male competition for mates. The present study shows that the largest relative brain sizes among primate species are associated with monogamous mating systems, suggesting primate monogamy may require greater social acuity and abilities of deception.

  20. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    ARL-RP-0528 ● AUG 2015 ● US Army Research Laboratory

  1. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R^2) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R^2 for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
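
    For comparison, sample size for simple logistic regression can also be found by brute-force simulation. The sketch below is a generic Monte Carlo power check, not the paper's normal-transformation method; the covariate distribution and coefficients are assumed for illustration, and statsmodels is used for the fit.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)

      def power(n, b0=-1.0, b1=0.5, reps=500, alpha=0.05):
          hits = 0
          for _ in range(reps):
              x = rng.normal(size=n)                        # standard-normal covariate
              p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))      # logistic model
              y = rng.binomial(1, p)
              fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
              hits += fit.pvalues[1] < alpha                # Wald test on the slope
          return hits / reps

      for n in (100, 200, 400):
          print(n, power(n))   # the smallest n reaching the target power is the answer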

  2. Body size, swimming speed, or thermal sensitivity? Predator-imposed selection on amphibian larvae.

    Science.gov (United States)

    Gvoždík, Lumír; Smolinský, Radovan

    2015-11-02

    Many animals rely on their escape performance during predator encounters. Because of its dependence on body size and temperature, escape velocity is fully characterized by three measures, absolute value, size-corrected value, and its response to temperature (thermal sensitivity). The primary target of the selection imposed by predators is poorly understood. We examined predator (dragonfly larva)-imposed selection on prey (newt larvae) body size and characteristics of escape velocity using replicated and controlled predation experiments under seminatural conditions. Specifically, because these species experience a wide range of temperatures throughout their larval phases, we predict that larvae achieving high swimming velocities across temperatures will have a selective advantage over more thermally sensitive individuals. Nonzero selection differentials indicated that predators selected for prey body size and both absolute and size-corrected maximum swimming velocity. Comparison of selection differentials with control confirmed selection only on body size, i.e., dragonfly larvae preferably preyed on small newt larvae. Maximum swimming velocity and its thermal sensitivity showed low group repeatability, which contributed to non-detectable selection on both characteristics of escape performance. In the newt-dragonfly larvae interaction, body size plays a more important role than maximum values and thermal sensitivity of swimming velocity during predator escape. This corroborates the general importance of body size in predator-prey interactions. The absence of an appropriate control in predation experiments may lead to potentially misleading conclusions about the primary target of predator-imposed selection. Insights from predation experiments contribute to our understanding of the link between performance and fitness, and further improve mechanistic models of predator-prey interactions and food web dynamics.

  3. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  4. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa, under certain marginal prevalences, are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Zooplankton size selection relative to gill raker spacing in rainbow trout

    Science.gov (United States)

    Budy, P.; Haddix, T.; Schneidervin, R.

    2005-01-01

    Rainbow trout Oncorhynchus mykiss are one of the most widely stocked salmonids worldwide, often based on the assumption that they will effectively utilize abundant invertebrate food resources. We evaluated the potential for feeding morphology to affect prey selection by rainbow trout using a combination of laboratory feeding experiments and field observations in Flaming Gorge Reservoir, Utah-Wyoming. For rainbow trout collected from the reservoir, inter-gill raker spacing averaged 1.09 mm and there was low variation among fish overall (SD = 0.28). Ninety-seven percent of all zooplankton observed in the diets of rainbow trout collected in the reservoir were larger than the interraker spacing, while only 29% of the zooplankton found in the environment were larger than the interraker spacing. Over the size range of rainbow trout evaluated here (200-475 mm), interraker spacing increased moderately with increasing fish length; however, the size of zooplankton found in the diet did not increase with increasing fish length. In laboratory experiments, rainbow trout consumed the largest zooplankton available; the mean size of zooplankton observed in the diets was significantly larger than the mean size of zooplankton available. Electivity indices for both laboratory and field observations indicated strong selection for larger-sized zooplankton. The size threshold at which electivity switched from selection against smaller-sized zooplankton to selection for larger-sized zooplankton closely corresponded to the mean interraker spacing for both groups (≈1-1.2 mm). The combination of results observed here indicates that rainbow trout morphology limits the retention of different-sized zooplankton prey and reinforces the importance of understanding how effectively rainbow trout can utilize the type and sizes of different prey available in a given system. These considerations may improve our ability to predict the potential for growth and survival of rainbow trout within and
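
    The record does not name the electivity index used; Ivlev's index, E = (r - p)/(r + p), with r the proportion of a size class in the diet and p its proportion in the environment, is a common choice and is sketched below with made-up proportions.

      # Ivlev's electivity: E > 0 means the size class is selected for, E < 0 against.
      def ivlev(r: float, p: float) -> float:
          return (r - p) / (r + p)

      # hypothetical size classes: proportion in the diet vs. in the reservoir
      for size_mm, r, p in [(0.5, 0.03, 0.40), (1.0, 0.20, 0.31), (1.5, 0.77, 0.29)]:
          print(f"{size_mm} mm: E = {ivlev(r, p):+.2f}")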

  6. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)
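
    The record does not reproduce the derived equations. The familiar normal-approximation analogue for a one-sided test is sketched below as a stand-in, assuming a loss of size delta measured with standard deviation sigma, false-alarm probability alpha, and detection probability 1 - beta.

      import math
      from scipy import stats

      def min_sample_size(delta, sigma, alpha=0.05, beta=0.05):
          # smallest n detecting a shift of size delta with the stated error rates
          z = stats.norm.ppf
          n = ((z(1 - alpha) + z(1 - beta)) * sigma / delta) ** 2
          return math.ceil(n)

      print(min_sample_size(delta=1.0, sigma=2.0))   # -> 44 with these illustrative values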

  7. Sexual selection and the evolution of brain size in primates.

    Directory of Open Access Journals (Sweden)

    Michael A Schillaci

    Full Text Available Reproductive competition among males has long been considered a powerful force in the evolution of primates. The evolution of brain size and complexity in the Order Primates has been widely regarded as the hallmark of primate evolutionary history. Despite their importance to our understanding of primate evolution, the relationship between sexual selection and the evolutionary development of brain size is not well studied. The present research examines the evolutionary relationship between brain size and two components of primate sexual selection, sperm competition and male competition for mates. Results indicate that there is not a significant relationship between relative brain size and sperm competition as measured by relative testis size in primates, suggesting sperm competition has not played an important role in the evolution of brain size in the primate order. There is, however, a significant negative evolutionary relationship between relative brain size and the level of male competition for mates. The present study shows that the largest relative brain sizes among primate species are associated with monogamous mating systems, suggesting primate monogamy may require greater social acuity and abilities of deception.

  8. A contemporary decennial global sample of changing agricultural field sizes

    Science.gov (United States)

    White, E.; Roy, D. P.

    2011-12-01

    In the last several hundred years, agriculture has caused significant human-induced Land Cover Land Use Change (LCLUC), with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation in buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected, guided by a global map of agricultural yield and a literature review, to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries, and the temporal changes in field size were quantified and their causes discussed.

  9. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and health overall. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, the measurements of the size of the feet and footwear were determined, and the scores were compared between the group that wears the correct size of shoes and another group of individuals who do not, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  10. Social polyandry, parental investment, sexual selection, and evolution of reduced female gamete size.

    Science.gov (United States)

    Andersson, Malte

    2004-01-01

    Sexual selection in the form of sperm competition is a major explanation for small size of male gametes. Can sexual selection in polyandrous species with reversed sex roles also lead to reduced female gamete size? Comparative studies show that egg size in birds tends to decrease as a lineage evolves social polyandry. Here, a quantitative genetic model predicts that female scrambles over mates lead to evolution of reduced female gamete size. Increased female mating success drives the evolution of smaller eggs, which take less time to produce, until balanced by lowered offspring survival. Mean egg size is usually reduced and polyandry increased by increasing sex ratio (male bias) and maximum possible number of mates. Polyandry also increases with the asynchrony (variance) in female breeding start. Opportunity for sexual selection increases with the maximum number of mates but decreases with increasing sex ratio. It is well known that parental investment can affect sexual selection. The model suggests that the influence is mutual: owing to a coevolutionary feedback loop, sexual selection in females also shapes initial parental investment by reducing egg size. Feedback between sexual selection and parental investment may be common.

  11. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  13. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
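
    The parametric-bootstrap power calculation described above can be sketched as follows. Two assumptions replace details not given in the abstract: a fixed coefficient of variation stands in for the authors' empirical mean-to-variance relationship, and a one-sided percentile-bootstrap test stands in for their exact decision rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_power(threshold, delta, n_fish, alpha=0.05,
                    cv=0.5, n_rep=1000, n_boot=500):
    """Monte Carlo power of a one-sided bootstrap test of
    H0: mean <= threshold when the true mean is threshold + delta.
    Se concentrations are drawn from a gamma distribution whose
    variance follows an assumed coefficient of variation (cv)."""
    mu = threshold + delta
    var = (cv * mu) ** 2
    shape, scale = mu**2 / var, var / mu      # gamma parameterization
    rejections = 0
    for _ in range(n_rep):
        fish = rng.gamma(shape, scale, size=n_fish)
        # percentile-bootstrap lower confidence bound for the mean
        boot_means = rng.choice(fish, size=(n_boot, n_fish)).mean(axis=1)
        if np.quantile(boot_means, alpha) > threshold:
            rejections += 1
    return rejections / n_rep

# e.g. 8 fish, true mean 1 mg Se/kg above a 4 mg Se/kg threshold
print(bootstrap_power(threshold=4, delta=1, n_fish=8))
```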

  14. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
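
    As a point of reference for the estimators compared above, the non-robust method-of-moments (Matheron) variogram estimator can be written in a few lines; the plot size, sample size and lognormal random field below are illustrative stand-ins for the simulated throughfall fields.

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Method-of-moments (Matheron) estimator: gamma(h) is the mean of
    0.5 * (z_i - z_j)^2 over all point pairs whose separation falls
    into each lag bin."""
    n = len(values)
    dist, semivar = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dist.append(np.linalg.norm(coords[i] - coords[j]))
            semivar.append(0.5 * (values[i] - values[j]) ** 2)
    dist, semivar = np.asarray(dist), np.asarray(semivar)
    return np.asarray([semivar[(dist >= lo) & (dist < hi)].mean()
                       for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

# 150 random sampling points on a 50 m plot; skewed, non-Gaussian values
rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(150, 2))
values = rng.lognormal(mean=0.0, sigma=0.5, size=150)
print(empirical_variogram(coords, values, np.linspace(0, 25, 6)))
```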

  15. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and a parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at the estimation of the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across the Azov-Black Seas localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods in searching for the optimum sample size and an understanding of the expected precision level of the mean. Given the superior performance of the BLB relative to formulae, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5 with 1.6 and 1x mean of the CIs width, and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes in the range between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
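
    The formula-based route can be illustrated with the classical sample size expression for counts following a negative binomial distribution with mean m and aggregation parameter k, where D is the desired relative standard error of the mean; the parameter values below are illustrative, not the paper's estimates.

```python
import math

def nb_sample_size(mean_abundance, k, precision):
    """Hosts needed so that the relative standard error of the mean
    abundance is at most `precision`, for negative binomially
    distributed counts: n = (1/D^2) * (1/m + 1/k).
    A classical illustrative formula, not the paper's exact derivation."""
    return math.ceil((1.0 / precision**2)
                     * (1.0 / mean_abundance + 1.0 / k))

# Strongly aggregated parasites: variance far above the mean (small k)
print(nb_sample_size(mean_abundance=12.0, k=0.6, precision=0.2))  # -> 44
```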

  16. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of the exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. © 2010 John Wiley & Sons, Ltd.
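
    The exact formula itself is not reproduced in the record, but the core idea of an exact Poisson-based sample size can be sketched in a simplified one-group form: find the smallest cohort whose exact one-sided Poisson test reaches the target power. The hybrid two-group design with historical controls would modify this, and the event rates below are illustrative.

```python
from scipy.stats import poisson

def exact_poisson_n(lam0, lam1, alpha=0.05, power=0.8, max_n=50_000):
    """Smallest cohort size n such that the exact one-sided Poisson test
    of H0: rate = lam0 against the alternative rate lam1 > lam0 attains
    the target power. Simplified one-group sketch, not the paper's
    hybrid two-group formula."""
    for n in range(1, max_n):
        # smallest rejection count keeping the exact type I error <= alpha
        c = poisson.ppf(1 - alpha, n * lam0) + 1
        if poisson.sf(c - 1, n * lam1) >= power:
            return n
    raise ValueError("no sample size found below max_n")

# Illustrative rare adverse event: background 1 per 1000 person-years,
# doubled under treatment
print(exact_poisson_n(lam0=0.001, lam1=0.002))
```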

  17. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    …sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  18. The size selectivity of the main body of a sampling pelagic pair trawl in freshwater reservoirs during the night

    Czech Academy of Sciences Publication Activity Database

    Říha, Milan; Jůza, Tomáš; Prchalová, Marie; Mrkvička, Tomáš; Čech, Martin; Draštík, Vladislav; Muška, Milan; Kratochvíl, Michal; Peterka, Jiří; Tušer, Michal; Vašek, Mojmír; Kubečka, Jan

    2012-01-01

    Roč. 127, September (2012), s. 56-60 ISSN 0165-7836 R&D Projects: GA MZe(CZ) QH81046 Institutional support: RVO:60077344 Keywords : quantitative sampling * gear selectivity * trawl * reservoirs Subject RIV: GL - Fishing Impact factor: 1.695, year: 2012

  19. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  20. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  1. Self-selection in size and structure in argon clusters formed on amorphous carbon

    Energy Technology Data Exchange (ETDEWEB)

    Krainyukova, Nina V.; Waal, Benjamin W. van de

    2004-07-01

    Argon clusters formed on an amorphous carbon substrate as deposited from the vapor phase were studied by means of transmission high-energy electron diffraction using a liquid-helium cryostat. Electron diffractograms were analysed on the basis of the assumption that there exists a cluster size distribution in the samples formed on the substrate, and multi-shell structures such as icosahedra, decahedra, fcc and hcp were probed for different sizes up to ≈15 000 atoms. The experimental data were considered as a result of a superposition of diffracted intensities from clusters of different sizes and structures. The comparative analysis was based on R-factor minimization; the R-factor was found to be equal to 0.014 for the best fit between experiment and modelling. The total size and structure distribution function shows the presence of 'non-crystallographic' structures such as icosahedra and decahedra with five-fold symmetry, which were found to prevail, together with a smaller amount of fcc and hcp structures. Possible growth mechanisms, as well as the observed general tendency toward self-selection in sizes and structures, are presumably governed by the confined pore-like geometry of the amorphous carbon substrate.

  2. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  3. Linear and nonlinear surface spectroscopy of supported size selected metal clusters and organic adsorbates

    Energy Technology Data Exchange (ETDEWEB)

    Thaemer, Martin Georg

    2012-03-08

    The spectroscopic investigation of supported size-selected metal clusters over a wide wavelength range plays an important role for understanding their outstanding catalytic properties. The challenge which must be overcome to perform such measurements is the difficult detection of the weak spectroscopic signals from these samples. As a consequence, highly sensitive spectroscopic methods are applied, such as surface Cavity Ringdown Spectroscopy and surface Second Harmonic Generation Spectroscopy. The spectroscopic apparatus developed is shown to have a sensitivity which is high enough to detect sub-monolayer coverages of adsorbates on surfaces. In the measured spectra of small supported silver clusters of the sizes Ag{sub 42}, Ag{sub 21}, Ag{sub 9}, and Ag atoms, a stepwise transition from particles with purely metallic character to particles with molecule-like properties can be observed within this size range.

  4. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards questions). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness

  5. Linking seasonal home range size with habitat selection and movement in a mountain ungulate.

    Science.gov (United States)

    Viana, Duarte S; Granados, José Enrique; Fandos, Paulino; Pérez, Jesús M; Cano-Manuel, Francisco Javier; Burón, Daniel; Fandos, Guillermo; Aguado, María Ángeles Párraga; Figuerola, Jordi; Soriguer, Ramón C

    2018-01-01

    Space use by animals is determined by the interplay between movement and the environment, and is thus mediated by habitat selection, biotic interactions and intrinsic factors of moving individuals. These processes ultimately determine home range size, but their relative contributions and dynamic nature remain less explored. We investigated the role of habitat selection, movement unrelated to habitat selection and intrinsic factors related to sex in driving space use and home range size in Iberian ibex, Capra pyrenaica. We used GPS collars to track ibex across the year in two different geographical areas of Sierra Nevada, Spain, and measured habitat variables related to forage and roost availability. By using integrated step selection analysis (iSSA), we show that habitat selection was important to explain space use by ibex. As a consequence, movement was constrained by habitat selection, as the observed displacement rate was shorter than expected under null selection. Selection-independent movement, selection strength and resource availability were important drivers of seasonal home range size. Both displacement rate and directional persistence had a positive relationship with home range size while accounting for habitat selection, suggesting that individual characteristics and state may also affect home range size. Ibex living at higher altitudes, where resource availability shows stronger altitudinal gradients across the year, had larger home ranges. Home range size was larger in spring and autumn, when ibex ascend and descend again, respectively, and smaller in summer and winter, when resources are more stable. Therefore, home range size decreased with resource availability. Finally, males had larger home ranges than females, which might be explained by differences in body size and reproductive behaviour. Movement, selection strength, resource availability and intrinsic factors related to sex determined home range size of Iberian ibex. Our results highlight the need to integrate

  6. Evidence of size-selective evolution in the fighting conch from prehistoric subsistence harvesting

    OpenAIRE

    O'Dea, Aaron; Shaffer, Marian Lynne; Doughty, Douglas R.; Wake, Thomas A.; Rodriguez, Felix A.

    2014-01-01

    Intensive size-selective harvesting can drive evolution of sexual maturity at smaller body size. Conversely, prehistoric, low-intensity subsistence harvesting is not considered an effective agent of size-selective evolution. Uniting archaeological, palaeontological and contemporary material, we show that size at sexual maturity in the edible conch Strombus pugilis declined significantly from pre-human (approx. 7 ka) to prehistoric times (approx. 1 ka) and again to the present day. Size at mat...

  7. Modelling and simulation of size selectivity in diamond mesh trawl cod-ends

    DEFF Research Database (Denmark)

    Herrmann, Bent

    Within many fisheries there is a widespread discard of fish. Furthermore, there are several fisheries where fish are caught before reaching the optimal size, leading to an adverse exploitation of the resources. One way to achieve a more optimal exploitation is to improve the size selectivity of the fishing gear. The cod-end is the rearmost part of a trawl where catch accumulates and in which most of the size selection is known to take place. To date, the main method used to assess the selectivity of trawl cod-ends has been to run sea trials followed by statistical analysis of the obtained…

  8. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
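
    In the spirit of the abstract's simulation-based adequacy check, the FDP attained by a candidate per-group sample size can be estimated by Monte Carlo; independent one-sided z-tests with Benjamini-Hochberg rejection stand in here for the paper's weakly dependent test statistics and its variance formula.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def simulate_fdp(n, m=1000, pi0=0.9, effect=0.5, q=0.1, n_sim=200):
    """Monte Carlo mean false discovery proportion of Benjamini-Hochberg
    at level q, for per-group sample size n, m one-sided z-tests of
    which a fraction 1 - pi0 are true effects of size `effect`.
    Independence is an illustrative assumption."""
    fdps = []
    for _ in range(n_sim):
        truth = rng.random(m) >= pi0                    # True = non-null
        z = rng.normal(0.0, 1.0, m) + effect * np.sqrt(n / 2) * truth
        p = norm.sf(z)
        order = np.argsort(p)
        thresh = q * np.arange(1, m + 1) / m            # BH step-up line
        passed = np.nonzero(p[order] <= thresh)[0]
        k = passed[-1] + 1 if passed.size else 0
        rejected = order[:k]
        fdps.append(np.mean(~truth[rejected]) if k else 0.0)
    return np.mean(fdps)

print(simulate_fdp(n=20))
```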

  9. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  10. Size-selective detection in integrated optical interferometric biosensors

    NARCIS (Netherlands)

    Mulder, Harmen K P; Ymeti, Aurel; Subramaniam, Vinod; Kanger, Johannes S

    2012-01-01

    We present a new size-selective detection method for integrated optical interferometric biosensors that can strongly enhance their performance. We demonstrate that by launching multiple wavelengths into a Young interferometer waveguide sensor it is feasible to derive refractive index changes from

  11. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  12. Portion Sizes from 24-Hour Dietary Recalls Differed by Sex among Those Who Selected the Same Portion Size Category on a Food Frequency Questionnaire.

    Science.gov (United States)

    Kang, Minji; Park, Song-Yi; Boushey, Carol J; Wilkens, Lynne R; Monroe, Kristine R; Le Marchand, Loïc; Kolonel, Laurence N; Murphy, Suzanne P; Paik, Hee-Young

    2018-05-08

    Accounting for sex differences in food portions may improve dietary measurement; however, this factor has not been well examined. The aim of this study was to examine sex differences in reported food portions from 24-hour dietary recalls (24HDRs) among those who selected the same portion size category on a quantitative food frequency questionnaire (QFFQ). This study was conducted with a cross-sectional design. Participants (n=319) were members of the Hawaii-Los Angeles Multiethnic Cohort who completed three 24HDRs and a QFFQ in a calibration study conducted in 2010 and 2011. Portions of individual foods reported from 24HDRs served as the outcome measures. Mean food portions from 24HDRs were compared between men and women who reported the same portion size on the QFFQ, after adjustment for race/ethnicity using a linear regression model. The actual amount and the assigned amount of the selected portion size in the QFFQ were compared using a one-sample t test for men and women separately. Of 163 food items with portion size options listed in the QFFQ, 32 were reported in 24HDRs by ≥20 men and ≥20 women who selected the same portion size in the QFFQ. Although they chose the same portion size on the QFFQ, mean intake amounts from 24HDRs were significantly higher for men than for women for "beef/lamb/veal," "white rice," "brown/wild rice," "lettuce/tossed salad," "eggs cooked/raw," "whole wheat/rye bread," "buns/rolls," and "mayonnaise in sandwiches." For men, mean portions of 14 items from the 24HDRs were significantly different from the assigned amounts for QFFQ items (seven higher and seven lower), whereas for women, mean portions of 14 items were significantly different from the assigned amounts (with five significantly higher). These sex differences in reported 24HDR food portions-even among participants who selected the same portion size on the QFFQ-suggest that the use of methods that account for differences in the portions consumed by men and women when QFFQs are

  13. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    Second magnetization peaks (SMP) in Bi2Sr2CaCu2O8+δ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  14. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants; the others were obtained by adding five plants at a time. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250m² greenhouse, and 200 plants in a 200m² greenhouse.
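
    The bootstrap procedure described above is easy to reproduce in outline. This sketch applies the 0.4 amplitude criterion from the abstract to simulated bivariate data with a deliberately weak correlation (rho of about 0.3) instead of the cherry tomato measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

def ci_amplitude(x, y, n_boot=3000):
    """Width of the 95% percentile-bootstrap confidence interval of the
    Pearson correlation coefficient."""
    n = len(x)
    r = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample pairs with replacement
        r[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.quantile(r, [0.025, 0.975])
    return hi - lo

# Hypothetical plant-level traits with a weak linear relation (rho ~ 0.3)
for n in (50, 100, 200, 275):
    x = rng.normal(size=n)
    y = 0.3 * x + rng.normal(scale=np.sqrt(1 - 0.3**2), size=n)
    print(n, round(ci_amplitude(x, y), 2))  # amplitude shrinks with n
```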

  15. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that a preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze if the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
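
    An empirical quantile-matching transfer function of the kind calibrated in these experiments can be sketched in a few lines; the gamma-distributed "observed" and "model" daily series and the 100-quantile resolution are illustrative assumptions, and the variants tested in the study (PTF, QUANT/eQM, gQM, GQM) differ in exactly how this mapping is parameterized.

```python
import numpy as np

def quantile_matching(model_cal, obs_cal, model_new, n_q=100):
    """Empirical quantile-matching transfer function: map each new model
    value through the calibration-period quantiles of the model output
    onto the corresponding quantiles of the observations."""
    q = np.linspace(0.0, 1.0, n_q)
    model_q = np.quantile(model_cal, q)
    obs_q = np.quantile(obs_cal, q)
    return np.interp(model_new, model_q, obs_q)

rng = np.random.default_rng(3)
obs = rng.gamma(0.8, 4.0, size=30 * 365)       # stand-in observed precip
model = rng.gamma(0.6, 7.0, size=30 * 365)     # stand-in biased model output

# 20-year calibration period, 10-year validation period
corrected = quantile_matching(model[:20 * 365], obs[:20 * 365],
                              model[20 * 365:])
print(obs.mean(), model.mean(), corrected.mean())
```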

  16. What enables size-selective trophy hunting of wildlife?

    Directory of Open Access Journals (Sweden)

    Chris T Darimont

    Full Text Available Although rarely considered predators, wildlife hunters can function as important ecological and evolutionary agents. In part, their influence relates to targeting of large reproductive adults within prey populations. Despite known impacts of size-selective harvests, however, we know little about what enables hunters to kill these older, rarer, and presumably more wary individuals. In other mammalian predators, predatory performance varies with knowledge and physical condition, which accumulate and decline, respectively, with age. Moreover, some species evolved camouflage as a physical trait to aid in predatory performance. In this work, we tested whether a knowledge-based faculty (use of a hunting guide with accumulated experience in specific areas), physical traits (relative body mass [RBM] and camouflage clothing), and age can predict predatory performance. We measured performance as do many hunters: size of killed cervid prey, using the number of antler tines as a proxy. Examining ∼4300 online photographs of hunters posing with carcasses, we found that only the presence of guides increased the odds of killing larger prey. Accounting for this effect, modest evidence suggested that unguided hunters presumably handicapped with the highest RBM actually had greater odds of killing large prey. There was no association with hunter age, perhaps because of our coarse measure (presence of grey hair) and the performance trade-offs between knowledge accumulation and physical deterioration with age. Despite its prevalence among sampled hunters (80%), camouflage had no influence on size of killed prey. Should these patterns be representative of other areas and prey, and our interpretations correct, evolutionarily-enlightened harvest management might benefit from regulatory scrutiny on guided hunting. More broadly, we suggest that by being nutritionally and demographically de-coupled from prey and aided by efficient killing technology and road access

  17. Population genetics inference for longitudinally-sampled mutants under strong selection.

    Science.gov (United States)

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
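
    The discrete Wright-Fisher model favored here for strong selection is straightforward to simulate forward in time (the hard part addressed by the paper is the likelihood, not the simulation); below is a minimal haploid sketch with deterministic selection followed by binomial drift, with all parameter values illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def wright_fisher(p0, s, N, generations):
    """Discrete Wright-Fisher trajectory of a mutant allele frequency
    under selection coefficient s; no weak-selection assumption is
    needed, unlike the diffusion approximation."""
    p = p0
    traj = [p]
    for _ in range(generations):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))  # post-selection frequency
        p = rng.binomial(N, w) / N                 # binomial genetic drift
        traj.append(p)
    return np.array(traj)

# Strong selection, e.g. a drug-resistance mutant in a pathogen
print(wright_fisher(p0=0.05, s=0.5, N=10_000, generations=20))
```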

  18. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  19. 40 CFR 205.171-3 - Test motorcycle sample selection.

    Science.gov (United States)

    2010-07-01

    Title 40 (Protection of Environment), Noise Emission Controls, Motorcycle Exhaust Systems, § 205.171-3 Test motorcycle sample selection. A test motorcycle to be used for selective enforcement audit testing...

  20. Corridors of barchan dunes: Stability and size selection

    DEFF Research Database (Denmark)

    Hersen, P.; Andersen, Ken Haste; Elbelrhiti, H.

    2004-01-01

    …state. Second, the propagation speed of dunes decreases with the size of the dune: this leads, through the collision process, to a coarsening of barchan fields. We show that these phenomena are not specific to the model, but result from general and robust mechanisms. The length scales needed for these instabilities to develop are derived and discussed. They turn out to be much smaller than the dune field length. As a conclusion, there should exist further, yet unknown, mechanisms regulating and selecting the size of dunes.

  1. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  2. Biomimetic supercontainers for size-selective electrochemical sensing of molecular ions

    Science.gov (United States)

    Netzer, Nathan L.; Must, Indrek; Qiao, Yupu; Zhang, Shi-Li; Wang, Zhenqiang; Zhang, Zhen

    2017-04-01

    New ionophores are essential for advancing the art of selective ion sensing. Metal-organic supercontainers (MOSCs), a new family of biomimetic coordination capsules designed using sulfonylcalix[4]arenes as container precursors, are known for their tunable molecular recognition capabilities towards an array of guests. Herein, we demonstrate the use of MOSCs as a new class of size-selective ionophores dedicated to electrochemical sensing of molecular ions. Specifically, a MOSC molecule with its cavities matching the size of methylene blue (MB+), a versatile organic molecule used for bio-recognition, was incorporated into a polymeric mixed-matrix membrane and used as an ion-selective electrode. This MOSC-incorporated electrode showed a near-Nernstian potentiometric response to MB+ in the nano- to micro-molar range. The exceptional size-selectivity was also evident through contrast studies. To demonstrate the practical utility of our approach, a simulated wastewater experiment was conducted using water from the Fyris River (Sweden). It not only showed a near-Nernstian response to MB+ but also revealed a possible method for potentiometric titration of the redox indicator. Our study thus represents a new paradigm for the rational design of ionophores that can rapidly and precisely monitor molecular ions relevant to environmental, biomedical, and other related areas.

  3. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields in the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  4. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
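
    Two of the practical points are easy to check numerically: a pilot-sample SD falls below the population SD more often than not, and an upper confidence limit (UCL) of the SD is a more conservative input for the sample size calculation. The population SD of 44 mirrors the simulation described above; the other values are illustrative.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(9)
pop_sd, n_pilot, n_sim = 44.0, 25, 10_000

# How often does a pilot sample SD underestimate the population SD?
sds = rng.normal(0.0, pop_sd, size=(n_sim, n_pilot)).std(axis=1, ddof=1)
print("P(sample SD < population SD):", np.mean(sds < pop_sd))

def sd_ucl(sample_sd, n, level=0.80):
    """One-sided upper confidence limit of the SD at the given confidence
    level, from the chi-squared distribution of (n-1)s^2/sigma^2."""
    return sample_sd * np.sqrt((n - 1) / chi2.ppf(1 - level, n - 1))

print("80% UCL for a pilot SD of 40 with n = 25:", round(sd_ucl(40, 25), 1))
```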

  5. Pinning of size-selected gold and nickel nanoclusters on graphite

    NARCIS (Netherlands)

    Di Vece, M.|info:eu-repo/dai/nl/248753355; Paloma, S.; Palmer, R.E.

    2005-01-01

    Size-selected gold and nickel nanoclusters are of interest from an electronic, catalytic, and biological point of view. These applications require the deposition of the clusters on a surface, and a key challenge is to retain the cluster size. Here controlled energy impact is used to immobilize the

  6. The Effect of Sterilization on Size and Shape of Fat Globules in Model Processed Cheese Samples

    Directory of Open Access Journals (Sweden)

    B. Tremlová

    2006-01-01

    Full Text Available Model cheese samples from 4 independent productions were heat sterilized (117 °C, 20 minutes) after the melting process and packing, with an aim to prolong their durability. The objective of the study was to assess changes in the size and shape of fat globules due to heat sterilization by using image analysis methods. The study included a selection of suitable methods of preparing mounts, taking microphotographs and making overlays for automatic processing of photographs by the image analyser, ascertaining parameters to determine the size and shape of fat globules, and statistical analysis of the results obtained. The results of the experiment suggest that changes in the shape of fat globules due to heat sterilization are not unequivocal. We found that the size of fat globules was significantly increased (p < 0.01) due to heat sterilization (117 °C, 20 min), and the shares of small fat globules (up to 500 μm², or 100 μm²) in the samples of heat-sterilized processed cheese were decreased. The results imply that the image analysis method is very useful when assessing the effect of a technological process on the quality of processed cheese.

  7. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  8. Magnetron sputtering cluster apparatus for formation and deposition of size-selected metal nanoparticles

    DEFF Research Database (Denmark)

    Hanif, Muhammad; Popok, Vladimir

    2015-01-01

    Size selection is achieved using an electrostatic quadrupole mass selector. The deposited silver clusters are studied using atomic force microscopy. The height distributions show a typical relative standard size deviation of 9-13% for given sizes in the range between 5-23 nm. Thus, the apparatus demonstrates good capability in the formation of supported size-selected metal nanoparticles with controllable coverage for various practical applications.

  9. Aerosol size characteristics in selected working areas

    International Nuclear Information System (INIS)

    Ahmed, K.

    1984-05-01

    This report presents the work done to study the aerosol activity size distributions and their respirable fractions in some selected areas of the Juelich Nuclear Research Center. Andersen cascade impactors were used to find the aerodynamic size ranges of the airborne particles for subsequent analysis of the activity associated with each size group. The aerosols were found to follow, in general, log-normal distributions in the hot cells, with values of AMAD between 5 and 10 μm. Measurements in the AVR containment and the decontamination laboratory in Uranit GmbH showed deviations from log-normal distribution. In the waste press area the distribution is sometimes log-normal and sometimes not, depending upon the origin of the waste. The values of AMAD are in the range of 2 to 4 μm in these areas. The respirable fractions were calculated using the ACGIH definition for respirable dust to be < 25% in hot cells and < 60% in other areas. Pulmonary depositions according to the ICRP model were < 10% and < 15% respectively. (orig./HP)

  10. Cold Spray Deposition of Freestanding Inconel Samples and Comparative Analysis with Selective Laser Melting

    Science.gov (United States)

    Bagherifard, Sara; Roscioli, Gianluca; Zuccoli, Maria Vittoria; Hadi, Mehdi; D'Elia, Gaetano; Demir, Ali Gökhan; Previtali, Barbara; Kondás, Ján; Guagliano, Mario

    2017-10-01

    Cold spray offers the possibility of obtaining almost zero-porosity buildups with no theoretical limit to the thickness. Moreover, cold spray can eliminate particle melting, evaporation, crystallization, grain growth, unwanted oxidation, undesirable phases and thermally induced tensile residual stresses. Such characteristics can boost its potential to be used as an additive manufacturing technique. Indeed, deposition via cold spray has recently been finding its way toward the fabrication of freeform components, since it can address the common challenges of powder-bed additive manufacturing techniques, including major size constraints, deposition rate limitations and high process temperatures. Herein, we prepared nickel-based superalloy Inconel 718 samples with the cold spray technique and compared them with similar samples fabricated by the selective laser melting method. The samples fabricated using both methods were characterized in terms of mechanical strength, microstructural and porosity characteristics, Vickers microhardness and residual stress distribution. Different heat treatment cycles were applied to the cold-sprayed samples in order to enhance their mechanical characteristics. The obtained data confirm that the cold spray technique can be used as a complementary additive manufacturing method for fabrication of high-quality freestanding components where a higher deposition rate, larger final size and lower fabrication temperatures are desired.

  11. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  12. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed to increase the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.
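
    A minimal sketch of the simulation logic described above, assuming a hypothetical population of positional errors rather than the authors' polygon/buffer pipeline: repeated samples of increasing size are drawn, the spread of the resulting estimates is recorded, and a Kolmogorov–Smirnov test compares each sample against the population distribution (scipy is assumed available).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Hypothetical population of planimetric errors (metres); not the paper's data
        population = rng.lognormal(mean=0.5, sigma=0.4, size=100_000)

        for n in (50, 200, 1_000, 5_000):  # stand-ins for the 5 km ... 100 km sample sizes
            # Variability of the estimated mean error across 1000 simulated samples
            estimates = [rng.choice(population, size=n).mean() for _ in range(1_000)]
            d, p = stats.ks_2samp(rng.choice(population, size=n), population)
            print(f"n={n:5d}  sd of estimate={np.std(estimates):.4f}  KS D={d:.3f}  p={p:.2f}")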

  13. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.
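
    The second of the two approaches above (controlling confidence interval width) reduces, in the simplest fully validated case, to the familiar formula for a proportion. The sketch below shows only that baseline calculation; the paper's method for partially validated data is more involved, and the function name is ours.

        import math
        from scipy import stats

        def n_for_ci_width(p_expected, half_width, conf=0.95):
            """Smallest n so a Wald interval for a proportion has the given half-width."""
            z = stats.norm.ppf(1 - (1 - conf) / 2)
            return math.ceil(z ** 2 * p_expected * (1 - p_expected) / half_width ** 2)

        # e.g. expected prevalence 15%, interval of +/-3% at 95% confidence -> 545 subjects
        print(n_for_ci_width(0.15, 0.03))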

  14. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten Igel

    with respect to risk attitudes. Our design builds in explicit randomization on the incentives for participation. We show that there are significant sample selection effects on inferences about the extent of risk aversion, but that the effects of subsequent sample attrition are minimal. Ignoring sample selection leads to inferences that subjects in the population are more risk averse than they actually are. Correcting for sample selection and attrition affects utility curvature, but does not affect inferences about probability weighting. Properly accounting for sample selection and attrition effects leads to findings of temporal stability in overall risk aversion. However, that stability is around different levels of risk aversion than one might naively infer without the controls for sample selection and attrition we are able to implement. This evidence of “randomization bias” from sample selection...

  15. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
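
    To make the POD-curve idea concrete, here is a sketch of the standard hit/miss approach (logistic regression of detection outcome on log flaw size) on simulated data; it is not the ENIQ report's code, and the a90 extraction assumes the usual logit parameterization.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        size = rng.uniform(0.5, 8.0, 200)                  # hypothetical flaw sizes (mm)
        p_true = 1 / (1 + np.exp(-(np.log(size) - np.log(2.0)) / 0.3))
        hit = rng.binomial(1, p_true)                      # simulated hit/miss outcomes

        fit = sm.Logit(hit, sm.add_constant(np.log(size))).fit(disp=0)
        a, b = fit.params
        a90 = np.exp((np.log(0.9 / 0.1) - a) / b)          # flaw size with POD = 90%
        print(f"estimated a90 = {a90:.2f} mm from a sample of {size.size} flaws")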

  16. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  17. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

    Full Text Available Abstract Background Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have ‘hidden’ advantages if the ‘common good’ produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Results Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of “selfish” alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists' advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.

  18. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  19. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and people pay more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power to correctly detect a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there is no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in the area of Chinese medicine research.

  20. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  1. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
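
    The Jensen's-Inequality bias described above is easy to reproduce: lambda is the dominant eigenvalue of the projection matrix, a nonlinear function of the vital rates, so sampling noise in the rates biases lambda even when the rate estimates themselves are unbiased. The sketch below uses a hypothetical two-stage matrix, not the authors' plant data.

        import numpy as np

        rng = np.random.default_rng(0)
        true_survival, true_growth, fecundity = 0.5, 0.3, 1.2   # hypothetical vital rates

        def lam(s, g, f):
            # Two-stage projection matrix; lambda = dominant eigenvalue
            A = np.array([[s * (1 - g), f],
                          [s * g,       0.8]])
            return np.max(np.real(np.linalg.eigvals(A)))

        true_lambda = lam(true_survival, true_growth, fecundity)
        for n in (25, 50, 100, 400):          # number of individuals sampled
            lams = [lam(rng.binomial(n, true_survival) / n,
                        rng.binomial(n, true_growth) / n, fecundity)
                    for _ in range(2_000)]
            print(f"n={n:3d}  mean bias in lambda = {np.mean(lams) - true_lambda:+.4f}")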

  2. Size-Selective Oxidation of Aldehydes with Zeolite Encapsulated Gold Nanoparticles

    DEFF Research Database (Denmark)

    Højholt, Karen Thrane; Laursen, Anders Bo; Kegnæs, Søren

    2011-01-01

    Here, we report a synthesis and catalytic study of hybrid materials comprised of 1–3 nm sinter-stable Au nanoparticles in MFI-type zeolites. An optional post-treatment in aqua regia effectively removes Au from the external surfaces. The size-selective aerobic aldehyde oxidation verifies that the active Au is accessible only through the zeolite micropores.

  3. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  4. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  5. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
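
    The flavour of the adjustment can be seen with a generic design-effect approximation for unequal cluster sizes (this is the Eldridge-style formula for a cluster mean, not the paper's PQL-specific derivation; the 14 per cent figure above comes from the latter).

        def design_effect(mean_size, cv, icc):
            """Approximate design effect for a cluster randomized trial where
            cluster sizes have the given mean and coefficient of variation."""
            return 1 + ((1 + cv ** 2) * mean_size - 1) * icc

        equal = design_effect(mean_size=20, cv=0.0, icc=0.05)
        unequal = design_effect(mean_size=20, cv=0.6, icc=0.05)
        print(f"extra clusters to offset size variation: {100 * (unequal / equal - 1):.0f}%")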

  6. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
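
    A sketch of the central calculation, assuming the clustering is absorbed into a single beta-binomial with intra-cluster correlation rho (the paper's two-stage design is more detailed, and all parameter values here are illustrative):

        from scipy import stats

        def classification_risks(n, d, p_hi, p_lo, rho):
            """Risks of misclassifying an area under an LQAS rule 'accept if at
            least d of n sampled indicators are correct', with overdispersion
            modelled by a beta-binomial."""
            def ab(p):                      # beta parameters with mean p, correlation rho
                theta = (1 - rho) / rho
                return p * theta, (1 - p) * theta
            alpha = stats.betabinom.cdf(d - 1, n, *ab(p_hi))      # reject a good area
            beta = 1 - stats.betabinom.cdf(d - 1, n, *ab(p_lo))   # accept a bad area
            return alpha, beta

        print(classification_risks(n=60, d=50, p_hi=0.90, p_lo=0.75, rho=0.05))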

  7. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
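
    The resampling design is straightforward to emulate. The sketch below bootstraps a weak association at several sample sizes and reports how widely the effect size (R²) scatters, using synthetic data standing in for the lesion-load and articulation scores:

        import numpy as np

        rng = np.random.default_rng(7)
        N = 360
        lesion_load = rng.uniform(0, 1, N)                    # synthetic stand-in data
        deficit = 0.3 * lesion_load + rng.normal(0, 0.5, N)   # weak true effect

        for n in (30, 60, 120, 360):
            r2 = np.empty(2_000)
            for b in range(2_000):
                idx = rng.integers(0, N, size=n)              # bootstrap resample
                r2[b] = np.corrcoef(lesion_load[idx], deficit[idx])[0, 1] ** 2
            print(f"n={n:3d}  median R2={np.median(r2):.3f}  "
                  f"95% range=({np.quantile(r2, 0.025):.3f}, {np.quantile(r2, 0.975):.3f})")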

  8. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered as indicators of the importance of an article and post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  9. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated and of the fact that certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.

  10. Escape windows to improve the size selectivity in the Baltic cod trawl fishery

    DEFF Research Database (Denmark)

    Madsen, Niels; Holst, René; Foldager, L.

    2002-01-01

    A rapid decrease of the stock of Baltic cod (Gadus morhua) has provided the incentive to improve the size selectivity in the trawl fishery. Use of escape windows is permitted in the legislation as a means of improving the size selectivity for cod as an alternative to a traditional standard codend. The history of the use of escape windows in the Baltic Sea cod fishery is reviewed. The present escape windows do not function optimally. The objective of this new experiment was to compare an improved design of escape window, placed in the upper panel, with that of a standard codend. Three standard … of the codend selectivity was formulated to analyse the results and determine the effects of codend type, mesh size and other recorded variables. L50 and SR increased significantly with the mesh size. L50 was significantly increased and SR significantly reduced for a window codend with the same window mesh size…

  11. The Strength and Drivers of Bird-Mediated Selection on Fruit Crop Size: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Facundo X. Palacio

    2018-02-01

    Full Text Available In seed-dispersal mutualisms, the number of fruit a plant displays is a key trait, as it acts as a signal for seed dispersers that entails fruit removal and exportation of reproductive units (fruit crop size hypothesis). Although this hypothesis has gained general acceptance, the forces driving the shape and strength of natural selection exerted by birds on fruit crop size remain an unresolved matter. Here, we propose that ecological filters promoting high functional equivalence of interacting partners (similar functional roles) translate into similar selection pressures on fruit crop size, enhancing selection strength on this trait. We performed a meta-analysis on 50 seed-dispersal systems to test the hypothesis that frugivorous birds exert positive selection pressure on fruit crop size, and to assess whether different factors expected to act as filters (fruit diameter, fruit type, fruiting season length, bird functional groups, and latitude) influence phenotypic selection regimes on this trait. Birds promote larger fruit crop sizes as a general pattern in nature. Short fruiting seasons and a high proportion of species belonging to the same functional group showed higher selection strength on fruit crop size. Also, selection strength on fruit crop size increased for large-fruited species and toward the tropics. Our results support the hypothesis that fruit crop size represents a conspicuous signal advertising the amount of reward to visually driven interacting partners, and that both plant and bird traits, as well as environmental factors, drive selection strength on fruit display traits. Furthermore, our results suggest that the relationship among forces imposed by phenology and frugivore functional roles may be key to understand their evolutionary stability.

  12. Sexual selection on male size drives the evolution of male-biased sexual size dimorphism via the prolongation of male development.

    Science.gov (United States)

    Rohner, Patrick T; Blanckenhorn, Wolf U; Puniamoorthy, Nalini

    2016-06-01

    Sexual size dimorphism (SSD) arises when the net effects of natural and sexual selection on body size differ between the sexes. Quantitative SSD variation between taxa is common, but directional intraspecific SSD reversals are rare. We combined micro- and macroevolutionary approaches to study geographic SSD variation in closely related black scavenger flies. Common garden experiments revealed stark intra- and interspecific variation: Sepsis biflexuosa is monomorphic across the Holarctic, while S. cynipsea (only in Europe) consistently exhibits female-biased SSD. Interestingly, S. neocynipsea displays contrasting SSD in Europe (females larger) and North America (males larger), a pattern opposite to the geographic reversal in SSD of S. punctum documented in a previous study. In accordance with the differential equilibrium model for the evolution of SSD, the intensity of sexual selection on male size varied between continents (weaker in Europe), whereas fecundity selection on female body size did not. Subsequent comparative analyses of 49 taxa documented at least six independent origins of male-biased SSD in Sepsidae, which is likely caused by sexual selection on male size and mediated by bimaturism. Therefore, reversals in SSD and the associated changes in larval development might be much more common and rapid and less constrained than currently assumed. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  13. Structure and Mechanical Properties of the AlSi10Mg Alloy Samples Manufactured by Selective Laser Melting

    Science.gov (United States)

    Li, Xiaodan; Ni, Jiaqiang; Zhu, Qingfeng; Su, Hang; Cui, Jianzhong; Zhang, Yifei; Li, Jianzhong

    2017-11-01

    The AlSi10Mg alloy samples with a size of 14×14×91 mm were produced by the selective laser melting (SLM) method in different building directions. The structures, and the properties at -70°C, of the samples built in different directions were investigated. The results show that the structure differs with building direction: fish-scale structures appear on the side along the building direction, and oval structures on the side perpendicular to the building direction. Some pores with a maximum size of 100 μm exist in the structure. The build orientation has no major influence on the tensile properties: the tensile strength and the elongation of the sample in the building direction are 340 MPa and 11.2%, respectively, and those of the sample perpendicular to the building direction are 350 MPa and 13.4%, respectively.

  14. THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mancini, C.; Renzini, A. [INAF-OAPD, Osservatorio Astronomico di Padova, Vicolo Osservatorio 5, I-35122 Padova (Italy); Foerster Schreiber, N. M.; Hicks, E. K. S.; Genzel, R.; Tacconi, L.; Davies, R. [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Cresci, G. [Osservatorio Astrofisico di Arcetri (OAF), INAF-Firenze, Largo E. Fermi 5, I-50125 Firenze (Italy); Peng, Y.; Lilly, S.; Carollo, M.; Oesch, P. [Institute of Astronomy, Department of Physics, Eidgenossische Technische Hochschule, ETH Zurich CH-8093 (Switzerland); Vergani, D.; Pozzetti, L.; Zamorani, G. [INAF-Bologna, Via Ranzani, I-40127 Bologna (Italy); Daddi, E. [CEA-Saclay, DSM/DAPNIA/Service d' Astrophysique, F-91191 Gif-Sur Yvette Cedex (France); Maraston, C. [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, PO1 3HE Portsmouth (United Kingdom); McCracken, H. J. [IAP, 98bis bd Arago, F-75014 Paris (France); Bouche, N. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Shapiro, K. [Aerospace Research Laboratories, Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278 (United States); and others

    2011-12-10

    The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection, and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.

  15. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    Energy Technology Data Exchange (ETDEWEB)

    Davarani, Saied Saeed Hosseiny, E-mail: ss-hosseiny@cc.sbu.ac.ir [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Rezayati zad, Zeinab [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Taheri, Ali Reza; Rahmatian, Nasrin [Islamic Azad University, Ilam Branch, Ilam (Iran, Islamic Republic of)

    2017-02-01

    In this research, for the first time, selective separation and determination of Azathioprine is demonstrated using a molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λmax 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine and methacrylic acid as the template molecule and monomer, respectively. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, the response surface methodology (RSM) based on Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range and limit of detection were 0.01–2.5 and 0.008 mg L−1, respectively. The recoveries for the analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as a sorbent in SPE for the selective recognition, preconcentration, and determination of Azathioprine, with the response surface methodology based on Box–Behnken design, and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer has been synthesized by the precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine

  16. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    International Nuclear Information System (INIS)

    Davarani, Saied Saeed Hosseiny; Rezayati zad, Zeinab; Taheri, Ali Reza; Rahmatian, Nasrin

    2017-01-01

    In this research, for the first time, selective separation and determination of Azathioprine is demonstrated using a molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λmax 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine and methacrylic acid as the template molecule and monomer, respectively. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, the response surface methodology (RSM) based on Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range and limit of detection were 0.01–2.5 and 0.008 mg L−1, respectively. The recoveries for the analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as a sorbent in SPE for the selective recognition, preconcentration, and determination of Azathioprine, with the response surface methodology based on Box–Behnken design, and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer has been synthesized by the precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine-molecular imprinting

  17. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes" involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization.
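
    For the first, attributes-level step, a widely used back-of-the-envelope relationship fixes the sample size from the non-detection probability one is willing to tolerate; the sketch below uses the standard hypergeometric approximation (our formulation, not the authors' integrated algorithm):

        import math

        def attribute_sample_size(N, M, beta=0.05):
            """Approximate n so that, if M of N items were diverted, at least one
            falls in the sample with probability >= 1 - beta."""
            return math.ceil(N * (1 - beta ** (1.0 / M)))

        # 500 items, goal of detecting a diversion of 10 with 95% probability
        print(attribute_sample_size(N=500, M=10, beta=0.05))   # -> 130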

  18. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
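
    The multinomial formulation described above is compact in any array language. A sketch for the sample mean (the paper's worked example is Pearson's correlation, and its implementations are in R rather than the Python used here):

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.normal(size=200)          # observed sample
        B, n = 10_000, x.size

        # Each bootstrap replicate is a weighted mean of the ORIGINAL data,
        # with weights given by multinomial counts / n; one matrix product
        # replaces B explicit resampling loops.
        W = rng.multinomial(n, np.full(n, 1 / n), size=B) / n   # B x n weights
        boot_means = W @ x                                      # all replicates at once
        print(f"bootstrap SE of the mean: {boot_means.std(ddof=1):.4f}")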

  19. Natural selection and inheritance of breeding time and clutch size in the collared flycatcher.

    Science.gov (United States)

    Sheldon, B C; Kruuk, L E B; Merilä, J

    2003-02-01

    Many characteristics of organisms in free-living populations appear to be under directional selection, possess additive genetic variance, and yet show no evolutionary response to selection. Avian breeding time and clutch size are often-cited examples of such characters. We report analyses of inheritance of, and selection on, these traits in a long-term study of a wild population of the collared flycatcher Ficedula albicollis. We used mixed model analysis with REML estimation ("animal models") to make full use of the information in complex multigenerational pedigrees. Heritability of laying date, but not clutch size, was lower than that estimated previously using parent-offspring regressions, although for both traits there was evidence of substantial additive genetic variance (h2 = 0.19 and 0.29, respectively). Laying date and clutch size were negatively genetically correlated (rA = -0.41 +/- 0.09), implying that selection on one of the traits would cause a correlated response in the other, but there was little evidence to suggest that evolution of either trait would be constrained by correlations with other phenotypic characters. Analysis of selection on these traits in females revealed consistent strong directional fecundity selection for earlier breeding at the level of the phenotype (beta = -0.28 +/- 0.03), but little evidence for stabilising selection on breeding time. We found no evidence that clutch size was independently under selection. Analysis of fecundity selection on breeding values for laying date, estimated from an animal model, indicated that selection acts directly on additive genetic variance underlying breeding time (beta = -0.20 +/- 0.04), but not on clutch size (beta = 0.03 +/- 0.05). In contrast, selection on laying date via adult female survival fluctuated in sign between years, and was opposite in sign for selection on phenotypes (negative) and breeding values (positive). Our data thus suggest that any evolutionary response to selection on

  20. Selective extraction of dimethoate from cucumber samples by use of molecularly imprinted microspheres

    Directory of Open Access Journals (Sweden)

    Jiao-Jiao Du

    2015-06-01

    Full Text Available Molecularly imprinted polymers for dimethoate recognition were synthesized by the precipitation polymerization technique using methyl methacrylate (MMA) as the functional monomer and ethylene glycol dimethacrylate (EGDMA) as the cross-linker. The morphology, adsorption and recognition properties were investigated by scanning electron microscopy (SEM), a static adsorption test, and a competitive adsorption test. To obtain the best selectivity and binding performance, the synthesis and adsorption conditions of the MIPs were optimized through single-factor experiments. Under the optimized conditions, the resultant polymers exhibited uniform size, satisfactory binding capacity and significant selectivity. Furthermore, the imprinted polymers were successfully applied as a specific solid-phase extractant combined with high performance liquid chromatography (HPLC) for the determination of dimethoate residues in cucumber samples. The average recoveries of three spiked samples ranged from 78.5% to 87.9% with relative standard deviations (RSDs) less than 4.4% and the limit of detection (LOD) obtained for dimethoate as low as 2.3 μg/mL. Keywords: Molecularly imprinted polymer, Precipitation polymerization, Dimethoate, Cucumber, HPLC

  1. 40 CFR 205.160-2 - Test sample selection and preparation.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 false Test sample selection and preparation... sample selection and preparation. (a) Vehicles comprising the sample which are required to be tested... maintained in any manner unless such preparation, tests, modifications, adjustments or maintenance are part...

  2. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  3. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
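
    The idea can be sketched by propagating a chi-square confidence interval for the pilot variance through a standard power routine (illustrative numbers; not Taylor and Muller's exact derivation):

        import numpy as np
        from scipy import stats
        from statsmodels.stats.power import TTestIndPower

        n_pilot, s2 = 20, 4.0            # pilot study size and variance estimate
        delta, n_plan, alpha = 1.5, 30, 0.05

        df = n_pilot - 1                 # 95% CI for sigma^2 via the chi-square pivot
        s2_lo = df * s2 / stats.chi2.ppf(0.975, df)
        s2_hi = df * s2 / stats.chi2.ppf(0.025, df)

        solver = TTestIndPower()
        for label, v in (("lower", s2_lo), ("point", s2), ("upper", s2_hi)):
            pw = solver.power(effect_size=delta / np.sqrt(v), nobs1=n_plan, alpha=alpha)
            print(f"{label} sigma^2 = {v:5.2f}  power = {pw:.3f}")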

  4. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
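
    For the binary-response, two-group design discussed here, the usual normal-approximation formula for a non-inferiority test is easy to state; the sketch below is that textbook formula (with a one-sided alpha), not the article's SAS code:

        import math
        from scipy import stats

        def n_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.80):
            """Per-group sample size for non-inferiority of two proportions
            (normal approximation; alpha is one-sided, margin > 0)."""
            z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
            var = p_t * (1 - p_t) + p_c * (1 - p_c)
            return math.ceil((z_a + z_b) ** 2 * var / (p_t - p_c + margin) ** 2)

        # equal true response rates of 85%, margin of 10 percentage points -> 201 per group
        print(n_noninferiority(p_t=0.85, p_c=0.85, margin=0.10))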

  5. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
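
    A simplified sketch of the extrapolation step: richness is computed on nested subsamples, a saturating curve is fitted, and its asymptote is read off as the sample-size-unbiased estimate (synthetic community; the paper works with richness estimators such as ACE-1 rather than raw observed richness):

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(5)
        abund = rng.lognormal(0.0, 1.5, 2_000)          # hypothetical 2000-species pool
        clones = rng.choice(2_000, size=13_000, p=abund / abund.sum())

        sizes = np.array([500, 1_000, 2_000, 4_000, 8_000, 13_000])
        richness = np.array([np.unique(clones[:n]).size for n in sizes])

        def sat(n, s_max, k):                           # Michaelis-Menten-type curve
            return s_max * n / (k + n)

        (s_max, k), _ = curve_fit(sat, sizes, richness, p0=(richness[-1], 2_000.0))
        print(f"observed at 13,000 clones: {richness[-1]}; extrapolated asymptote: {s_max:.0f}")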

  6. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  7. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  8. Selective determination of dopamine using quantum-sized gold nanoparticles protected with charge selective ligands

    Science.gov (United States)

    Kwak, Kyuju; Kumar, S. Senthil; Lee, Dongil

    2012-06-01

    We report here the selective determination of dopamine (DA) using quantum-sized gold nanoparticles coated with charge selective ligands. Glutathione protected gold nanoparticles (GS-Au25) were synthesized and immobilized into a sol-gel matrix via thiol linkers. The GS-Au25 modified sol-gel electrode was found to show excellent electrocatalytic activity towards the oxidation of DA but no activity towards the oxidation of ascorbic acid. The role of electrostatic charge in the selective electrocatalytic activity of GS-Au25 was verified by voltammetry of redox markers carrying opposite charges. The pH dependent sensitivity for the determination of DA further confirmed the charge screening effect of GS-Au25. Mechanistic investigation revealed that the selectivity is attained by the selective formation of an electrostatic complex between the negatively charged GS-Au25 and DA cation. The GS-Au25 modified sol-gel electrode also showed excellent selectivity for DA in the presence of an interferent, ascorbic acid.

  9. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences in aspect and elevation. The question arose: how capable are such stability interpretations of supporting conclusions? There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that give partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset consists of an appropriate sample size. (ii) Smaller subsets were created with similar

  10. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression, or neural networks trained with core and geophysical logs, suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small-sample estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved under the structural risk minimization (SRM) inductive principle by matching the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ε-insensitive loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to a multilayer perceptron (MLP) neural network, a widely used regression method that operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. Also, the performance of SVR depends on both kernel function
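
    A minimal sketch of the kind of small-sample comparison described, using scikit-learn on synthetic data; the features, targets, and hyperparameters are stand-ins, not the paper's core data.

```python
# Illustrative comparison: with few training samples, an epsilon-SVR often
# generalizes better than an MLP trained under the ERM principle.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(1000, 4))        # stand-in for well-log features
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)  # "porosity"

X_train, y_train = X[:30], y[:30]            # deliberately small training set
X_test, y_test = X[30:], y[30:]

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_train, y_train)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

for name, model in [("SVR", svr), ("MLP", mlp)]:
    print(name, mean_squared_error(y_test, model.predict(X_test)))
```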

  11. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed a Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to plan studies appropriately, building upon previous studies with similar experimental characteristics. Currently, there are sample size and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
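
    For a single hypothesis, the calculation such a resource builds on can be sketched with standard power software; the example below solves for replicates per group at a Bonferroni-adjusted alpha as a crude stand-in for the multiple-testing setting. The effect size and gene count are hypothetical, and this is not the PowerAtlas method itself.

```python
# Crude per-gene sample size calculation with a Bonferroni-adjusted alpha.
from statsmodels.stats.power import TTestIndPower

n_genes = 10000                 # hypotheses tested simultaneously
alpha = 0.05 / n_genes          # Bonferroni-adjusted significance level
effect_size = 1.5               # standardized mean difference from pilot data

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=alpha, power=0.8)
print(f"replicate chips per group: {n_per_group:.1f}")
```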

  12. The quasar luminosity function from a variability-selected sample

    Science.gov (United States)

    Hawkins, M. R. S.; Veron, P.

    1993-01-01

    A sample of quasars is selected from a 10-yr sequence of 30 UK Schmidt plates. Luminosity functions are derived in several redshift intervals, which in each case show a featureless power-law rise towards low luminosities. There is no sign of the 'break' found in the recent UVX sample of Boyle et al. It is suggested that reasons for the disagreement are connected with biases in the selection of the UVX sample. The question of the nature of quasar evolution appears to be still unresolved.

  13. Size-selective separation of submicron particles in suspensions with ultrasonic atomization.

    Science.gov (United States)

    Nii, Susumu; Oka, Naoyoshi

    2014-11-01

    Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized to separate particles of a specific size. With the help of a fog of fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from the suspensions. Separation performance was characterized by analyzing the size and concentration of the collected particles with a high-resolution method. Irradiating the sample suspensions with 2.4 MHz ultrasound allowed the separation of particles of specific sizes from 90 to 320 nm, regardless of the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to the SiO2 suspensions enhanced the collection of finer particles and achieved a remarkable increase in the number of collected particles. Degassing the sample suspension eliminated the separation performance; dissolved air in the suspension thus plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating sample size calculations, or on the accuracy of those calculations. We examined the current quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with an impact factor. A total of 98 papers provided a targeted sample size in a trial registry; about two-thirds of these (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the registry. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
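
    For reference, the three elements the study audits (significance level, power, and minimum clinically important effect) are exactly what an a priori calculation consumes; a minimal sketch for a two-proportion RCT, with hypothetical event rates:

```python
# A priori sample size for comparing two proportions, as typically reported.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control, p_treated = 0.30, 0.20      # minimum clinically important difference
es = proportion_effectsize(p_treated, p_control)   # Cohen's h
n_per_arm = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.8)
print(f"required per arm: {n_per_arm:.0f}")
```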

  15. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is only partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that this model includes the well-known Gaussian process regression (GPR) model as a special case. This special MaxEnt process regression model, i.e., the GPR model, is then generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used to develop a hierarchical Bayesian methodology to estimate the model, involving a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of sample-selection bias correction, robustness to non-normality, and prediction is demonstrated through simulation results that attest to its good finite-sample performance.
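
    The GPR special case is straightforward to reproduce, and fitting it to selectively observed data illustrates the bias the RSGPR is built to correct; the selection mechanism below is a synthetic assumption, and the full RSGPR (selection model plus MCMC) is beyond a short sketch.

```python
# Naive GPR fitted only to selected (observed) units, mimicking selection bias.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X).ravel() + 0.2 * rng.normal(size=80)

# Units with larger outcomes are more likely to be observed (synthetic rule).
observed = rng.uniform(size=80) < 1 / (1 + np.exp(-2 * y))
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gpr.fit(X[observed], y[observed])
print("mean prediction at x=0:", gpr.predict(np.array([[0.0]]))[0])
```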

  16. Estimated ventricle size using Evans index: reference values from a population-based sample.

    Science.gov (United States)

    Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C

    2017-03-01

    Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.

  17. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
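
    The distribution comparison at the core of such a protocol can be sketched with a standard non-parametric two-sample test on aspect ratios; the data below are synthetic stand-ins for TEM-derived descriptors, and the paper's exact test statistic is not specified here.

```python
# Non-parametric comparison of aspect-ratio distributions from two samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
aspect_a = rng.normal(loc=3.6, scale=0.4, size=300)  # sample A aspect ratios
aspect_b = rng.normal(loc=3.6, scale=0.6, size=300)  # same scale, wider spread

stat, p = ks_2samp(aspect_a, aspect_b)
print(f"KS statistic = {stat:.3f}, p = {p:.4f}")
```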

  18. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  19. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution and an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it allocates samples to up to three verification methods by simply applying a standard binomial distribution, which describes sampling with replacement, instead of the hypergeometric distribution, which describes sampling without replacement. Because the objective of an IAEA inspection is the timely detection of the diversion of significant quantities of nuclear material, game theory is applied to its sampling plan, and it is necessary to use the hypergeometric distribution directly, or an approximation to it, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and a correctly applied binomial approximation are both closer to the hypergeometric distribution in sample size calculation than the binomial approximation as simply applied by the IAEA. Object-oriented programs for (1) approximate sample allocation with the correctly applied standard binomial approximation, (2) approximate sample allocation with the improved binomial approximation, and (3) sample allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
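
    The practical difference between the two distributions is easy to demonstrate: for a detection-oriented plan, sampling without replacement (hypergeometric) needs fewer samples than the binomial approximation suggests. A minimal sketch, with hypothetical population and detection-goal values:

```python
# Smallest sample size giving a 95% chance of catching at least one diverted
# item: exact hypergeometric vs. the binomial (with-replacement) approximation.
from scipy.stats import hypergeom, binom

N, D, goal = 200, 10, 0.95          # population, diverted items, detection prob.

def smallest_n(prob_none):
    return next(n for n in range(1, N + 1) if 1 - prob_none(n) >= goal)

n_hyper = smallest_n(lambda n: hypergeom.pmf(0, N, D, n))
n_binom = smallest_n(lambda n: binom.pmf(0, n, D / N))
print(f"hypergeometric: {n_hyper}, binomial approximation: {n_binom}")
```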

  20. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data-driven variable selection. This study quantifies the risk of overfitting in a data-driven modeling method using bootstrapping, for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference in true predictive power. The true predictive power asymptotically converged toward a maximum for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
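
    A toy version of the bootstrapped variable-selection idea (not the authors' exact procedure) can be sketched with scikit-learn: run forward selection on bootstrap resamples and track how often each candidate variable is chosen.

```python
# Bootstrap stability of forward variable selection for logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)
rng = np.random.default_rng(0)
counts = np.zeros(X.shape[1])

for _ in range(50):                          # bootstrap replicates
    idx = rng.integers(0, len(y), len(y))    # resample with replacement
    sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                    n_features_to_select=3).fit(X[idx], y[idx])
    counts += sfs.get_support()

print("selection frequency per variable:", counts / 50)
```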

  1. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background Little overlap between independently developed gene signatures and poor inter-study applicability of gene signatures are two of the major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform, and the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated using this data set. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  2. Size-selective performance evaluation of candidate aerosol inlets using polydisperse aerosols

    Data.gov (United States)

    U.S. Environmental Protection Agency — Presented are detailed techniques for the generation, collection, and analysis of polydisperse calibration aerosols for wind tunnel evaluation of size-selective...

  3. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties exist for both the translational and the rotational motions. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of sample rotation occurring during translational motion in the sample mount is also discussed.
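
    The registration step itself reduces to estimating a rigid transform from matched bead centroids; a minimal least-squares (Kabsch) sketch in 2-D, with hypothetical coordinates, is shown below. The paper's own localization and matching steps are omitted.

```python
# Rigid registration from matched fluorescent-bead centroids (Kabsch algorithm).
import numpy as np

def rigid_transform(p, q):
    """Least-squares rotation R and translation t mapping points p onto q;
    p and q are (N, 2) arrays of matched centroids."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, d]) @ U.T
    return R, cq - R @ cp

beads_a = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
theta = np.deg2rad(2.0)                      # small mount rotation
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
beads_b = beads_a @ R_true.T + np.array([100.0, -3.0])

R, t = rigid_transform(beads_a, beads_b)
print("recovered translation:", t)
```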

  4. Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5

    International Nuclear Information System (INIS)

    Wilderman, S.J.; Bielajew, A.F.

    2005-01-01

    The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes, and hence shorter computation times, than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making the selection of step sizes a daunting task for novice users. Further contributing to this problem, because multiple scattering and continuous energy loss are decoupled in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Moreover, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, EGS5 permits the fractional energy loss values used to determine both step sizes to vary with energy, in order to increase performance by decreasing the effort expended simulating lower-energy particles. This requires the user to specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material-dependent input related to the size of the problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)

  5. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions with mean diameters between 540 and 1 μm. In order to detect mineral fractionation caused during the separation procedures, major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentrations of the main elements in sample 75081 do not change with grain size, except for Fe and Ti, which decrease slightly, and Al, which increases slightly, with decreasing grain size. These changes in main-element composition suggest a decrease in ilmenite and an increase in anorthite with decreasing grain size. It can nevertheless be concluded that the mineral composition of the fractions changes by less than a factor of 2. Samples 72501 and 72461 have not yet been analyzed for the main elements. (Auth.)

  6. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
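
    One of the commonly used methods discussed here is Hsieh's closed-form approximation; a sketch is given below. Note that it takes the event rate at the covariate mean rather than the population prevalence, which is precisely the subtlety the proposed modification addresses. The input values are hypothetical.

```python
# Hsieh-style sample size for simple LR with one standardized normal covariate.
import math
from scipy.stats import norm

def hsieh_n(p1, odds_ratio_per_sd, alpha=0.05, power=0.8):
    """p1: event probability at the covariate mean; OR per 1 SD of covariate."""
    beta_star = math.log(odds_ratio_per_sd)      # log odds ratio per SD
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (p1 * (1 - p1) * beta_star ** 2)

print(f"required n = {hsieh_n(p1=0.1, odds_ratio_per_sd=1.5):.0f}")
```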

  7. The effect of size on the oxygen electroreduction activity of mass-selected platinum nanoparticles

    DEFF Research Database (Denmark)

    Pérez Alonso, Francisco; McCarthy, David N; Nierhoff, Anders

    2012-01-01

    A matter of size: The particle size effect on the activity of the oxygen reduction reaction of size-selected platinum clusters was studied. The ORR activity decreased with decreasing Pt nanoparticle size, corresponding to a decrease in the fraction of terraces on the surfaces of the Pt nanopartic...

  9. Group selection on population size affects life-history patterns in the entomopathogenic nematode Steinernema carpocapsae.

    Science.gov (United States)

    Bashey, Farrah; Lively, Curtis M

    2009-05-01

    Selection is recognized to operate on multiple levels. In disease organisms, selection among hosts is thought to provide an important counterbalance to selection for faster growth within hosts. We performed three experiments, each selecting for a divergence in group size in the entomopathogenic nematode, Steinernema carpocapsae. These nematodes infect and kill insect larvae, reproduce inside the host carcass, and emerge as infective juveniles. We imposed selection on group size by selecting among hosts for either high or low numbers of emerging nematodes. Our goal was to determine whether this trait could respond to selection at the group level, and if so, to examine what other traits would evolve as correlated responses. One of the three experiments showed a significant response to group selection. In that experiment, the high-selected treatment consistently produced more emerging nematodes per host than the low-selected treatment. In addition, nematodes were larger and they emerged later from hosts in the low-selected lines. Despite small effective population sizes, the effects of inbreeding were small in this experiment. Thus, selection among hosts can be effective, leading to both a direct evolutionary response at the population level, as well as to correlated responses in populational and individual traits.

  10. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between the presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Assembly of tantalum porous films with graded oxidation profile from size-selected nanoparticles

    Science.gov (United States)

    Singh, Vidyadhar; Grammatikopoulos, Panagiotis; Cassidy, Cathal; Benelmekki, Maria; Bohra, Murtaza; Hawash, Zafer; Baughman, Kenneth W.; Sowwan, Mukhles

    2014-05-01

    Functionally graded materials offer a way to improve the physical and chemical properties of thin films and coatings for different applications in the nanotechnology and biomedical fields. In this work, the design and assembly of nanoporous tantalum films with an oxidation profile graded perpendicular to the substrate surface are reported. These nanoporous films are composed of size-selected, amorphous tantalum nanoparticles, deposited using a gas-aggregation magnetron sputtering system and oxidized after coalescence, as the samples evolve from mono- to multi-layered structures. Molecular dynamics computer simulations shed light on the atomistic mechanisms of nanoparticle coalescence, which govern the porosity of the films. Aberration-corrected (S)TEM, GIXRD, AFM, SEM, and XPS were employed to study the morphology, phase, and oxidation profiles of the tantalum nanoparticles and the resultant films.

  12. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
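
    The permanent-random-number mechanism underlying both methods is compact enough to sketch for classical (random-size) Poisson sampling; Conditional Poisson sampling is the fixed-size refinement built on top of it.

```python
# Positive coordination of two Poisson samples via permanent random numbers.
import numpy as np

rng = np.random.default_rng(3)
N = 1000
prn = rng.uniform(size=N)                  # one permanent number per unit

pi_1 = np.full(N, 0.10)                    # inclusion probabilities, occasion 1
pi_2 = np.full(N, 0.12)                    # slightly different on occasion 2

s1 = prn < pi_1                            # unit selected iff its PRN < pi
s2 = prn < pi_2                            # reusing PRNs maximizes the overlap
print("sample sizes:", s1.sum(), s2.sum(), "overlap:", (s1 & s2).sum())
# Note the realized sizes are random; Conditional Poisson sampling fixes them.
```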

  13. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.

  14. HOT-DUST-POOR QUASARS IN MID-INFRARED AND OPTICALLY SELECTED SAMPLES

    International Nuclear Information System (INIS)

    Hao Heng; Elvis, Martin; Civano, Francesca; Lawrence, Andy

    2011-01-01

    We show that the hot-dust-poor (HDP) quasars, originally found in the X-ray-selected XMM-COSMOS type 1 active galactic nucleus (AGN) sample, are just as common in two samples selected at optical/infrared wavelengths: the Richards et al. Spitzer/SDSS sample (8.7% ± 2.2%) and the Palomar-Green-quasar-dominated sample of Elvis et al. (9.5% ± 5.0%). The properties of the HDP quasars in these two samples are consistent with the XMM-COSMOS sample, except that, at the 99% (∼2.5σ) significance level, a larger proportion of the HDP quasars in the Spitzer/SDSS sample have weak host galaxy contributions, probably due to the selection criteria used. Either the host dust is destroyed (dynamically or by radiation) or it is offset from the central black hole due to recoiling. Alternatively, the universality of HDP quasars in samples with different selection methods, together with the continuous distribution of dust covering factors in type 1 AGNs, suggests that the range of spectral energy distributions could be related to the range of tilts in warped fueling disks, as in the model of Lawrence and Elvis, with HDP quasars having relatively small warps.

  15. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
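
    A back-of-envelope version of such a calculation can be sketched using the cross-sectional CRXO design effect 1 + (m - 1)ρ - mρη, where ρ is the intracluster correlation and η the cluster autocorrelation; all input values below are hypothetical, not registry figures.

```python
# CRXO sample size sketch: individual-RCT n scaled by the CRXO design effect.
import math
from scipy.stats import norm

p1, p2 = 0.10, 0.088        # in-hospital mortality under the two interventions
alpha, power = 0.05, 0.8
m = 300                     # patients per ICU per 12-month period
rho, eta = 0.03, 0.8        # intracluster correlation, cluster autocorrelation

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)**2
design_effect = 1 + (m - 1) * rho - m * rho * eta   # cross-sectional CRXO
n_total = 2 * n_per_arm * design_effect
print(f"design effect: {design_effect:.2f}, "
      f"ICUs needed: {math.ceil(n_total / (2 * m))}")  # each ICU gives 2*m patients
```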

  16. Size-Selective Modes of Aeolian Transport on Earth and Mars

    Science.gov (United States)

    Swann, C.; Ewing, R. C.; Sherman, D. J.; McLean, C. J.

    2016-12-01

    Aeolian sand transport is a dominant driver of surface change and dust emission on Mars. Estimates of aeolian sand transport on Earth and Mars rely on terrestrial transport models that do not differentiate between transport modes (e.g., creep vs. saltation), which limits estimates of the critical threshold for transport and the total sand flux during a transport event. A gap remains in understanding how the different modes contribute to the total sand flux. Experiments conducted at the MARtian Surface WInd Tunnel separated modes of transport for uniform and mixed grain-size surfaces at Earth and Martian atmospheric pressures. Crushed walnut shells with a density of 1.0 g/cm3 were used. Experiments resolved grain size distributions for creeping and saltating grains over 3 uniform surfaces, U1, U2, and U3, with median grain sizes of 308 µm, 721 µm, and 1294 µm, and a mixed grain-size surface, M1, with a median grain size of 519 µm. A mesh trap located 5 cm above the test bed and a surface creep trap were deployed to capture particles moving as saltation and creep. Grains that entered the creep trap at angles ≥ 75° were categorized as moving in creep mode only. Only the U1 and M1 surfaces captured enough surface creep at both Earth and Mars pressure for statistically significant grain size analysis. Our experiments show that size-selective transport differs between Earth and Mars conditions. The median grain sizes of particles moving in creep on both uniform and mixed surfaces are larger under Earth conditions (U1Earth = 385 µm vs. U1Mars = 355 µm; M1Earth = 762 µm vs. M1Mars = 697 µm), whereas particles moving in saltation were larger under Mars conditions (U1Earth = 282 µm vs. U1Mars = 309 µm; M1Earth = 347 µm vs. M1Mars = 454 µm). Similar to terrestrial experiments, the median size of surface creep is larger than the median grain size of saltation. The median sizes for creep on U1, U2, and U3 at Mars conditions were 355 µm, 774 µm and 1574 µm. Saltation at Mars

  17. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  18. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  19. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  20. Legal size limit implies strong fisheries selection on sexually selected traits in a temperate wrasse providing male-only parental care

    Directory of Open Access Journals (Sweden)

    Kim Aleksander Tallaksen Halvorsen

    2015-12-01

    Full Text Available Corkwing wrasse (Symphodus melops) is a temperate wrasse displaying both sexual and male dimorphism and is targeted in a size-selective commercial fishery which has increased dramatically since 2008. Wrasses are supplied alive to salmon farms as cleaner fish to combat infestations of salmon lice. In previous studies, growth and maturation have been found to differ among male morphs and between sexes, and these groups might therefore be targeted unevenly by the size-selective fishery. In the present study, we address this by comparing size regulations and fishing practice with data on sex-specific growth and maturation from Western and Southern Norway, two regions varying in density and life histories. Two years of field data on density and length measures were used, together with a subsample of otoliths, to determine sex-specific growth patterns. In the region with high density, nesting males were found to grow faster and mature later than sneaker males and females. Here, most nesting males will reach the minimum size as juveniles, one and two years before females and sneakers, respectively. In contrast, sexual dimorphism was much less pronounced in the low-density region, and relaxed male-male competition over nesting sites seems a likely explanation for this pattern. Intensive harvesting with selective removal of the larger nesting males could potentially lead to short-term effects such as sperm limitation and reduced offspring survival and thus affect the productivity of juveniles. In addition, the current fishing regime may select for reduced growth rates and earlier maturation and oppose sexual selection.

  1. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground target detection performance of space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the larger values are preferentially selected to realize the dimension reduction. Finally, simulation results with the Mountain-Top data verify the effectiveness of the proposed method.
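
    For contrast with the proposed method, the baseline GIP screen it improves on is easy to sketch: rank training snapshots by x^H R^{-1} x under the sample covariance and reject outliers. The rejection threshold below is an arbitrary choice, and the data are synthetic.

```python
# GIP-based screening of STAP training snapshots (baseline, not the new method).
import numpy as np

rng = np.random.default_rng(5)
n_dof, n_samples = 16, 64
X = (rng.normal(size=(n_samples, n_dof)) +
     1j * rng.normal(size=(n_samples, n_dof)))      # clutter-only snapshots
X[3] += 4.0                                          # one contaminated sample

R = X.conj().T @ X / n_samples                       # sample covariance matrix
R_inv = np.linalg.inv(R)
gip = np.real(np.einsum("ij,jk,ik->i", X.conj(), R_inv, X))  # x^H R^-1 x per row

keep = gip < np.median(gip) + 3 * gip.std()          # crude rejection rule
print("rejected snapshots:", np.where(~keep)[0])
```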

  2. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel model and the Rasch model for detecting differential item functioning (DIF), as viewed from the sample size. The two DIF detection methods were compared using simulated binary item response data sets of varying sample sizes; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  3. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    International Nuclear Information System (INIS)

    Reiser, I; Lu, Z

    2014-01-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on the bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N=20, 40, 60, 80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For N=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for the CHO (Gabor), 7% for the CHO (LG), and 3% for the TM. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
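
    The sample-size sensitivity being probed can be reproduced in miniature with a plain Hotelling observer on synthetic channel outputs; the Gabor/LG channel machinery is omitted and all data are simulated.

```python
# Toy estimate of 2AFC proportion correct (PC) from a limited number of pairs.
import numpy as np

rng = np.random.default_rng(2)
n_pairs, n_channels = 40, 10
mu = np.zeros(n_channels)
mu_s = mu + 0.4                                      # signal shifts channel means

absent = rng.normal(mu, 1.0, size=(n_pairs, n_channels))
present = rng.normal(mu_s, 1.0, size=(n_pairs, n_channels))

S = 0.5 * (np.cov(absent.T) + np.cov(present.T))     # pooled channel covariance
w = np.linalg.solve(S, present.mean(0) - absent.mean(0))  # Hotelling template

# 2AFC is correct when the signal-present response exceeds the absent one.
pc = np.mean(present @ w > absent @ w)
print(f"PC from {n_pairs} pairs: {pc:.2f}")  # rerun with fewer pairs to see bias
```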

  4. Selective Hydrogenation of Acrolein Over Pd Model Catalysts: Temperature and Particle-Size Effects.

    Science.gov (United States)

    O'Brien, Casey P; Dostert, Karl-Heinz; Schauermann, Swetlana; Freund, Hans-Joachim

    2016-10-24

    The selectivity in the hydrogenation of acrolein over Fe 3 O 4 -supported Pd nanoparticles has been investigated as a function of nanoparticle size in the 220-270 K temperature range. While Pd(111) shows nearly 100 % selectivity towards the desired hydrogenation of the C=O bond to produce propenol, Pd nanoparticles were found to be much less selective towards this product. In situ detection of surface species by using IR-reflection absorption spectroscopy shows that the selectivity towards propenol critically depends on the formation of an oxopropyl spectator species. While an overlayer of oxopropyl species is effectively formed on Pd(111) turning the surface highly selective for propenol formation, this process is strongly hindered on Pd nanoparticles by acrolein decomposition resulting in CO formation. We show that the extent of acrolein decomposition can be tuned by varying the particle size and the reaction temperature. As a result, significant production of propenol is observed over 12 nm Pd nanoparticles at 250 K, while smaller (4 and 7 nm) nanoparticles did not produce propenol at any of the temperatures investigated. The possible origin of particle-size dependence of propenol formation is discussed. This work demonstrates that the selectivity in the hydrogenation of acrolein is controlled by the relative rates of acrolein partial hydrogenation to oxopropyl surface species and of acrolein decomposition, which has significant implications for rational catalyst design. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Body Size, Fecundity, and Sexual Size Dimorphism in the Neotropical Cricket Macroanaxipha macilenta (Saussure) (Orthoptera: Gryllidae).

    Science.gov (United States)

    Cueva Del Castillo, R

    2015-04-01

    Body size is directly or indirectly correlated with fitness, and the body size that conveys maximal fitness often differs between the sexes. Sexual size dimorphism (SSD) evolves because body size tends to be related to reproductive success through different pathways in males and females. In general, female insects are larger than males, suggesting that natural selection for high female fecundity could be stronger than sexual selection on males. I assessed the role of body size and fecundity in the SSD of the Neotropical cricket Macroanaxipha macilenta (Saussure), a species whose SSD is biased toward males. Females showed no correlation between the number of eggs and body size. Nonetheless, the number of eggs carried by females fluctuated during the sampling period, and females collected carrying eggs were larger than females collected without eggs. Since mating induces vitellogenesis in some cricket species, the differences in female body size might suggest male mate choice. Sexual selection on the body size of males of M. macilenta may thus be stronger than selection on female fecundity. Even so, no mating behavior was observed during the field observations, including audible male calling or courtship songs, yet the males may produce ultrasonic calls due to their size. If female body size in M. macilenta is not directly related to fecundity, the lack of a correlated response to selection on female body size could represent an alternate evolutionary pathway in the evolution of body size and SSD in insects.

  6. Selection of portable tools for use in a size reduction facility

    International Nuclear Information System (INIS)

    Hawley, L.N.

    1986-07-01

    A range of portable tools are identified for development and eventual use within a remote operations facility for the size reduction of plutonium contaminated materials. The process of selection defines the work to be performed within the facility and matches this to the general categories of suitable tools. Specific commercial tools are then selected or, where none exists, proposals are made for the development of special tools. (author)

  7. Label-free selective plane illumination microscopy of tissue samples

    Directory of Open Access Journals (Sweden)

    Muteb Alharbi

    2017-10-01

    Conclusion: Overall, this method meets current needs for imaging tissue samples in 3D in a label-free manner. Label-free selective plane illumination microscopy directly provides excellent information about the structure of tissue samples. This work has highlighted the superiority of label-free selective plane illumination microscopy over current approaches to label-free 3D imaging of tissue.

  8. The Impact of Nutrition and Health Claims on Consumer Perceptions and Portion Size Selection: Results from a Nationally Representative Survey

    Science.gov (United States)

    Benson, Tony; Lavelle, Fiona; McCloat, Amanda; Mooney, Elaine; Egan, Bernadette; Collins, Clare E.; Dean, Moira

    2018-01-01

    Nutrition and health claims on foods can help consumers make healthier food choices. However, claims may have a ‘halo’ effect, influencing consumer perceptions of foods and increasing consumption. Evidence for these effects are typically demonstrated in experiments with small samples, limiting generalisability. The current study aimed to overcome this limitation through the use of a nationally representative survey. In a cross-sectional survey of 1039 adults across the island of Ireland, respondents were presented with three different claims (nutrition claim = “Low in fat”; health claim = “With plant sterols. Proven to lower cholesterol”; satiety claim = “Fuller for longer”) on four different foods (cereal, soup, lasagne, and yoghurt). Participants answered questions on perceived healthiness, tastiness, and fillingness of the products with different claims and also selected a portion size they would consume. Claims influenced fillingness perceptions of some of the foods. However, there was little influence of claims on tastiness or healthiness perceptions or the portion size selected. Psychological factors such as consumers’ familiarity with foods carrying claims and belief in the claims were the most consistent predictors of perceptions and portion size selection. Future research should identify additional consumer factors that may moderate the relationships between claims, perceptions, and consumption. PMID:29789472

  9. The Impact of Nutrition and Health Claims on Consumer Perceptions and Portion Size Selection: Results from a Nationally Representative Survey

    Directory of Open Access Journals (Sweden)

    Tony Benson

    2018-05-01

    Nutrition and health claims on foods can help consumers make healthier food choices. However, claims may have a ‘halo’ effect, influencing consumer perceptions of foods and increasing consumption. Evidence for these effects is typically demonstrated in experiments with small samples, limiting generalisability. The current study aimed to overcome this limitation through the use of a nationally representative survey. In a cross-sectional survey of 1039 adults across the island of Ireland, respondents were presented with three different claims (nutrition claim = “Low in fat”; health claim = “With plant sterols. Proven to lower cholesterol”; satiety claim = “Fuller for longer”) on four different foods (cereal, soup, lasagne, and yoghurt). Participants answered questions on perceived healthiness, tastiness, and fillingness of the products with different claims and also selected a portion size they would consume. Claims influenced fillingness perceptions of some of the foods. However, there was little influence of claims on tastiness or healthiness perceptions or on the portion size selected. Psychological factors such as consumers’ familiarity with foods carrying claims and belief in the claims were the most consistent predictors of perceptions and portion size selection. Future research should identify additional consumer factors that may moderate the relationships between claims, perceptions, and consumption.

  10. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for an anticipated sensitivity or specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram for varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at a 95% confidence level. Sample sizes at the 90% and 99% confidence levels can be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied in reverse. This nomogram is not applicable to hypothesis-testing setups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
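
    The formula behind such nomograms is not reproduced in the abstract; a minimal sketch in Python, assuming the standard normal-approximation (Buderer-type) formulae for sensitivity and specificity (function names are mine):

        # Hedged sketch: sample size for estimating sensitivity/specificity
        # with a given CI half-width, assuming Buderer-type formulae.
        from math import ceil

        def n_for_sensitivity(se, d, prevalence, z=1.96):
            # subjects needed so the CI half-width around anticipated
            # sensitivity `se` is `d`, given the disease prevalence
            return ceil(z**2 * se * (1 - se) / (d**2 * prevalence))

        def n_for_specificity(sp, d, prevalence, z=1.96):
            return ceil(z**2 * sp * (1 - sp) / (d**2 * (1 - prevalence)))

        # anticipated sensitivity 0.80, precision +/-0.05, prevalence 20%
        print(n_for_sensitivity(0.80, 0.05, 0.20))  # -> 1230

    The 0.70 and 1.75 multipliers quoted above are consistent with this form: (1.645/1.96)^2 ≈ 0.70 and (2.576/1.96)^2 ≈ 1.73 for the 90% and 99% confidence levels.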

  11. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  12. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  13. Genome size variation affects song attractiveness in grasshoppers: evidence for sexual selection against large genomes.

    Science.gov (United States)

    Schielzeth, Holger; Streitner, Corinna; Lampe, Ulrike; Franzke, Alexandra; Reinhold, Klaus

    2014-12-01

    Genome size is largely uncorrelated to organismal complexity and adaptive scenarios. Genetic drift as well as intragenomic conflict have been put forward to explain this observation. We here study the impact of genome size on sexual attractiveness in the bow-winged grasshopper Chorthippus biguttulus. Grasshoppers show particularly large variation in genome size due to the high prevalence of supernumerary chromosomes that are considered (mildly) selfish, as evidenced by non-Mendelian inheritance and fitness costs if present in high numbers. We ranked male grasshoppers by song characteristics that are known to affect female preferences in this species and scored genome sizes of attractive and unattractive individuals from the extremes of this distribution. We find that attractive singers have significantly smaller genomes, demonstrating that genome size is reflected in male courtship songs and that females prefer songs of males with small genomes. Such a genome size dependent mate preference effectively selects against selfish genetic elements that tend to increase genome size. The data therefore provide a novel example of how sexual selection can reinforce natural selection and can act as an agent in an intragenomic arms race. Furthermore, our findings indicate an underappreciated route of how choosy females could gain indirect benefits. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  14. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples
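
    The abstract states these two dependences without giving formulas; schematically (notation mine, not the authors'), they can be written as

        \frac{\Delta \sigma_{xy}}{\sigma_{xy}} \propto L^{-\gamma},
        \qquad
        \frac{\Delta \sigma_{xy}}{\sigma_{xy}} \propto \frac{W}{\hbar \omega_c},

    where L is the sample size, \gamma the new scaling exponent, W the amplitude of the disorder potential, and \hbar\omega_c the cyclotron energy.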

  15. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread to about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using the nematode Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). Evaluating the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, and found it to be fully adequate.

  16. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km²) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km² cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
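
    The iterative "most dissimilar site" loop can be approximated without the MaxEnt machinery; a simplified greedy sketch in Python (standardized factors and Euclidean dissimilarity are my stand-ins for the authors' model-based criterion):

        # Hedged sketch: iterative selection of environmentally dissimilar
        # sites. Start from the cell farthest from the environmental
        # centroid, then repeatedly add the cell farthest from all
        # already-selected sites.
        import numpy as np

        def select_sites(env, n_sites):
            """env: (n_cells, n_factors) array of environmental factors,
            e.g. temperature, precipitation, elevation, vegetation."""
            z = (env - env.mean(axis=0)) / env.std(axis=0)
            chosen = [int(np.argmax(np.linalg.norm(z, axis=1)))]
            for _ in range(n_sites - 1):
                d = np.linalg.norm(z[:, None, :] - z[chosen], axis=2)
                chosen.append(int(np.argmax(d.min(axis=1))))
            return chosen

        rng = np.random.default_rng(0)
        print(select_sites(rng.normal(size=(1000, 4)), 8))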

  17. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactor (Neutron Activation Analysis (NAA) Section), size-fractionated particulate matter collection for NAA was started in 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural". At each site, two size fractions, namely "PM2-10" and "PM2" particles (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (the sum of the PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m³ in Tokyo and 0.022 mg/m³ in Sakata. (author)

  18. The impact of feedstock cost on technology selection and optimum size

    International Nuclear Information System (INIS)

    Cameron, Jay B.; Kumar, Amit; Flynn, Peter C.

    2007-01-01

    Development of biomass projects at optimum size and technology enhances the role that biomass can play in mitigating greenhouse gas. Optimum-sized plants can be built when biomass resources are sufficient to meet feedstock demand; examples include wood and forest harvest residues from extensive forests, and grain straw and corn stover from large agricultural regions. The impact of feedstock cost on technology selection is evaluated by comparing the cost of power from the gasification and direct combustion of boreal forest wood chips. Optimum size is a function of plant cost and the distance variable cost (DVC, $ dry tonne⁻¹ km⁻¹) of the biomass fuel; distance fixed costs (DFC, $ dry tonne⁻¹) such as acquisition, harvesting, loading and unloading do not affect optimum size. At low values of DVC and DFC, as occur with wood chips sourced from the boreal forest, direct combustion has a lower power cost than gasification. At higher values of DVC and DFC, gasification has a lower power cost than direct combustion. This crossover in the most economic technology will always arise when a more efficient technology with a higher capital cost per unit of output is compared to a less efficient technology with a lower capital cost per unit of output. In such cases technology selection cannot be separated from an analysis of feedstock cost.
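
    The trade-off that sets the optimum size can be sketched numerically; all numbers below are illustrative assumptions of mine, not the paper's data. Per-unit capital cost falls with scale, while haul distance, and hence per-tonne transport cost, grows roughly with the square root of capacity:

        # Hedged sketch of the optimum-size trade-off for a biomass plant.
        # Capital cost per MWh falls with size (scale factor < 1); feedstock
        # transport cost per MWh rises with draw radius ~ sqrt(capacity).
        import numpy as np

        def cost_per_mwh(size_mw, capex_ref=60.0, scale=0.75,
                         dvc=0.12, dfc=15.0, k_haul=10.0):
            capital = capex_ref * (size_mw / 100.0) ** (scale - 1.0)
            haul_km = k_haul * np.sqrt(size_mw)   # mean haul distance
            feed = dfc + dvc * haul_km            # $/tonne ~ $/MWh here
            return capital + feed

        sizes = np.linspace(10, 2000, 500)
        print(f"optimum ~{sizes[np.argmin(cost_per_mwh(sizes))]:.0f} MW")

    Note that dfc only shifts the whole cost curve upward without moving its minimum, which mirrors the paper's point that distance fixed costs do not affect optimum size.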

  19. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this method is important for health workforce planners to know if they want to apply it to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data obtained from the GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond this, precision continued to improve, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreases if more measurements per participant are taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
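
    The interplay between the number of participants (n) and measurements per participant (m) can be sketched with a simple two-level variance model; this is my simplification of the paper's equations and simulations, and the variance components are illustrative:

        # Hedged sketch: CI half-width for mean weekly working hours with
        # n GPs and m time-sampled measurements each, combining
        # between-GP variance (sigma_b^2) and within-GP measurement
        # variance (sigma_w^2). Sigma values are made up for illustration.
        from math import sqrt

        def ci_half_width(n, m, sigma_b=8.0, sigma_w=20.0, z=1.96):
            return z * sqrt(sigma_b**2 / n + sigma_w**2 / (n * m))

        # one SMS per 3-h slot (56/week) vs one per hour (168/week)
        for n, m in [(300, 56), (100, 168)]:
            print(n, m, round(ci_half_width(n, m), 2))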

  20. Effect of directional selection for body size on fluctuating asymmetry ...

    Indian Academy of Sciences (India)

    In this study, we investigated whether stress caused by artificial bidirectional selection for body size has any effect on the levels of FA of different morphological traits in Drosophila ananassae. The realised heritability (h²) was higher in low-line females and high-line males, which suggests an asymmetrical response to ...

  1. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    Science.gov (United States)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has traditionally been determined by the hydrometer or the sieve-pipette methods, both of them time consuming and requiring a relatively large soil sample. This can be a limitation in situations, such as the analysis of suspended sediment, where the sample is small. A possible alternative to these methods are optical techniques such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to traditional methods is still limited, because of the difficulty of replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within the range 0.04-2000 μm, using a Beckman-Coulter® LS-230 (750 nm laser beam, software version 3.2) on five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module filled with running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were performed. Each measurement was made with a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating their own optical model, fitting the optical parameters that mainly depend on the color and the shape of the analyzed particles. As a

  2. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for target template updating. In general, the holistic bounding boxes that contain tracked results are selected as positive samples. However, when objects are occluded, this simple strategy easily introduces noise into the training data set and the target template, and then leads the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least-squares method to find the representative samples, which are used to update the target template. Experimental results on the Object Tracking Benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
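
    The representative-sample step can be illustrated with non-negative least squares: the stored samples form a dictionary whose columns are weighted to reconstruct the current target, and samples receiving non-zero weights are kept as representatives. A minimal sketch (toy dimensions are mine, not the paper's features):

        # Hedged sketch: representative sample selection via NNLS.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        A = rng.random((64, 20))            # 20 stored samples, 64-d features
        b = A @ rng.dirichlet(np.ones(20))  # target as a mix of samples

        weights, residual = nnls(A, b)      # non-negative least squares
        representatives = np.flatnonzero(weights > 1e-3)
        print(representatives, round(residual, 4))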

  3. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, adjusting the sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
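
    The interim reestimation idea can be sketched with a Beta-Binomial predictive calculation; this is a simplified single-arm stand-in for the paper's two-sample design, with priors and thresholds chosen for illustration:

        # Hedged sketch: predictive probability of a "conclusive" result.
        # Given y responders in n1 patients and a Beta(a, b) prior, sum over
        # possible second-stage outcomes z, weighting each by its
        # Beta-Binomial predictive probability; a high value suggests the
        # planned second-stage size n2 could be reduced.
        from scipy.stats import beta, betabinom

        def predictive_success(y, n1, n2, a=1, b=1, p0=0.2, threshold=0.95):
            a1, b1 = a + y, b + n1 - y                 # interim posterior
            prob = 0.0
            for z in range(n2 + 1):
                conclusive = beta.sf(p0, a1 + z, b1 + n2 - z) >= threshold
                if conclusive:
                    prob += betabinom.pmf(z, n2, a1, b1)
            return prob

        print(round(predictive_success(y=8, n1=20, n2=30), 3))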

  4. Salts-based size-selective precipitation: toward mass precipitation of aqueous nanoparticles.

    Science.gov (United States)

    Wang, Chun-Lei; Fang, Min; Xu, Shu-Hong; Cui, Yi-Ping

    2010-01-19

    Purification is a necessary step before the application of nanocrystals (NCs), since excess matter in a nanoparticle solution is usually disadvantageous for their subsequent coupling or assembly with other materials. In this work, a novel salts-based precipitation technique is developed for the precipitation and size-selective precipitation of aqueous NCs. Simply by the addition of salts, NCs can be precipitated from solution; after decantation of the supernatant, the precipitates can be dispersed in water again. By adjusting the amount of salt added, size-selective precipitation of aqueous NCs can be achieved: NCs with large sizes are precipitated preferentially, leaving small NCs in solution. Compared with the traditional nonsolvent-based precipitation technique, the current one is simpler and more rapid because it avoids the condensation and heating steps used in the traditional precipitation process. Moreover, the salts-based precipitation technique is generally applicable to the precipitation of aqueous nanoparticles, whether semiconductor NCs or metal nanoparticles. At the same time, the cost of the current method is much lower than that of the traditional nonsolvent-based precipitation technique, making it applicable for the mass purification of aqueous NCs.

  5. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging with constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump that generates negative pressure (i.e. acts as a vacuum pump), creating a steady flow by drawing air from the receiving conical flask and transferring plankton from the sample container through the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulty of applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, since the FlowCAM can image only a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e. bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with syringe pumps and Field of View (FOV) flowcells which can image all particles passing through the flow field, these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis with conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM imaging system after ground truthing.

  6. Sample size matters in dietary gene expression studies—A case study in the gilthead sea bream (Sparus aurata L.

    Directory of Open Access Journals (Sweden)

    Fotini Kokou

    2016-05-01

    One of the main concerns in gene expression studies is the calculation of statistical significance, which in most cases remains low due to limited sample size. Increasing the number of biological replicates translates into more effective gains in power, which is of great importance especially in nutritional experiments, where individual variation in growth performance parameters and feed conversion is high. The present study investigates this in the gilthead sea bream Sparus aurata, one of the most important Mediterranean aquaculture species. For 24 gilthead sea bream individuals (biological replicates), the effects of gradual substitution of fish meal by plant ingredients (0% (control), 25%, 50% and 75%) in the diets were studied by looking at expression levels of four immune- and stress-related genes in intestine, head kidney and liver. The present results showed that only the lowest substitution percentage is tolerated and that liver is the most sensitive tissue for detecting gene expression variations in relation to fish meal-substituted diets. Additionally, the use of three independent biological replicates was evaluated by calculating the averages of all possible triplets, in order to assess the suitability of the selected genes as stress indicators as well as the impact of the experimental setup, here the impact of fish meal (FM) substitution. Gene expression differed depending on the biological triplicate selected. Only for two genes in liver (hsp70 and tgf) was significant differential expression assured independently of the triplicates used. These results underline the importance of choosing an adequate sample number, especially when significant but minor differences in gene expression levels are observed. Keywords: Sample size, Gene expression, Fish meal replacement, Immune response, Gilthead sea bream
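
    The triplet-averaging check is easy to reproduce; a minimal sketch with hypothetical expression values, enumerating every possible choice of three replicates:

        # Hedged sketch: sensitivity of a gene's mean expression to which
        # 3 biological replicates happen to be chosen (cf. the triplet
        # analysis described above). Values are hypothetical fold changes.
        from itertools import combinations
        from statistics import mean

        expression = [1.2, 0.8, 2.1, 1.5, 0.9, 1.8]
        triplet_means = [mean(t) for t in combinations(expression, 3)]
        print(f"{len(triplet_means)} possible triplets, means range "
              f"{min(triplet_means):.2f}-{max(triplet_means):.2f}")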

  7. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling". Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.

  8. Storage Tanks - Selection Of Type, Design Code And Tank Sizing

    International Nuclear Information System (INIS)

    Shatla, M.N; El Hady, M.

    2004-01-01

    The present work gives an insight into the proper selection of type, design code and sizing of storage tanks used in the petroleum and process industries. In this work, storage tanks are classified based on their design conditions. Suitable design codes and their limitations are discussed for each tank type. The option of storage under high pressure and ambient temperature, in spherical and cigar tanks, is compared to the option of storage under low temperature and slight pressure (close to ambient) in low temperature and cryogenic tanks. The discussion is extended to the types of low temperature and cryogenic tanks, and recommendations are given for selecting their types. A study of pressurized tanks designed according to the ASME code, conducted in the present work, reveals that tanks designed according to ASME Section VIII DIV 2 provide cost savings over tanks designed according to ASME Section VIII DIV 1. The present work is extended to discuss the parameters that affect the sizing of flat-bottom cylindrical tanks. The analysis shows the effect of the height-to-diameter ratio on tank instability and foundation loads.

  9. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
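
    For the quantitative-data case, the required sample size for a one-factor, k-level design can be obtained from standard one-way ANOVA power routines; a sketch using statsmodels, with an illustrative medium effect size:

        # Hedged sketch: total sample size for a one-factor design with
        # k = 3 levels, alpha = 0.05, power = 0.80, Cohen's f = 0.25.
        from statsmodels.stats.power import FTestAnovaPower

        n_total = FTestAnovaPower().solve_power(
            effect_size=0.25, k_groups=3, alpha=0.05, power=0.80)
        print(round(n_total))  # ~158 observations in total (~53 per level)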

  10. Enhanced Size Selection in Two-Photon Excitation for CsPbBr3 Perovskite Nanocrystals.

    Science.gov (United States)

    Chen, Junsheng; Chábera, Pavel; Pascher, Torbjörn; Messing, Maria E; Schaller, Richard; Canton, Sophie; Zheng, Kaibo; Pullerits, Tõnu

    2017-10-19

    Cesium lead bromide (CsPbBr3) perovskite nanocrystals (NCs), with a large two-photon absorption (TPA) cross-section and bright photoluminescence (PL), have been demonstrated as a stable two-photon-pumped lasing medium. With two-photon excitation, a red-shifted PL spectrum and an increased PL lifetime are observed compared with one-photon excitation. We have investigated the origin of this difference using time-resolved laser spectroscopies and ascribe it to the enhanced size selection of NCs by two-photon excitation. Because of the inherent nonlinearity, the size dependence of the absorption cross-section under TPA is stronger. Consequently, larger NCs are preferentially excited, leading to a longer excited-state lifetime and red-shifted PL emission. In a broad view, the enhanced size selection in two-photon excitation of CsPbBr3 NCs is likely a general feature of perovskite NCs and can be tuned via the NC size distribution to influence their performance within NC-based nonlinear optical materials and devices.

  11. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in
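
    The paper's exact definition of the SDC is not reproduced in the abstract; the sketch below only illustrates the underlying small-sample setting (repeatedly sampling a dozen neurons and computing within-sample degree statistics), not the authors' statistic itself:

        # Hedged sketch: within-sample degrees from repeated small samples
        # of a directed network (illustrative only; not the paper's SDC).
        import numpy as np

        rng = np.random.default_rng(2)
        N, p, group, trials = 2000, 0.1, 12, 500
        A = rng.random((N, N)) < p           # Erdos-Renyi directed graph
        ins, outs = [], []
        for _ in range(trials):
            idx = rng.choice(N, size=group, replace=False)
            sub = A[np.ix_(idx, idx)]
            ins.extend(sub.sum(axis=0))      # in-degree within sample
            outs.extend(sub.sum(axis=1))     # out-degree within sample
        print(round(np.corrcoef(ins, outs)[0, 1], 3))  # ~0 for ER graphs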

  12. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique in an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size are compared between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector sampling method after a dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements, and the two sampling methods show good agreement with the SMPS measurements. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism, reducing the effect of coagulation. However, if the flame temperature is not high enough to sinter the aggregated particles, then coagulation is the dominant particle growth mechanism. Under certain flame conditions, secondary particle formation is observed, which results in a bimodal particle size distribution.

  13. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty, which is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly set by a binomial approximation without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori, and if the a priori accuracy diverges from the estimated accuracy, the sampling error will deviate from the expected error. Sizing based on a pilot sample, the theoretically correct procedure, is justified when no accuracy estimate is available for the study area, depending on the intended use of the remote sensing product.
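
    The binomial sizing referred to above follows the standard normal-approximation formula for a proportion; a minimal sketch, assuming a 95% confidence level:

        # Hedged sketch: reference sample size when the map accuracy p is
        # fixed a priori rather than estimated from a pilot sample.
        from math import ceil

        def reference_sample_size(p=0.85, half_width=0.05, z=1.96):
            return ceil(z**2 * p * (1 - p) / half_width**2)

        print(reference_sample_size())  # -> 196 reference samples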

  14. Assessment of Competence in EVAR Stent Graft Sizing and Selection

    DEFF Research Database (Denmark)

    Strøm, M; Lönn, L.; Bech, B.

    2017-01-01

    Objectives and background: The aims of this study were to develop a test of competence in endovascular aortic repair (EVAR) stent graft sizing and selection; to examine the test for evidence of validity; and to explore the experience required for the task. Methods: The test was developed based ... measurements, the Mann-Whitney U test could discriminate between experts and novices (p = .002), between experts and intermediates (p = .010), and between novices and intermediates (p = .036). In stent selection the experts performed significantly better than both the novices and the intermediates (p = .002 and p ... of competence in vessel analysis and stent graft selection for endovascular aortic repair. This was supported by strong validity evidence with good internal consistency and discriminatory ability. The tool may be used to facilitate training and certification of future endovascular specialists.

  15. Data Quality Objectives For Selecting Waste Samples For Bench-Scale Reformer Treatability Studies

    International Nuclear Information System (INIS)

    Banning, D.L.

    2011-01-01

    This document describes the data quality objectives used to select archived samples located at the 222-S Laboratory for bench-scale reforming testing. The type, quantity, and quality of the data required to select the samples for fluidized bed steam reformer testing are discussed. In order to maximize efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining this adequacy is to select archived waste samples from the 222-S Laboratory for use in bench-scale tests. Analyses of the selected samples will be required to confirm that the samples meet the shipping requirements and for comparison to the bench-scale reformer (BSR) test sample selection requirements.

  16. Correlations Between Life-Detection Techniques and Implications for Sampling Site Selection in Planetary Analog Missions

    Science.gov (United States)

    Gentry, Diana M.; Amador, Elena S.; Cable, Morgan L.; Chaudry, Nosheen; Cullen, Thomas; Jacobsen, Malene B.; Murukesan, Gayathri; Schwieterman, Edward W.; Stevens, Adam H.; Stockton, Amanda; Tan, George; Yin, Chang; Cullen, David C.; Geppert, Wolf

    2017-10-01

    We conducted an analog sampling expedition under simulated mission constraints to areas dominated by basaltic tephra of the Eldfell and Fimmvörðuháls lava fields (Iceland). Sites were selected to be "homogeneous" at a coarse remote sensing resolution (10-100 m) in apparent color, morphology, moisture, and grain size, with best-effort realism in numbers of locations and replicates. Three different biomarker assays (counting of nucleic-acid-stained cells via fluorescent microscopy, a luciferin/luciferase assay for adenosine triphosphate, and quantitative polymerase chain reaction (qPCR) to detect DNA associated with bacteria, archaea, and fungi) were characterized at four nested spatial scales (1 m, 10 m, 100 m, and >1 km) by using five common metrics for sample site representativeness (sample mean variance, group F tests, pairwise t tests, and the distribution-free rank sum H and u tests). Correlations between all assays were characterized with Spearman's rank test. The bioluminescence assay showed the most variance across the sites, followed by qPCR for bacterial and archaeal DNA; these results could not be considered representative at the finest resolution tested (1 m). Cell concentration and fungal DNA also had significant local variation, but they were homogeneous over scales of >1 km. These results show that the selection of life detection assays and the number, distribution, and location of sampling sites in a low biomass environment with limited a priori characterization can yield both contrasting and complementary results, and that their interdependence must be given due consideration to maximize science return in future biomarker sampling expeditions.
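
    The representativeness statistics used here are all standard; a sketch of how they can be computed for hypothetical assay data across sampling locations:

        # Hedged sketch: location-representativeness checks for one assay
        # (Kruskal-Wallis H, pairwise Mann-Whitney u) plus a cross-assay
        # Spearman rank correlation, using made-up lognormal data.
        import numpy as np
        from scipy.stats import kruskal, mannwhitneyu, spearmanr

        rng = np.random.default_rng(3)
        loc_a, loc_b, loc_c = (rng.lognormal(m, 0.5, size=5)
                               for m in (1.0, 1.2, 2.0))

        print(kruskal(loc_a, loc_b, loc_c))   # H test across locations
        print(mannwhitneyu(loc_a, loc_c))     # pairwise u test

        atp = rng.lognormal(1.0, 0.5, size=15)        # assay 1, 15 samples
        cells = atp * rng.lognormal(0, 0.2, size=15)  # assay 2, correlated
        print(spearmanr(atp, cells))          # cross-assay correlation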

  17. Body size, swimming speed, or thermal sensitivity? Predator-imposed selection on amphibian larvae

    Czech Academy of Sciences Publication Activity Database

    Gvoždík, Lumír; Smolinský, Radovan

    2015-01-01

    Vol. 15, No. 1 (2015), Article No. 238. ISSN 1471-2148. R&D Projects: GA ČR GAP506/10/2170; GA ČR(CZ) GA15-07140S. Institutional support: RVO:68081766. Keywords: Antipredator strategies * Ichthyosaura * Newts * Performance-fitness * Predator–prey interaction * Predator–prey size ratio * Selection differential * Selection experiment * Viability selection. Subject RIV: EG - Zoology. Impact factor: 3.406, year: 2015

  18. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    Science.gov (United States)

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the

  19. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size

    Science.gov (United States)

    Andersen, Lau Møller; Blicher, Jakob Udby

    2017-01-01

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the

  20. Selection on male size, leg length and condition during mate search in a sexually highly dimorphic orb-weaving spider.

    Science.gov (United States)

    Foellmer, Matthias W; Fairbairn, Daphne J

    2005-02-01

    Mate search plays a central role in hypotheses for the adaptive significance of extreme female-biased sexual size dimorphism (SSD) in animals. Spiders (Araneae) are the only free-living terrestrial taxon where extreme SSD is common. The "gravity hypothesis" states that small body size in males is favoured during mate search in species where males have to climb to reach females, because body length is inversely proportional to achievable speed on vertical structures. However, locomotive performance of males may also depend on relative leg length. Here we examine selection on male body size and leg length during mate search in the highly dimorphic orb-weaving spider Argiope aurantia, using a multivariate approach to distinguish selection targeted at different components of size. Further, we investigate the scaling relationships between male size and energy reserves, and the differential loss of reserves. Adult males do not feed while roving, and a size-dependent differential energy storage capacity may thus affect male performance during mate search. Contrary to predictions, large body size was favoured in one of two populations, and this was due to selection for longer legs. Male size was not under selection in the second population, but we detected direct selection for longer third legs. Males lost energy reserves during mate search, but this was independent of male size and storage capacity scaled isometrically with size. Thus, mate search is unlikely to lead to selection for small male size, but the hypothesis that relatively longer legs in male spiders reflect a search-adapted morphology is supported.

  1. Robust online tracking via adaptive samples selection with saliency detection

    Science.gov (United States)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: the first is how to select exactly labeled samples even when the target locations are inaccurate, and the second is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers with misaligned samples, we introduce a saliency detection method into our tracking problem: saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a reasonable selection criterion in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
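
    The spectral-residual step is compact enough to sketch; the classic formulation is assumed here, since the abstract names only "image spectral residual analysis". A grayscale image goes in and a saliency map comes out:

        # Hedged sketch: spectral-residual saliency. The "residual" is the
        # log-amplitude spectrum minus its local average; recombining it
        # with the original phase highlights salient regions.
        import numpy as np
        from scipy.ndimage import gaussian_filter, uniform_filter

        def spectral_residual_saliency(gray):
            f = np.fft.fft2(gray)
            log_amp = np.log1p(np.abs(f))     # log1p avoids log(0)
            phase = np.angle(f)
            residual = log_amp - uniform_filter(log_amp, size=3)
            sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            return gaussian_filter(sal, sigma=2.5)

        rng = np.random.default_rng(4)
        img = rng.random((64, 64))
        img[20:30, 20:30] += 2.0              # insert a salient block
        sal = spectral_residual_saliency(img)
        print(np.unravel_index(sal.argmax(), sal.shape))  # near the block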

  2. Differences in productive robustness in rabbits selected for reproductive longevity or litter size

    DEFF Research Database (Denmark)

    Theilgaard, R; Baselga, M; Blas, E

    2009-01-01

    The aim of this work was to evaluate the ability of a line selected for reproductive longevity (LP) to confront productive challenges compared to a line selected during 31 generations for litter size at weaning (V). A total of 133 reproductive rabbit does were used (72 and 61 from LP and V lines,...

  3. Preliminarily study on the maximum handling size, prey size and species selectivity of growth hormone transgenic and non-transgenic common carp Cyprinus carpio when foraging on gastropods

    Science.gov (United States)

    Zhu, Tingbing; Zhang, Lihong; Zhang, Tanglin; Wang, Yaping; Hu, Wei; Olsen, Rolf Eric; Zhu, Zuoyan

    2017-10-01

    The present study preliminarily examined differences in the maximum handling size, prey size and species selectivity of growth hormone transgenic and non-transgenic common carp Cyprinus carpio when foraging on four gastropod species (Bellamya aeruginosa, Radix auricularia, Parafossarulus sinensis and Alocinma longicornis) under laboratory conditions. In the maximum handling size trial, five fish from each age group (1-year-old and 2-year-old) and each genotype (transgenic and non-transgenic) were individually allowed to feed on B. aeruginosa over a wide range of shell heights. Maximum handling size increased linearly with fish length, and there was no significant difference in maximum handling size between the two genotypes. In the size selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on three size groups of B. aeruginosa; both genotypes favored the small-sized group over the large-sized group. In the species selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on thick-shelled B. aeruginosa and thin-shelled R. auricularia, and five pairs were individually allowed to feed on two gastropod species (P. sinensis and A. longicornis) of similar size and shell strength. Both genotypes preferred thin-shelled R. auricularia over thick-shelled B. aeruginosa, but there was no significant difference in selectivity between the two genotypes when fed P. sinensis and A. longicornis. The present study indicates that transgenic and non-transgenic C. carpio show similar selectivity when preying on size- and species-limited gastropods. While this information may be useful for assessing the environmental risk of transgenic carp, it does not necessarily demonstrate that transgenic common carp might

  4. Women's Preferences for Penis Size: A New Research Method Using Selection among 3D Models.

    Science.gov (United States)

    Prause, Nicole; Park, Jaymie; Leung, Shannon; Miller, Geoffrey

    2015-01-01

    Women's preferences for penis size may affect men's comfort with their own bodies and may have implications for sexual health. Studies of women's penis size preferences typically have relied on their abstract ratings or selecting amongst 2D, flaccid images. This study used haptic stimuli to allow assessment of women's size recall accuracy for the first time, as well as to examine their preferences for erect penis sizes in different relationship contexts. Women (N = 75) selected amongst 33 3D models. Women recalled model size accurately using this method, although they made more errors with respect to penis length than circumference. Women preferred a penis of slightly larger circumference and length for one-time (length = 6.4 inches/16.3 cm, circumference = 5.0 inches/12.7 cm) versus long-term (length = 6.3 inches/16.0 cm, circumference = 4.8 inches/12.2 cm) sexual partners. These first estimates of erect penis size preferences using 3D models suggest women accurately recall size and prefer penises only slightly larger than average.

  5. Women's Preferences for Penis Size: A New Research Method Using Selection among 3D Models.

    Directory of Open Access Journals (Sweden)

    Nicole Prause

    Women's preferences for penis size may affect men's comfort with their own bodies and may have implications for sexual health. Studies of women's penis size preferences typically have relied on their abstract ratings or selecting amongst 2D, flaccid images. This study used haptic stimuli to allow assessment of women's size recall accuracy for the first time, as well as to examine their preferences for erect penis sizes in different relationship contexts. Women (N = 75) selected amongst 33 3D models. Women recalled model size accurately using this method, although they made more errors with respect to penis length than circumference. Women preferred a penis of slightly larger circumference and length for one-time (length = 6.4 inches/16.3 cm, circumference = 5.0 inches/12.7 cm) versus long-term (length = 6.3 inches/16.0 cm, circumference = 4.8 inches/12.2 cm) sexual partners. These first estimates of erect penis size preferences using 3D models suggest women accurately recall size and prefer penises only slightly larger than average.

  6. Portion, package or tableware size for changing selection and consumption of food, alcohol and tobacco

    Science.gov (United States)

    Hollands, Gareth J; Shemilt, Ian; Marteau, Theresa M; Jebb, Susan A; Lewis, Hannah B; Wei, Yinghui; Higgins, Julian Pt; Ogilvie, David

    2015-01-01

    Background Overeating and harmful alcohol and tobacco use have been linked to the aetiology of various non-communicable diseases, which are among the leading global causes of morbidity and premature mortality. As people are repeatedly exposed to varying sizes and shapes of food, alcohol and tobacco products in environments such as shops, restaurants, bars and homes, this has stimulated public health policy interest in product size and shape as potential targets for intervention. Objectives 1) To assess the effects of interventions involving exposure to different sizes or sets of physical dimensions of a portion, package, individual unit or item of tableware on unregulated selection or consumption of food, alcohol or tobacco products in adults and children. 2) To assess the extent to which these effects may be modified by study, intervention and participant characteristics. Search methods We searched CENTRAL, MEDLINE, EMBASE, PsycINFO, eight other published or grey literature databases, trial registries and key websites up to November 2012, followed by citation searches and contacts with study authors. This original search identified eligible studies published up to July 2013, which are fully incorporated into the review. We conducted an updated search up to 30 January 2015 but further eligible studies are not yet fully incorporated due to their minimal potential to change the conclusions. Selection criteria Randomised controlled trials with between-subjects (parallel-group) or within-subjects (cross-over) designs, conducted in laboratory or field settings, in adults or children. Eligible studies compared at least two groups of participants, each exposed to a different size or shape of a portion of a food (including non-alcoholic beverages), alcohol or tobacco product, its package or individual unit size, or of an item of tableware used to consume it, and included a measure of unregulated selection or consumption of food, alcohol or tobacco. Data collection and

  7. Selection of the Sample for Data-Driven $Z \to \nu\bar{\nu}$ Background Estimation

    CERN Document Server

    Krauss, Martin

    2009-01-01

    The topic of this study was to improve the selection of the sample for data-driven Z → νν̄ background estimation, which is a major background contribution to supersymmetric searches in a no-lepton search mode. The study is based on Z → ℓ+ℓ− samples created with ATLAS simulation software. This method works if two leptons are reconstructed, but with cuts typical for SUSY searches the reconstruction efficiency for electrons and muons is rather low. For this reason an attempt was made to enhance the data sample. Events were therefore considered where only one electron was reconstructed. In this case the invariant mass of the electron and each jet was computed, and the jet best matching the Z boson mass was selected as the unreconstructed electron. In this way the sample can be extended, but it loses significant purity because background events are also selected. To improve this method, other variables would have to be considered which were not available for this study. Applying a similar method to muons using ...

  8. The Macdonald and Savage titrimetric procedure scaled down to 4 mg sized plutonium samples. P. 1

    International Nuclear Information System (INIS)

    Kuvik, V.; Lecouteux, C.; Doubek, N.; Ronesch, K.; Jammet, G.; Bagliano, G.; Deron, S.

    1992-01-01

    The original Macdonald and Savage amperometric method scaled down to milligram-sized plutonium samples was further modified. The electro-chemical process of each redox step and the end-point of the final titration were monitored potentiometrically. The method is designed to determine 4 mg of plutonium dissolved in nitric acid solution. It is suitable for the direct determination of plutonium in non-irradiated fuel with a uranium-to-plutonium ratio of up to 30. The precision and accuracy are ca. 0.05-0.1% (relative standard deviation). Although the procedure is very selective, the following species interfere: vanadyl(IV) and vanadate (almost quantitatively), neptunium (one electron exchange per mole), nitrites, fluorosilicates (milligram amounts yield a slight bias) and iodates. (author). 15 refs.; 8 figs.; 7 tabs

  9. Transgender Population Size in the United States: a Meta-Regression of Population-Based Probability Samples

    Science.gov (United States)

    Sevelius, Jae M.

    2017-01-01

    Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask
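
    The extrapolation step is simple arithmetic on the fitted meta-regression line. A minimal sketch of that step, using the reported slope of 0.026% per survey year; the 2007 baseline below is a hypothetical anchor chosen so that the 2016 prediction reproduces the reported 390 per 100 000:

    ```python
    # Hedged sketch: linear extrapolation of prevalence from a meta-regression.
    # The slope is from the abstract; the baseline is a hypothetical anchor.
    slope_per_year = 0.00026       # +0.026% of the adult population per year
    baseline_2007 = 0.00156        # hypothetical: 156 per 100,000 in 2007

    def prevalence(year):
        """Meta-regression prediction of prevalence for a given survey year."""
        return baseline_2007 + slope_per_year * (year - 2007)

    print(f"{prevalence(2016) * 100_000:.0f} per 100,000")        # -> 390
    us_adults = 250_000_000        # rough count of US adults
    print(f"~{prevalence(2016) * us_adults / 1e6:.2f} million")   # -> ~0.98
    ```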

  10. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... to determine how many languages from each phylum should be selected, given any required sample size.

  11. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
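
    The mechanics of AS are compact enough to sketch: failure probabilities too small for plain Monte Carlo are made observable by widening the input distribution by a factor 1/f (f < 1), and the safety index is then extrapolated back to f = 1. A minimal sketch, assuming Bucher's regression form β(f) ≈ A·f + B/f and a hypothetical linear limit-state function; neither the function nor the support points are taken from the article:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def g(u):
        # Hypothetical limit state in standard-normal space; failure is
        # the event g(u) < 0 and the exact safety index is beta = 4.
        return 4.0 - u.sum(axis=1) / np.sqrt(u.shape[1])

    def beta_hat(f, n=200_000, dim=2):
        # Widen the inputs by 1/f so failures become frequent enough for
        # crude Monte Carlo, then convert the failure rate back to beta.
        u = rng.standard_normal((n, dim)) / f
        return -norm.ppf(np.mean(g(u) < 0.0))

    fs = np.array([0.4, 0.5, 0.6, 0.7])          # support points f < 1
    betas = np.array([beta_hat(f) for f in fs])
    # Fit beta(f) = A*f + B/f and extrapolate to the unscaled case f = 1
    A, B = np.linalg.lstsq(np.column_stack([fs, 1.0 / fs]), betas, rcond=None)[0]
    print(f"extrapolated beta(1) = {A + B:.2f}  (exact: 4.00)")
    ```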

  12. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from where water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis using Ward's method that does not require any stringent distributional assumptions.
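
    The resampling scheme translates directly into code. A minimal sketch, assuming a hypothetical stand-in for the 55 × 22 water-quality matrix (the study's data are not reproduced here) and PCA on the correlation matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical stand-in for the data: 55 stations x 22 correlated variables
    X = rng.standard_normal((55, 22)) @ rng.standard_normal((22, 22))

    def pca_eigvals(data):
        # Eigenvalues of the correlation matrix, sorted in decreasing order
        r = np.corrcoef(data, rowvar=False)
        return np.sort(np.linalg.eigvalsh(r))[::-1]

    for n in (20, 30, 40, 50):            # sample sizes examined in the study
        boots = np.array([
            pca_eigvals(X[rng.integers(0, len(X), size=n)])  # resample rows
            for _ in range(100)           # 100 bootstrap samples per size
        ])
        lead = boots[:, 0]                # first eigenvalue across replicates
        print(f"N={n}: first eigenvalue {lead.mean():.2f} +/- {lead.std():.2f}")
    ```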

  13. Assessment of bone biopsy needles for sample size, specimen quality and ease of use

    International Nuclear Information System (INIS)

    Roberts, C.C.; Liu, P.T.; Morrison, W.B.; Leslie, K.O.; Carrino, J.A.; Lozevski, J.L.

    2005-01-01

    To assess whether there are significant differences in ease of use and quality of samples among several bone biopsy needles currently available. Eight commonly used, commercially available bone biopsy needles of different gauges were evaluated. Each needle was used to obtain five consecutive samples from a lamb lumbar pedicle. Subjective assessment of ease of needle use, ease of sample removal from the needle and sample quality, before and after fixation, was graded on a 5-point scale. The number of attempts necessary to reach a 1 cm depth was recorded. Each biopsy specimen was measured in the gross state and after fixation. The RADI Bonopty 15 g and Kendall Monoject J-type 11 g needles were rated the easiest to use, while the Parallax Core-Assure 11 g and the Bard Ostycut 16 g were rated the most difficult. Parallax Core-Assure and Kendall Monoject needles had the highest quality specimen in the gross state; Cook Elson/Ackerman 14 g and Bard Ostycut 16 g needles yielded the lowest. The MD Tech without Trap-Lok 11 g needle had the highest quality core after fixation, while the Bard Ostycut 16 g had the lowest. There was a significant difference in pre-fixation sample length between needles (P<0.0001), despite acquiring all cores to a standard 1 cm depth. Core length and width decreased by an average of 28% and 42%, respectively, after fixation. Bone biopsy needles vary significantly in performance. Detailed knowledge of the strengths and weaknesses of different needles is important to make an appropriate selection for each individual's practice. (orig.)

  14. Stability of selected volatile breath constituents in Tedlar, Kynar and Flexfilm sampling bags

    Science.gov (United States)

    Mochalski, Paweł; King, Julian; Unterkofler, Karl; Amann, Anton

    2016-01-01

    The stability of 41 selected breath constituents in three types of polymer sampling bags, Tedlar, Kynar, and Flexfilm, was investigated using solid phase microextraction and gas chromatography mass spectrometry. The tested molecular species belong to different chemical classes (hydrocarbons, ketones, aldehydes, aromatics, sulphurs, esters, terpenes, etc.) and exhibit close-to-breath low ppb levels (3–12 ppb) with the exception of isoprene, acetone and acetonitrile (106 ppb, 760 ppb, 42 ppb respectively). Stability tests comprised the background emission of contaminants, recovery from dry samples, recovery from humid samples (RH 80% at 37 °C), influence of the bag’s filling degree, and reusability. Findings yield evidence of the superiority of Tedlar bags over remaining polymers in terms of background emission, species stability (up to 7 days for dry samples), and reusability. Recoveries of species under study suffered from the presence of high amounts of water (losses up to 10%). However, only heavier volatiles, with molecular masses higher than 90, exhibited more pronounced losses (20–40%). The sample size (the degree of bag filling) was found to be one of the most important factors affecting the sample integrity. To sum up, it is recommended to store breath samples in pre-conditioned Tedlar bags up to 6 hours at the maximum possible filling volume. Among the remaining films, Kynar can be considered as an alternative to Tedlar; however, higher losses of compounds should be expected even within the first hours of storage. Due to the high background emission Flexfilm is not suitable for sampling and storage of samples for analyses aiming at volatiles at a low ppb level. PMID:23323261

  15. A statistical rationale for establishing process quality control limits using fixed sample size, for critical current verification of SSC superconducting wire

    International Nuclear Information System (INIS)

    Pollock, D.A.; Brown, G.; Capone, D.W. II; Christopherson, D.; Seuntjens, J.M.; Woltz, J.

    1992-01-01

    This work has demonstrated the statistical concepts behind the XBAR-R method for determining sample limits to verify billet Ic performance and process uniformity. Using a preliminary population estimate for μ and σ from a stable production lot of only 5 billets, we have shown that reasonable sensitivity to systematic process drift and random within-billet variation may be achieved by using per-billet subgroup sizes of moderate proportions. The effects of subgroup size (n) and sampling risk (α and β) on the calculated control limits have been shown to be important factors that need to be carefully considered when selecting the actual number of measurements to be used per billet for each supplier process. Given the present method of testing, in which individual wire samples are ramped to Ic only once, with measurement uncertainty due to repeatability and reproducibility (typically > 1.4%), large subgroups (i.e. >30 per billet) appear to be unnecessary, except as an inspection tool to confirm wire process history for each spool. The introduction of the XBAR-R method or a similar Statistical Quality Control procedure is recommended for use in the superconducting wire production program, particularly when the program transitions from requiring tests for all pieces of wire to sampling each production unit
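
    For readers unfamiliar with the XBAR-R method, the control limits follow from the subgroup means and ranges via tabulated constants. A minimal sketch using the standard chart constants for subgroups of size n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the Ic readings below are hypothetical, not SSC data:

    ```python
    import numpy as np

    A2, D3, D4 = 0.577, 0.0, 2.114   # X-bar/R constants for subgroup size 5

    # Hypothetical critical-current readings (A): 5 billets x 5 wire samples
    ic = np.array([
        [262., 265., 261., 264., 263.],
        [260., 263., 262., 261., 264.],
        [266., 264., 265., 263., 262.],
        [261., 260., 262., 263., 261.],
        [264., 266., 263., 265., 264.],
    ])

    xbar = ic.mean(axis=1)                 # subgroup means, one per billet
    r = ic.max(axis=1) - ic.min(axis=1)    # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()

    print(f"X-bar chart: UCL={xbarbar + A2 * rbar:.2f}  "
          f"CL={xbarbar:.2f}  LCL={xbarbar - A2 * rbar:.2f}")
    print(f"R chart:     UCL={D4 * rbar:.2f}  CL={rbar:.2f}  LCL={D3 * rbar:.2f}")
    ```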

  16. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  17. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  18. Size selectivity of commercial (300 MC) and larger square mesh top ...

    African Journals Online (AJOL)

    In the present study, size selectivity of a commercial (300 MC) and a larger square mesh top panel (LSMTPC) codend for blue whiting (Micromesistius poutassou) were tested on a commercial trawl net in the international waters between Turkey and Greece. Trawling, performed during daylight was carried out at depths ...

  19. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from the sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersive X-ray fluorescence - EDXRF - is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  20. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from the sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersive X-ray fluorescence - EDXRF - is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  1. Morphological impact on the reaction kinetics of size-selected cobalt oxide nanoparticles

    International Nuclear Information System (INIS)

    Bartling, Stephan; Meiwes-Broer, Karl-Heinz; Barke, Ingo; Pohl, Marga-Martina

    2015-01-01

    Apart from large surface areas, low activation energies are essential for efficient reactions, particularly in heterogeneous catalysis. Here, we show that not only the size of nanoparticles but also their detailed morphology can crucially affect reaction kinetics, as demonstrated for mass-selected, soft-landed, and oxidized cobalt clusters in a 6 nm to 18 nm size range. The method of reflection high-energy electron diffraction is extended to the quantitative determination of particle activation energies and is applied to repeated oxidation and reduction cycles on the same particles. We find unexpectedly small activation barriers for the reduction reaction of the largest particles studied, despite generally increasing barriers for growing sizes. We attribute these observations to the interplay of reaction-specific material transport with a size-dependent inner particle morphology

  2. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
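
    The shape of such a procedure — search for the smallest sample size whose average power over the truly differentially expressed genes reaches a target under FDR control — can be sketched without the voom machinery. A minimal sketch with hypothetical z-statistics whose noncentrality grows with √n standing in for the weighted linear model, and Benjamini-Hochberg supplying the FDR step; the authors' actual method is implemented in their ssizeRNA R package:

    ```python
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(9)

    def avg_power(n, m=2000, prop_de=0.10, sims=20, fdr=0.05):
        # Average power over the DE genes at the BH-controlled FDR level
        m_de = int(m * prop_de)
        out = []
        for _ in range(sims):
            ncp = np.zeros(m)
            ncp[:m_de] = 1.2 * np.sqrt(n)    # hypothetical effect scaling
            p = 2.0 * stats.norm.sf(np.abs(rng.normal(ncp, 1.0)))
            reject = multipletests(p, alpha=fdr, method="fdr_bh")[0]
            out.append(reject[:m_de].mean())
        return float(np.mean(out))

    n = 2
    while avg_power(n) < 0.80:               # target: 80% average power
        n += 1
    print("required sample size per group:", n)
    ```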

  3. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated because of the difficulty in attaining reliable estimates. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
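
    The confidence-interval-overlap criterion driving these sample sizes can be illustrated in miniature. A much-simplified sketch, assuming plain binomial survival estimates with normal-approximation intervals in place of the full Burnham joint live-dead model; the survival and decline values are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ci(phi_hat, n, z=1.96):
        # Normal-approximation 95% CI for an estimated survival rate
        se = np.sqrt(phi_hat * (1.0 - phi_hat) / n)
        return phi_hat - z * se, phi_hat + z * se

    def detect_rate(n, phi=0.80, decline=0.10, sims=1000):
        # Fraction of simulations in which the year-1 and year-2 CIs do
        # not overlap, i.e. the survival decline is "detected"
        hits = 0
        for _ in range(sims):
            lo1, hi1 = ci(rng.binomial(n, phi) / n, n)
            lo2, hi2 = ci(rng.binomial(n, phi * (1.0 - decline)) / n, n)
            hits += hi2 < lo1
        return hits / sims

    for n in (1_000, 10_000, 25_000, 50_000):
        print(f"marked per year: {n:6d}  detection rate: {detect_rate(n):.2f}")
    ```

    The gap between such an idealized sketch and the abstract's 50,000 figure is itself instructive: imperfect capture and recovery probabilities in the real joint model inflate the required effort far beyond the perfect-detection case.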

  4. 40 CFR 205.171-2 - Test exhaust system sample selection and preparation.

    Science.gov (United States)

    2010-07-01

    40 CFR § 205.171-2 (Title 40, Protection of Environment, Environmental Protection Agency; revised as of 2010-07-01) - Test exhaust system sample selection and preparation: (a)(1) Exhaust systems...

  5. Selective information sampling

    Directory of Open Access Journals (Sweden)

    Peter A. F. Fraser-Mackenzie

    2009-06-01

    Full Text Available This study investigates the amount and valence of information selected during single item evaluation. One hundred and thirty-five participants evaluated a cell phone by reading hypothetical customer reports. Some participants were first asked to provide a preliminary rating based on a picture of the phone and some technical specifications. The participants who were given the customer reports only after they made a preliminary rating exhibited valence bias in their selection of customer reports. In contrast, the participants that did not make an initial rating sought subsequent information in a more balanced, albeit still selective, manner. The preliminary raters used the least amount of information in their final decision, resulting in faster decision times. The study appears to support the notion that selective exposure is utilized in order to develop cognitive coherence.

  6. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data can easily cost up to 20% of power, depending on the value of the dispersion parameter.

  7. Site-specific fragmentation of polystyrene molecule using size-selected Ar gas cluster ion beam

    International Nuclear Information System (INIS)

    Moritani, Kousuke; Mukai, Gen; Hashinokuchi, Michihiro; Mochiji, Kozo

    2009-01-01

    The secondary ion mass spectrum (SIMS) of a polystyrene thin film was investigated using a size-selected Ar gas cluster ion beam (GCIB). The fragmentation in the SIM spectrum varied with the kinetic energy per atom (E_atom); the E_atom dependence of the secondary ion intensity of the fragment species of polystyrene can be essentially classified into three types based on the relationship between E_atom and the dissociation energy of a specific bonding site in the molecule. These results indicate that adjusting E_atom of a size-selected GCIB may realize site-specific bond breaking within a molecule. (author)

  8. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  9. Ultrathin self-assembled anionic polymer membranes for superfast size-selective separation

    Science.gov (United States)

    Deng, Chao; Zhang, Qiu Gen; Han, Guang Lu; Gong, Yi; Zhu, Ai Mei; Liu, Qing Lin

    2013-10-01

    Nanoporous membranes with superior separation performance have become more crucial with increasing concerns in functional nanomaterials. Here novel ultrahigh permeable nanoporous membranes have been fabricated on macroporous supports by self-assembly of anionic polymer on copper hydroxide nanostrand templates in organic solution. This facile approach has a great potential for the fabrication of ultrathin anionic polymer membranes as a general method. The as-fabricated self-assembled membranes have a mean pore size of 5-12 nm and an adjustable thickness as low as 85 nm. They allow superfast permeation of water, and exhibit excellent size-selective separation properties and good fouling resistance for negatively-charged solutes during filtration. The 85 nm thick membrane has an ultrahigh water flux (3306 l m-2 h-1 bar-1) that is an order of magnitude larger than commercial membranes, and can highly efficiently separate 5 and 15 nm gold nanoparticles from their mixtures. The newly developed nanoporous membranes have a wide application in separation and purification of biomacromolecules and nanoparticles.

  10. Crystallite size variation of TiO2 samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of residence time at a given temperature on the physical properties of TiO2 powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C with a heating ramp of up to 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)

  11. The impact of Nassau grouper size and abundance on scuba diver site selection and MPA economics

    NARCIS (Netherlands)

    Rudd, M.A.; Tupper, M.H.

    2002-01-01

    Since many fisheries are size-selective, the establishment of marine protected areas (MPAs) is expected to increase both the average size and abundance of exploited species, such as the valuable but vulnerable Nassau grouper ( Epinephelus striatus ). Increases in mean size and/or abundance of

  12. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
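
    The two quantities the article centres on can be demonstrated numerically: the sampling distribution of the mean centres on the population mean, and its standard error shrinks as σ/√n, whatever the population's shape. A minimal sketch with a hypothetical skewed population:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.exponential(scale=2.0, size=100_000)   # skewed population

    for n in (5, 30, 100):
        # 10,000 samples of size n; one sample mean per row
        idx = rng.integers(0, population.size, size=(10_000, n))
        means = population[idx].mean(axis=1)
        print(f"n={n:3d}: mean of means={means.mean():.3f}  "
              f"SE={means.std():.3f}  sigma/sqrt(n)={population.std()/np.sqrt(n):.3f}")
    ```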

  13. An improved selective sampling method

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

    The coincidence methods which are currently used for the accurate activity standardisation of radionuclides require dead time and resolving time corrections which tend to become increasingly uncertain as count rates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method using a fast multichannel analyser (50 ns ch^-1) for measuring the count rates. It is, in many ways, more convenient and possibly potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Results of comparisons using our new method and the coincidence method for measuring the activity of 60Co sources yielded agreement within statistical uncertainties. (author)

  14. SDSS-IV MaNGA: faint quenched galaxies - I. Sample selection and evidence for environmental quenching

    Science.gov (United States)

    Penny, Samantha J.; Masters, Karen L.; Weijmans, Anne-Marie; Westfall, Kyle B.; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Falcón-Barroso, Jesús; Law, David; Nichol, Robert C.; Thomas, Daniel; Bizyaev, Dmitry; Brownstein, Joel R.; Freischlad, Gordon; Gaulme, Patrick; Grabowski, Katie; Kinemuchi, Karen; Malanushenko, Elena; Malanushenko, Viktor; Oravetz, Daniel; Roman-Lopes, Alexandre; Pan, Kaike; Simmons, Audrey; Wake, David A.

    2016-11-01

    Using kinematic maps from the Sloan Digital Sky Survey (SDSS) Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, we reveal that the majority of low-mass quenched galaxies exhibit coherent rotation in their stellar kinematics. Our sample includes all 39 quenched low-mass galaxies observed in the first year of MaNGA. The galaxies are selected with Mr > -19.1, stellar masses 10^9 M⊙ < M* < 5 × 10^9 M⊙ and EW(Hα) < 1.9 Å. They lie on the size-magnitude and σ-luminosity relations for previously studied dwarf galaxies. Just six (15 ± 5.7 per cent) are found to have rotation speeds v_e,rot ... (M* > 5 × 10^10 M⊙), supporting the hypothesis that galaxy-galaxy or galaxy-group interactions quench star formation in low-mass galaxies. The local bright galaxy density for our sample is ρproj = 8.2 ± 2.0 Mpc^-2, compared to ρproj = 2.1 ± 0.4 Mpc^-2 for a star-forming comparison sample, confirming that the quenched low-mass galaxies are preferentially found in higher density environments.

  15. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of

  16. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
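
    The replicate-spread convergence criterion is easy to mimic with a toy response in place of the transport code. A minimal sketch, assuming a hypothetical linear k_eff response to the poison radius plus Gaussian noise standing in for the MCNPX computational uncertainty; only the structure of the criterion mirrors the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def keff_model(radius, sigma_code=28e-5):
        # Toy stand-in for an MCNPX run: a smooth response to the burnable
        # poison radius plus "computational" noise (28 pcm = 28e-5)
        return 1.0 - 0.05 * (radius - 1.0) + rng.normal(0.0, sigma_code)

    def propagated_sigma(n):
        # One replicate: sample the uncertain input, run the model, and
        # report the spread of k_eff in pcm as the propagated uncertainty
        radii = rng.normal(1.0, 0.02, size=n)    # hypothetical 1-sigma input
        keffs = np.array([keff_model(r) for r in radii])
        return keffs.std(ddof=1) * 1e5

    reps = np.array([propagated_sigma(93) for _ in range(10)])
    print(f"propagated uncertainty: {reps.mean():.0f} pcm; "
          f"spread over 10 replicates: {reps.std(ddof=1):.1f} pcm")
    # Paper's convergence criterion: replicate spread <= 5 pcm
    ```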

  17. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

  18. Chlorine dioxide is a size-selective antimicrobial agent.

    Directory of Open Access Journals (Sweden)

    Zoltán Noszticzius

    Full Text Available BACKGROUND / AIMS: ClO2, the so-called "ideal biocide", could also be applied as an antiseptic if it was understood why the solution killing microbes rapidly does not cause any harm to humans or to animals. Our aim was to find the source of that selectivity by studying its reaction-diffusion mechanism both theoretically and experimentally. METHODS: ClO2 permeation measurements through protein membranes were performed and the time delay of ClO2 transport due to reaction and diffusion was determined. To calculate ClO2 penetration depths and estimate bacterial killing times, approximate solutions of the reaction-diffusion equation were derived. In these calculations evaporation rates of ClO2 were also measured and taken into account. RESULTS: The rate law of the reaction-diffusion model predicts that the killing time is proportional to the square of the characteristic size (e.g. diameter) of a body; thus, small ones will be killed extremely fast. For example, the killing time for a bacterium is on the order of milliseconds in a 300 ppm ClO2 solution. Thus, a few minutes of contact time (limited by the volatility of ClO2) is quite enough to kill all bacteria, but short enough to keep ClO2 penetration into the living tissues of a greater organism safely below 0.1 mm, minimizing cytotoxic effects when applying it as an antiseptic. Additional properties of ClO2, advantageous for an antiseptic, are also discussed. Most importantly, bacteria are not able to develop resistance against ClO2 as it reacts with biological thiols, which play a vital role in all living organisms. CONCLUSION: Selectivity of ClO2 between humans and bacteria is based not on their different biochemistry, but on their different size. We hope to initiate clinical applications of this promising local antiseptic.
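
    The size-squared rate law is worth working through numerically, since it carries the whole selectivity argument. A minimal sketch, calibrating the lumped constant from the abstract's order-of-magnitude example (a ~1 μm bacterium killed in ~1 ms in a 300 ppm solution); the tissue depths are illustrative:

    ```python
    # Killing/penetration time scales as t = k * d^2 (reaction-diffusion).
    d_bact = 1e-6                  # m, characteristic size of a bacterium
    t_bact = 1e-3                  # s, killing time from the abstract
    k = t_bact / d_bact**2         # s/m^2, lumped reaction-diffusion constant

    for label, d in [("bacterium, 1 um", 1e-6),
                     ("tissue depth, 0.1 mm", 1e-4),
                     ("tissue depth, 1 mm", 1e-3)]:
        print(f"{label:22s} t ~ {k * d * d:.3g} s")
    # ~1e-3 s, ~10 s, ~1000 s: a few minutes of contact kills bacteria
    # outright while penetrating well under a millimetre into tissue.
    ```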

  19. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM...... but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical...... opportunities for elucidating the origins and biogeochemical properties of FDOM...

  20. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample sizes, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.

  1. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of ∑d′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
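
    The invariance prediction is just resource arithmetic: a fixed pool of S noisy samples split over n items, with each item's d′ growing as the square root of its share, keeps ∑d′² constant. A minimal sketch with hypothetical numbers:

    ```python
    import math

    S_total = 100.0   # fixed pool of stimulus samples (the VSTM resource)
    c = 0.25          # hypothetical squared sensitivity gained per sample

    for n in (1, 2, 3, 4):                    # display set size
        d_item = math.sqrt(c * S_total / n)   # per-item sensitivity
        print(f"n={n}: d' per item={d_item:.2f}  sum d'^2={n * d_item**2:.1f}")
    # sum d'^2 stays at 25.0 for every n -- the invariance the plain model
    # predicts. The attention-weighted variant gives one item a share
    # w > 1/n of S_total, reshaping per-item d' and serial-position curves
    # while leaving the total resource unchanged.
    ```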

  2. Mineral Composition of Selected Serbian Propolis Samples

    Directory of Open Access Journals (Sweden)

    Tosic Snezana

    2017-06-01

    Full Text Available The aim of this work was to determine the content of 22 macro- and microelements in ten raw Serbian propolis samples, which differ in geographical and botanical origin as well as in pollutant contents, by atomic emission spectrometry with inductively coupled plasma (ICP-OES). The macroelements were the more abundant: Ca content was the highest, while Na content was the lowest. Among the studied essential trace elements, Fe was the most common element. The levels of toxic elements (Pb, Cd, As and Hg) were also analyzed, since they are possible environmental contaminants that could be transferred into propolis products for human consumption. As and Hg were not detected in any of the analyzed samples, but a high level of Pb (2.0-9.7 mg/kg) was detected, and only selected portions of raw propolis could be used to produce natural medicines and dietary supplements for humans. The obtained results were statistically analyzed, and the examined samples showed a wide range of element contents.

  3. Optimal sample size for predicting viability of cabbage and radish seeds based on near infrared spectra of single seeds

    DEFF Research Database (Denmark)

    Shetty, Nisha; Min, Tai-Gi; Gislum, René

    2011-01-01

    The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage...... using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important...

  4. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
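
    The power analysis rests on the standard two-sample formula n = 2σ²(z₁₋α/₂ + z_power)²/Δ². A minimal sketch with hypothetical atrophy-rate numbers; the abstract's 39 and 95 come from the actual TBM change measures, not from these stand-ins:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_arm(sd, delta, alpha=0.05, power=0.80):
        # n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2
        z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
        return ceil(2.0 * (sd * z / delta) ** 2)

    # Hypothetical: mean annual atrophy rate 1.2%/yr with SD 0.8%/yr, and the
    # trial aims to detect a 25% slowing of that rate.
    mean_rate, sd_rate = 1.2, 0.8
    print("n per arm:", n_per_arm(sd_rate, 0.25 * mean_rate))   # -> 112
    ```

    The smaller a measure's variability relative to the expected change, the smaller n becomes, which is exactly the property on which the TBM pipeline is being evaluated.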

  5. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
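
    The interaction between sample size and goodness-of-fit testing noted here is easy to reproduce: mild departures from normality pass in small samples and are rejected in large ones. A minimal sketch with a hypothetical, slightly skewed population:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Slightly skewed: normal plus a small exponential admixture
    population = rng.normal(0.0, 1.0, 500_000) + 0.5 * rng.exponential(1.0, 500_000)

    for n in (100, 500, 2000, 5000):
        sample = rng.choice(population, size=n, replace=False)
        p = stats.shapiro(sample).pvalue
        print(f"n={n:5d}: Shapiro-Wilk p={p:.4f}")
    # Larger n -> more power -> rejection even for near-normal data, which
    # is why graphical checks are recommended above for n > 1000.
    ```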

  6. Portion, package or tableware size for changing selection and consumption of food, alcohol and tobacco.

    Science.gov (United States)

    Hollands, Gareth J; Shemilt, Ian; Marteau, Theresa M; Jebb, Susan A; Lewis, Hannah B; Wei, Yinghui; Higgins, Julian P T; Ogilvie, David

    2015-09-14

    Overeating and harmful alcohol and tobacco use have been linked to the aetiology of various non-communicable diseases, which are among the leading global causes of morbidity and premature mortality. As people are repeatedly exposed to varying sizes and shapes of food, alcohol and tobacco products in environments such as shops, restaurants, bars and homes, this has stimulated public health policy interest in product size and shape as potential targets for intervention. 1) To assess the effects of interventions involving exposure to different sizes or sets of physical dimensions of a portion, package, individual unit or item of tableware on unregulated selection or consumption of food, alcohol or tobacco products in adults and children.2) To assess the extent to which these effects may be modified by study, intervention and participant characteristics. We searched CENTRAL, MEDLINE, EMBASE, PsycINFO, eight other published or grey literature databases, trial registries and key websites up to November 2012, followed by citation searches and contacts with study authors. This original search identified eligible studies published up to July 2013, which are fully incorporated into the review. We conducted an updated search up to 30 January 2015 but further eligible studies are not yet fully incorporated due to their minimal potential to change the conclusions. Randomised controlled trials with between-subjects (parallel-group) or within-subjects (cross-over) designs, conducted in laboratory or field settings, in adults or children. Eligible studies compared at least two groups of participants, each exposed to a different size or shape of a portion of a food (including non-alcoholic beverages), alcohol or tobacco product, its package or individual unit size, or of an item of tableware used to consume it, and included a measure of unregulated selection or consumption of food, alcohol or tobacco. We applied standard Cochrane methods to select eligible studies for inclusion and

  7. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in the place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
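
    Pilot-based power for this setting can also be obtained by direct simulation. A minimal sketch, assuming hypothetical pilot multinomial probabilities over four ordered categories (e.g. EAE severity scores) for three groups; power is the Kruskal-Wallis rejection rate over simulated datasets:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Hypothetical pilot estimates: P(category) for each of three groups
    pilot = [
        [0.40, 0.30, 0.20, 0.10],   # control
        [0.25, 0.30, 0.25, 0.20],   # low dose
        [0.15, 0.25, 0.30, 0.30],   # high dose
    ]
    scores = np.arange(4)           # ordered category scores 0..3

    def power(n_per_group, alpha=0.05, sims=2000):
        hits = 0
        for _ in range(sims):
            groups = [rng.choice(scores, size=n_per_group, p=p) for p in pilot]
            hits += stats.kruskal(*groups).pvalue < alpha
        return hits / sims

    for n in (15, 25, 40):
        print(f"n per group = {n}: simulated power = {power(n):.2f}")
    ```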

  8. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    Full Text Available The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.

  9. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  10. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.

  11. Raman spectroscopic identification of size-selected airborne particles for quantitative exposure assessment

    International Nuclear Information System (INIS)

    Steer, Brian; Gorbunov, Boris; Price, Mark C; Podoleanu, Adrian

    2016-01-01

    In this paper we present a method for the quantification of chemically distinguished airborne particulate matter, required for health risk assessment. Rather than simply detecting chemical compounds in a sample, we demonstrate an approach for the quantification of exposure to airborne particles and nanomaterials. In line with increasing concerns over the proliferation of engineered particles we consider detection of synthetically produced ZnO crystals. A multi-stage approach is presented whereby the particles are first aerodynamically size segregated from a lab-generated single component aerosol in an impaction sampler. These size fractionated samples are subsequently analysed by Raman spectroscopy. Imaging analysis is applied to Raman spatial maps to provide chemically specific quantification of airborne exposure against background which is critical for health risk evaluation of exposure to airborne particles. Here we present a first proof-of-concept study of the methodology utilising particles in the 2–4 μm aerodynamic diameter range to allow for validation of the approach by comparison to optical microscopy. The results show that the combination of these techniques provides independent size and chemical discrimination of particles. Thereby a method is provided to allow quantitative and chemically distinguished measurements of aerosol concentrations separated into exposure relevant size fractions. (paper)

  12. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
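
    PopSizeABC itself relies on coalescent simulations, the folded allele frequency spectrum, and binned linkage disequilibrium; those ingredients are too heavy for a snippet, but the rejection-ABC core can be sketched with a deliberately crude one-statistic toy model. Everything about the toy (the Poisson summary, the prior bounds, the tolerance) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: infer a single constant effective size N from one summary
# statistic (mean diversity, crudely modeled as Poisson with mean ~ N).
def simulate_summary(N, n_loci=500, mu_scale=1e-4):
    return rng.poisson(N * mu_scale, n_loci).mean()

observed = simulate_summary(10_000)          # pseudo-observed data

n_draws, eps = 20_000, 0.05
prior = rng.uniform(1_000, 50_000, n_draws)  # uniform prior on N
sims = np.array([simulate_summary(N) for N in prior])
accepted = prior[np.abs(sims - observed) / observed < eps]

print(f"posterior mean N: {accepted.mean():.0f}, "
      f"95% interval: {np.percentile(accepted, [2.5, 97.5]).round(0)}")
```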

  13. How is Size Related to Profitability? Post-Consolidation Evidence from Selected Banks in Nigeria

    Directory of Open Access Journals (Sweden)

    Funso T. Kolapo

    2016-10-01

    Full Text Available It is theoretically believed that an increase in firm size results in an increase in firm profitability. Therefore, this study examines the relationship between size and profitability of six banks in Nigeria after the 2005 consolidation exercise. The measure of profitability is return on assets. Employing the static panel data regression method, the study found that size has an insignificant negative relationship with bank profitability. This study concludes that the 2005 consolidation exercise did not enhance the profitability of the selected banks.

  14. Effect of a size-selective biomanipulation on nutrient release by gizzard shad in Florida (USA) lakes

    Directory of Open Access Journals (Sweden)

    Schaus M.H.

    2013-11-01

    Full Text Available Although fish removal for biomanipulation is often highly size-selective, our understanding of the nutrient cycling effects of this size selection is poor. To better understand these effects, we measured nutrient excretion by gizzard shad (Dorosoma cepedianum) of differing sizes from four central Florida (USA) lakes and combined these measures with gillnet biomass and size-structure data to compare lake-wide effects among lakes and years. Direct removal of P in fish tissue ranged from 0.16−1.00 kg·P·ha-1·yr-1. The estimated reduction in P excretion due to harvest ranged from 30.8−202.5 g·P·ha-1·month-1, with effects strongly tied to the biomass and size structure harvested. The amount of P release prevented per kg of fish removed was lower in previously unharvested lakes, due to the initial removal of larger fish with lower mass-specific excretion rates. Gill net mesh size affected the size distribution of harvested fish, with smaller fish that excrete more P per gram being more vulnerable to smaller mesh sizes. In Lake Apopka, decreasing the mesh size by 1.3 cm yielded P excretion reductions that were 10.7−15.1% larger. Fish harvesting to reduce internal nutrient cycling can be made most effective by increasing total harvest and by harvesting smaller size classes over multiple years.

  15. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV − I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  16. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Complete text of publication follows. Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August 2003 in Hungary. The sampling was performed at two sites simultaneously: in Budapest (urban site) and K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling with 24-hour duration. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of S, Si, Ca, W, Zn, Pb and Fe were investigated in K-puszta and in Budapest. Average rates (shown in Table 1) of the elemental concentrations were calculated for each stage (in %) from the obtained distributions. The elements can be grouped into two parts on the basis of these data. The majority of the particles containing Fe, Si, Ca and (Ti) lie in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentration in Budapest than in K-puszta (Fig. 1). The second group consisted of S, Pb and (W). The majority of these elements was found in the 0.25-1 μm size range, at much higher concentrations in Budapest than in K-puszta. W was measured only in samples collected in Budapest. Zn showed a uniform distribution in Budapest and does not belong to the above-mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  17. Crystallite size distribution of clay minerals from selected Serbian clay deposits

    Directory of Open Access Journals (Sweden)

    Simić Vladimir

    2006-01-01

    Full Text Available The BWA (Bertaut-Warren-Averbach) technique for the measurement of the mean crystallite thickness and thickness distributions of phyllosilicates was applied to a set of kaolin and bentonite minerals. Six samples of kaolinitic clays, one sample of halloysite, and five bentonite samples from selected Serbian deposits were analyzed. These clays are of sedimentary, volcano-sedimentary (diagenetic), and hydrothermal origin. Two different types of thickness distribution shape were found - lognormal, typical for bentonite and halloysite, and polymodal, typical for kaolinite. The mean crystallite thickness (T_BWA) seems to be influenced by the genetic type of the clay sample.

  18. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of native and ground samples for toxicology studies. Graphs compare the volume distribution with the number distribution for naturally occurring dust, jet-mill-ground dust, and ball-mill-ground dust.

  19. Liquidity Determinants of the Selected Banking Sectors and their Size Groups

    Directory of Open Access Journals (Sweden)

    Jana Laštůvková

    2016-01-01

    Full Text Available The article focuses on the factors affecting the liquidity of selected bank sectors, as well as their size groups, using panel regression analysis. For higher complexity of the results, multiple dependent variables are used: liquidity creation, outflow and net change. The values are calculated based on a specific method of liquidity risk measurement - gross liquidity flows. The results indicate both multiple effects of some factors on the given variables and isolated influences of factors on a single liquidity form or size group. Thus, when looking for determinants using just one form of liquidity, such as creation, the results need not comprehensively reflect the influence of the given factors, and can lead to erroneous conclusions. The results also point to the differing behaviours of the size groups and their different sensitivity to the factors; smaller banks showed higher sensitivity to macroeconomic variables. Higher flexibility in regulation could allow better optimization.

  20. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    Full Text Available We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa, in concert with decreasing replicate variation.

  1. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  2. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
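
    The efficiency comparison the authors describe can be explored with a toy two-stage design. The sketch below uses a pre-specified adaptation rule plus an inverse-normal combination of stage-wise z-statistics with fixed weights (a standard way to keep the type I error controlled under data-dependent second-stage sizes); the stage sizes, adaptation threshold, and effect size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(8)

def two_stage_trial(delta):
    """One simulated trial with a pre-specified sample size adaptation."""
    n1 = 50
    z1 = rng.normal(delta, 1, n1).mean() * np.sqrt(n1)
    n2 = 100 if z1 < 1.0 else 50          # enlarge stage 2 if borderline
    z2 = rng.normal(delta, 1, n2).mean() * np.sqrt(n2)
    # inverse-normal combination with pre-fixed equal weights keeps the
    # type I error controlled despite the data-dependent choice of n2
    return (z1 + z2) / np.sqrt(2)

null = [two_stage_trial(0.00) > 1.645 for _ in range(20_000)]
alt = [two_stage_trial(0.25) > 1.645 for _ in range(20_000)]
print(f"type I error (one-sided alpha 0.05): {np.mean(null):.3f}")
print(f"power at delta = 0.25:               {np.mean(alt):.3f}")
```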

  3. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  4. Combining multiple hypothesis testing and affinity propagation clustering leads to accurate, robust and sample size independent classification on gene expression data

    Directory of Open Access Journals (Sweden)

    Sakellariou Argiris

    2012-10-01

    Full Text Available Abstract Background A feature selection method in microarray gene expression data should be independent of platform, disease and dataset size. Our hypothesis is that among the statistically significant ranked genes in a gene list, there should be clusters of genes that share similar biological functions related to the investigated disease. Thus, instead of keeping N top-ranked genes, it would be more appropriate to define and keep a number of gene cluster exemplars. Results We propose a hybrid FS method (mAP-KL), which combines multiple hypothesis testing and the affinity propagation (AP) clustering algorithm along with the Krzanowski & Lai cluster quality index, to select a small yet informative subset of genes. We applied mAP-KL to real microarray data, as well as to simulated data, and compared its performance against 13 other feature selection approaches. Across a variety of diseases and numbers of samples, mAP-KL presents competitive classification results, particularly in neuromuscular diseases, where its overall AUC score was 0.91. Furthermore, mAP-KL generates concise yet biologically relevant and informative N-gene expression signatures, which can serve as a valuable tool for diagnostic and prognostic purposes, as well as a source of potential disease biomarkers in a broad range of diseases. Conclusions mAP-KL is a data-driven and classifier-independent hybrid feature selection method, which applies to any disease classification problem based on microarray data, regardless of the available samples. Combining multiple hypothesis testing and AP leads to subsets of genes which classify unknown samples from both small and large patient cohorts with high accuracy.
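
    A rough sketch of the two core steps (ranking by a hypothesis test, then clustering the top genes with affinity propagation and keeping exemplars) is given below; the Krzanowski & Lai index step for choosing the cluster count is omitted, and the simulated expression matrix is purely illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(3)

# Toy expression matrix: 200 genes x 40 samples, two classes of 20
X = rng.normal(0, 1, (200, 40))
X[:20, 20:] += 1.5                      # 20 genes differ between classes
y = np.array([0] * 20 + [1] * 20)

# Step 1: rank genes by a two-sample test, keep the top N
_, pvals = stats.ttest_ind(X[:, y == 0], X[:, y == 1], axis=1)
top = np.argsort(pvals)[:50]

# Step 2: cluster the top-ranked genes with affinity propagation and
# keep one exemplar gene per cluster as the signature
ap = AffinityPropagation(random_state=0).fit(X[top])
exemplars = top[ap.cluster_centers_indices_]
print("signature genes (row indices):", exemplars)
```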

  5. 40 CFR 761.247 - Sample site selection for pipe segment removal.

    Science.gov (United States)

    2010-07-01

    ... end of the pipe segment. (3) If the pipe segment is cut with a saw or other mechanical device, take..., take samples from a total of seven segments. (A) Sample the first and last segments removed. (B) Select... total length for purposes of disposal, take samples of each segment that is 1/2 mile distant from the...

  6. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco in spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were only lifted within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable with respect to their vertical grain size distributions and relative particle loads, although the two dust devils differed in their dimensions and intensities. A general trend of decreasing grain size with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  7. Sexual selection accounts for the geographic reversal of sexual size dimorphism in the dung fly, Sepsis punctum (Diptera: Sepsidae).

    Science.gov (United States)

    Puniamoorthy, Nalini; Schäfer, Martin A; Blanckenhorn, Wolf U

    2012-07-01

    Sexual size dimorphism (SSD) varies widely across and within species. The differential equilibrium model of SSD explains dimorphism as the evolutionary outcome of consistent differences in natural and sexual selection between the sexes. Here, we comprehensively examine a unique cross-continental reversal in SSD in the dung fly, Sepsis punctum. Using common garden laboratory experiments, we establish that SSD is male-biased in Europe and female-biased in North America. When estimating sexual (pairing success) and fecundity selection (clutch size of female partner) on males under three operational sex ratios (OSRs), we find that the intensity of sexual selection is significantly stronger in European versus North American populations, increasing with male body size and OSR in the former only. Fecundity selection on female body size also increases strongly with egg number and weakly with egg volume, however, equally on both continents. Finally, viability selection on body size in terms of intrinsic (physiological) adult life span in the laboratory is overall nil and does not vary significantly across all seven populations. Although it is impossible to prove causality, our results confirm the differential equilibrium model of SSD in that differences in sexual selection intensity account for the reversal in SSD in European versus North American populations, presumably mediating the ongoing speciation process in S. punctum. © 2012 The Author(s).

  8. High sintering resistance of size-selected platinum cluster catalysts by suppressed Ostwald ripening

    DEFF Research Database (Denmark)

    Wettergren, Kristina; Schweinberger, Florian F.; Deiana, Davide

    2014-01-01

    on different supports exhibit remarkable intrinsic sintering resistance even under reaction conditions. The observed stability is related to suppression of Ostwald ripening by elimination of its main driving force via size-selection. This study thus constitutes a general blueprint for the rational design...... of sintering resistant catalyst systems and for efficient experimental strategies to determine sintering mechanisms. Moreover, this is the first systematic experimental investigation of sintering processes in nanoparticle systems with an initially perfectly monomodal size distribution under ambient conditions....

  9. Effect of netting direction and number of meshes around on size selection in the codend for Baltic cod (Gadus morhua)

    DEFF Research Database (Denmark)

    Wienbeck, Harald; Herrmann, Bent; Moderhak, Waldemar

    2011-01-01

    We investigated experimentally the effect that turning the netting direction 90° (T90) and halving the number of meshes around in the circumference in a diamond mesh codend had on size selection of Baltic cod. The results generally agreed with predictions of a previous simulation-based study. Both...... modifications had a significant positive effect on the size selection of cod. The best selection results were obtained for a codend in which both factors were applied together. For that codend, very little between-haul variation in cod size selection was detected, especially compared to the reference codend...

  10. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Latest technologies like the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such a massive amount of data that it is getting very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution. Using a fraction of computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because there are many factors involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the 'sufficient sample size', which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
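
    The paper does not spell out its formula in the abstract; a common choice for a 'sufficient sample size' of this kind is Cochran's formula with a finite-population correction, sketched here under that assumption.

```python
import math

def sufficient_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran-style estimate with finite-population correction.

    N: population (dataset) size; z: confidence coefficient;
    p: assumed proportion (0.5 is most conservative); e: margin of error.
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / N))

for N in (10_000, 1_000_000, 100_000_000):
    print(N, sufficient_sample_size(N))
```

    The finite-population correction is what keeps the required sample nearly flat as the dataset grows, which is the property that makes sampling attractive for huge datasets.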

  11. Size-selective sorting in bubble streaming flows: Particle migration on fast time scales

    Science.gov (United States)

    Thameem, Raqeeb; Rallabandi, Bhargav; Hilgenfeldt, Sascha

    2015-11-01

    Steady streaming from ultrasonically driven microbubbles is an increasingly popular technique in microfluidics because such devices are easily manufactured and generate powerful and highly controllable flows. Combining streaming and Poiseuille transport flows allows for passive size-sensitive sorting at particle sizes and selectivities much smaller than the bubble radius. The crucial particle deflection and separation takes place over very small times (milliseconds) and length scales (20-30 microns) and can be rationalized using a simplified geometric mechanism. A quantitative theoretical description is achieved through the application of recent results on three-dimensional streaming flow field contributions. To develop a more fundamental understanding of the particle dynamics, we use high-speed photography of trajectories in polydisperse particle suspensions, recording the particle motion on the time scale of the bubble oscillation. Our data reveal the dependence of particle displacement on driving phase, particle size, oscillatory flow speed, and streaming speed. With this information, the effective repulsive force exerted by the bubble on the particle can be quantified, showing for the first time how fast, selective particle migration is effected in a streaming flow. We acknowledge support by the National Science Foundation under grant number CBET-1236141.

  12. Induced Effects on Red Imported Fire Ant (Hymenoptera: Formicidae) Forager Size Ratios by Pseudacteon spp. (Diptera: Phoridae): Implications on Bait Size Selection.

    Science.gov (United States)

    Reed, J J; Puckett, R T; Gold, R E

    2015-10-01

    Red imported fire ants, Solenopsis invicta Buren, are adversely affected by phorid flies in the genus Pseudacteon, which instigate defensive behaviors in their hosts and in turn reduce the efficiency of S. invicta foraging. Multiple Pseudacteon species have been released in Texas, and research has been focused on the establishment and spread of these introduced biological control agents. Field experiments were conducted to determine bait particle size selection of S. invicta when exposed to phorid populations. Four different particle sizes of two candidate baits were offered to foragers (one provided by a pesticide manufacturer, and a laboratory-created bait). Foragers were selectively attracted to, and removed more of, the 1-1.4-mm particles than any other bait size. The industry-provided bait is primarily made of particles in the 1.4-2.0 mm size range, larger than what was selected by the ants in this study. While foragers preferred the industry-provided blank bait for attraction and resting, S. invicta removed more of the laboratory-created bait from the test vials. There was an abundance of workers with head widths ranging from 0.5-0.75 mm collected from baits. This was dissimilar to a previous study in which phorid flies were not active and in which large workers were collected in higher abundance at the site. This implies that phorid fly activity caused a shift for red imported fire ant colonies to have fewer large foragers. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3.  Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...

  14. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.

  15. Body Size Preference of Marine Animals in Relation to Extinction Selectivity

    Science.gov (United States)

    Sriram, A.; Idgunji, S.; Heim, N. A.; Payne, J.

    2014-12-01

    comparisons showed that mean size decreased across the extinction boundary. This was due to the fact that new originating genera were smaller than the genera that survived. Our results show that there is variability in the relationship between body size and extinction selectivity in various mass extinctions.

  16. Seasonal variation in soil seed bank size and species composition of selected habitat types in Maputaland, South Africa

    Directory of Open Access Journals (Sweden)

    M. J. S. Kellerman

    2007-08-01

    Full Text Available Seasonal variation in seed bank size and species composition of five selected habitat types within the Tembe Elephant Park, South Africa, was investigated. At three-month intervals, soil samples were randomly collected from five different habitat types: (a) Licuati forest; (b) Licuati thicket; (c) a bare or sparsely vegetated zone surrounding the forest edge, referred to as the forest/grassland ecotone; (d) grassland; and (e) open woodland. Most species in the seed bank flora were either grasses, sedges, or forbs, with hardly any evidence of woody species. The Licuati forest and thicket soils produced the lowest seed densities in all seasons. Licuati forest and grassland seed banks showed a two-fold seasonal variation in size, those of the Licuati thicket and woodland a three-fold variation, whereas the forest/grassland ecotone maintained a relatively large seed bank all year round. The woodland seed bank had the highest species richness, whereas the Licuati forest and thicket soils were poor in species. Generally, the greatest correspondence in species composition was found between the Licuati forest and thicket, as well as between the forest/grassland ecotone and grassland seed bank floras.

  17. Size- and charge selectivity of glomerular filtration in Type 1 (insulin-dependent) diabetic patients with and without albuminuria

    DEFF Research Database (Denmark)

    Deckert, T; Kofoed-Enevoldsen, A; Vidal, P

    1993-01-01

    Albuminuria is the first clinical event in the development of diabetic nephropathy. We assessed glomerular charge- and size selectivity in 51 patients with Type 1 (insulin-dependent) diabetes mellitus of juvenile onset and 11 healthy individuals. Patients were allocated to five groups. The urinary... techniques and tubular protein reabsorption by excretion of beta 2-microglobulin. Charge selectivity was estimated from the IgG/IgG4 selectivity index. Size selectivity was measured by dextran clearance. Dextran was measured by refractive index detection after fractionation (2 Å fractions in the range 26... macromolecular pathways in the development of diabetic nephropathy....

  18. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach using the specular reflection approximation. The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. The obtained values are in good agreement with the experimental results reported by Heremans et al.

  19. Hierarchical modeling of cluster size in wildlife surveys

    Science.gov (United States)

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
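
    The cluster size bias described here is easy to reproduce in a few lines: if detection probability grows with cluster size, the sample over-represents large clusters and the naive mean cluster size is biased upward. The Poisson size distribution and the exponential detection curve below are illustrative assumptions, not the model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate cluster-size-biased detection: detection probability grows
# with cluster size, so observed clusters over-represent large groups.
true_sizes = rng.poisson(3, 100_000) + 1      # population of cluster sizes
p_detect = 1 - np.exp(-0.3 * true_sizes)      # illustrative size effect
observed = true_sizes[rng.random(true_sizes.size) < p_detect]

print(f"true mean cluster size:     {true_sizes.mean():.2f}")
print(f"observed mean cluster size: {observed.mean():.2f}")
# A naive abundance estimate built on the observed mean inherits this bias,
# which is what the hierarchical model is designed to correct.
```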

  20. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...

  1. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.

  2. Artificial selection on relative brain size in the guppy reveals costs and benefits of evolving a larger brain.

    Science.gov (United States)

    Kotrschal, Alexander; Rogell, Björn; Bundsen, Andreas; Svensson, Beatrice; Zajitschek, Susanne; Brännström, Ioana; Immler, Simone; Maklakov, Alexei A; Kolm, Niclas

    2013-01-21

    The large variation in brain size that exists in the animal kingdom has been suggested to have evolved through the balance between selective advantages of greater cognitive ability and the prohibitively high energy demands of a larger brain (the "expensive-tissue hypothesis"). Despite over a century of research on the evolution of brain size, empirical support for the trade-off between cognitive ability and energetic costs is based exclusively on correlative evidence, and the theory remains controversial. Here we provide experimental evidence for costs and benefits of increased brain size. We used artificial selection for large and small brain size relative to body size in a live-bearing fish, the guppy (Poecilia reticulata), and found that relative brain size evolved rapidly in response to divergent selection in both sexes. Large-brained females outperformed small-brained females in a numerical learning assay designed to test cognitive ability. Moreover, large-brained lines, especially males, developed smaller guts, as predicted by the expensive-tissue hypothesis, and produced fewer offspring. We propose that the evolution of brain size is mediated by a functional trade-off between increased cognitive ability and reproductive performance and discuss the implications of these findings for vertebrate brain evolution. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems

  4. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  5. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
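
    The nested sampling core shared by these three records can be illustrated on a toy one-dimensional problem. The sketch below replaces the paper's HMC constrained-sampling step with plain rejection sampling from the prior (adequate only for toys) and uses the deterministic prior-volume approximation X_i = exp(-i/N); all tuning numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def loglike(theta):
    """Toy Gaussian log-likelihood (normalized)."""
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

n_live, n_iter = 200, 1500
live = rng.uniform(-10, 10, n_live)          # prior: U(-10, 10)
live_logL = loglike(live)

log_Z = -np.inf
for i in range(1, n_iter + 1):
    worst = np.argmin(live_logL)
    logL_star = live_logL[worst]
    # shell weight from the deterministic prior-volume schedule
    log_w = np.log(np.exp(-(i - 1) / n_live) - np.exp(-i / n_live))
    log_Z = np.logaddexp(log_Z, logL_star + log_w)
    while True:                              # constrained replacement step
        cand = rng.uniform(-10, 10)          # (HMC here in the real method)
        if loglike(cand) > logL_star:
            live[worst] = cand
            live_logL[worst] = loglike(cand)
            break

# add the mass still held by the final live points
log_Z = np.logaddexp(log_Z, np.log(np.exp(live_logL).mean()) - n_iter / n_live)
print(f"log evidence: {log_Z:.3f}  (analytic: {np.log(1 / 20):.3f})")
```

    For this toy the evidence has a known value (the prior width is 20 and the Gaussian likelihood integrates to essentially 1), so the printed estimate can be checked directly.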

  6. Measurement of radioactivity in the environment - Soil - Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples

    International Nuclear Information System (INIS)

    2007-01-01

    This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes inform about selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, diagram of the evolution of the sample characteristics from the sampling site to the laboratory, example of sampling plan for a site divided in three sampling areas, example of a sampling record for a single/composite sample and example for a sample record for a soil profile with soil description. A bibliography is provided

  7. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above-mentioned procedures
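
    The report's generalized procedures account for measurement uncertainty and several defect classes, which a snippet cannot reproduce, but the underlying attribute-sampling logic (how large a sample is needed to detect at least one of D falsified items among N with a given probability) can be sketched; treating this hypergeometric formula as the relevant tester is an assumption.

```python
import math

def detection_probability(N, D, n):
    """P(at least one defect in a simple random sample of n from N items
    of which D are defective): 1 - C(N-D, n) / C(N, n)."""
    if n > N - D:
        return 1.0
    return 1.0 - math.comb(N - D, n) / math.comb(N, n)

def required_sample_size(N, D, beta=0.05):
    """Smallest n whose non-detection probability is at most beta."""
    for n in range(1, N + 1):
        if detection_probability(N, D, n) >= 1 - beta:
            return n
    return N

# e.g., 500 items, of which at least 20 would have to be falsified to
# divert a goal quantity; aim for 95% detection probability
print(required_sample_size(500, 20))
```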

  8. Techniques used by United Kingdom consultant plastic surgeons to select implant size for primary breast augmentation.

    Science.gov (United States)

    Holmes, W J M; Timmons, M J; Kauser, S

    2015-10-01

    Techniques used to estimate implant size for primary breast augmentation have evolved since the 1970s. Currently no consensus exists on the optimal method to select implant size for primary breast augmentation. In 2013 we asked United Kingdom consultant plastic surgeons who were full members of BAPRAS or BAAPS what their technique was for implant size selection for primary aesthetic breast augmentation. We also asked what range of implant sizes they commonly used. The answers to question one were grouped into four categories: experience, measurements, pre-operative external sizers and intra-operative sizers. The response rate was 46% (164/358). Overall, 95% (153/159) of all respondents performed some form of pre-operative assessment; the others relied on "experience" only. The most common technique for pre-operative assessment was external sizers (74%). Measurements were used by 57% of respondents and 3% used intra-operative sizers only. A combination of measurements and sizers was used by 34% of respondents. The most common measurements were breast base (68%), breast tissue compliance (19%), breast height (15%), and chest diameter (9%). The median implant size commonly used in primary breast augmentation was 300 cc. Pre-operative external sizers are the most common technique used by UK consultant plastic surgeons to select implant size for primary breast augmentation. We discuss the above findings in relation to the evolution of pre-operative planning techniques for breast augmentation. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. PeptideManager: A Peptide Selection Tool for Targeted Proteomic Studies Involving Mixed Samples from Different Species

    Directory of Open Access Journals (Sweden)

    Kevin eDemeure

    2014-09-01

    Full Text Available The search for clinically useful protein biomarkers using advanced mass spectrometry approaches represents a major focus in cancer research. However, the direct analysis of human samples may be challenging due to limited availability, the absence of appropriate control samples, or the large background variability observed in patient material. As an alternative approach, human tumors orthotopically implanted into a different species (xenografts) are clinically relevant models that have proven their utility in pre-clinical research. Patient-derived xenografts for glioblastoma have been extensively characterized in our laboratory and have been shown to retain the characteristics of the parental tumor at the phenotypic and genetic level. Such models were also found to adequately mimic the behavior and treatment response of human tumors. The reproducibility of such xenograft models, the possibility to identify their host background and to perform tumor-host interaction studies, are major advantages over the direct analysis of human samples. At the proteome level, the analysis of xenograft samples is challenged by the presence of proteins from two different species which, depending on tumor size, type or location, often appear at variable ratios. Any proteomics approach aimed at quantifying proteins within such samples must consider the identification of species-specific peptides in order to avoid biases introduced by the host proteome. Here, we present an in-house methodology and tool developed to select peptides used as surrogates for protein candidates from a defined proteome (e.g., human) in a host proteome background (e.g., mouse, rat) suited for a mass spectrometry analysis. The tools presented here are applicable to any species-specific proteome, provided a protein database is available. By linking the information from both proteomes, PeptideManager significantly facilitates and expedites the selection of peptides used as surrogates to analyze
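
    The core idea of selecting species-specific surrogate peptides can be sketched as an in-silico tryptic digest followed by a set difference against the host proteome. The two-protein 'proteomes' below are toy assumptions; PeptideManager itself works from full protein databases.

```python
import re

def tryptic_peptides(sequence, min_len=6, max_len=25):
    """In-silico tryptic digest: cleave after K/R except when followed by P."""
    peptides = re.split(r'(?<=[KR])(?!P)', sequence)
    return {p for p in peptides if min_len <= len(p) <= max_len}

# Toy sequences, purely illustrative; P1 is shared verbatim with the host
human_proteins = {"P1": "MKTAYIAKQRQISFVKSHFSR", "P2": "MLSPEEKAALVDRWGK"}
mouse_proteome = {"Q1": "MKTAYIAKQRQISFVKSHFSR"}

mouse_peptides = set().union(
    *(tryptic_peptides(s) for s in mouse_proteome.values()))

# Keep only peptides that never occur in the host digest
for prot, seq in human_proteins.items():
    unique = tryptic_peptides(seq) - mouse_peptides
    print(prot, sorted(unique))
```

    Running this prints an empty list for the shared protein and only host-free peptides for the other, which is exactly the filtering needed before quantifying human proteins in a xenograft background.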

  10. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
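
    The 'random chance' scenario can be reproduced directly: sample sources until every code has been observed at least once, and record the sample size. The 30 codes and their observation probabilities below are hypothetical, chosen only to show how saturation is driven by the rarest codes.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical population: 30 codes with unequal observation probabilities
probs = np.linspace(0.5, 0.02, 30)   # per-source probability of each code
n_sims = 1000

def sources_until_saturation():
    """Sample sources at random until every code has been seen once."""
    seen = np.zeros(len(probs), dtype=bool)
    n = 0
    while not seen.all():
        n += 1
        seen |= rng.random(len(probs)) < probs
    return n

sizes = [sources_until_saturation() for _ in range(n_sims)]
print(f"median: {np.median(sizes):.0f}, "
      f"95th percentile: {np.percentile(sizes, 95):.0f}")
```

    Raising the smallest probability shrinks the required sample size far more than removing codes does, mirroring the paper's finding that saturation depends mainly on the mean probability of observing codes.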

  11. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of the representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the coefficient of variation tended to decrease as the sample size increased, while in the homogeneous gelatin gel it remained constant around 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin gel (smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area to image area and of maximum air-cell height to image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of the air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
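
    The RVE criterion itself (grow the specimen until the coefficient of variation of the apparent property stops changing) is easy to sketch. The size-dependent scatter model below is a made-up stand-in for real replicate measurements.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of the RVE criterion: for each sample size, measure an apparent
# property on 25 replicates and track the coefficient of variation (CV).
# Heterogeneity is mimicked by noise that shrinks with specimen size.
sizes_mm = np.array([5, 10, 20, 30, 45, 60])
n_reps = 25

for L in sizes_mm:
    noise_sd = 0.25 / np.sqrt(L)                 # illustrative scatter model
    youngs = rng.normal(1.0, noise_sd, n_reps)   # apparent modulus (a.u.)
    cv = 100 * youngs.std(ddof=1) / youngs.mean()
    print(f"{L:2d} mm cube: CV = {cv:4.1f}%")
# The RVE is the smallest size at which the CV stops decreasing.
```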

  12. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than that of pure rhenium (5.8 vs. ∼5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, through its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in a solution containing carbon substantially enhanced the precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report.

  13. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970

  14. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    Science.gov (United States)

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  15. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...

  16. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of containers of different size and composition for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.

  17. An integrated approach for determining the size of hardwood group-selection openings

    Science.gov (United States)

    Chris B. LeDoux

    1999-01-01

    The use of group-selection methods is becoming more widespread as landowners and forest managers attempt to respond to public pressure to reduce the size of clearcut blocks. Several studies have shown that harvesting timber in smaller groups or clumps increases the cost of operations for both cable and ground-based logging systems. Recent regeneration studies have...

  18. An Efficient Adaptive Window Size Selection Method for Improving Spectrogram Visualization

    Directory of Open Access Journals (Sweden)

    Shibli Nisar

    2016-01-01

    Short Time Fourier Transform (STFT) is an important technique for the time-frequency analysis of a time-varying signal. The basic approach behind it involves the application of a Fast Fourier Transform (FFT) to a signal multiplied with an appropriate window function of fixed resolution. The selection of an appropriate window size is difficult when no background information about the input signal is known. In this paper, a novel empirical model is proposed that adaptively adjusts the window size for a narrow-band signal using a spectrum sensing technique. For wide-band signals, where a fixed time-frequency resolution is undesirable, the approach adopts the constant Q transform (CQT). Unlike the STFT, the CQT provides a varying time-frequency resolution. This results in a high spectral resolution at low frequencies and a high temporal resolution at high frequencies. In this paper, a simple but effective switching framework is provided between both STFT and CQT. The proposed method also allows for the dynamic construction of a filter bank according to user-defined parameters. This helps in reducing redundant entries in the filter bank. Results obtained from the proposed method not only improve the spectrogram visualization but also reduce the computation cost, and the appropriate window length is selected in 87.71% of cases.

  19. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
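
    The paper's exact bounds accommodate unequal follow-up; for intuition, a simpler textbook-style approximation for equal follow-up and 1:1 allocation can be sketched in Python (the variance model and example numbers are assumptions, not the authors' formula):

        from math import ceil, log
        from scipy.stats import norm

        def nb_sample_size(rate0, rate1, follow_up, dispersion, alpha=0.05, power=0.8):
            """Per-group n for a two-sided test of the rate ratio with negative
            binomial counts; Var(log rate estimate) per subject is approximated
            by 1/(t*rate) + dispersion."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            var = (1 / (follow_up * rate0) + dispersion) \
                + (1 / (follow_up * rate1) + dispersion)
            return ceil(z**2 * var / log(rate1 / rate0)**2)

        # e.g., halving an event rate of 1.2/year over 2 years, dispersion 0.5
        print(nb_sample_size(rate0=1.2, rate1=0.6, follow_up=2.0, dispersion=0.5))  # 37 per group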

  20. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article is about how a sample size that is estimated or calculated to provide the desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
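
    The rule of thumb amounts to adding the expected number of dropouts to the dropout-free sample size, rather than the more common inflation by 1/(1 - d); a short comparison with illustrative numbers:

        from math import ceil

        n_complete = 64       # size giving desired power with no dropouts (illustrative)
        dropout_rate = 0.20

        n_rule_of_thumb = n_complete + ceil(dropout_rate * n_complete)  # 64 + 13 = 77
        n_inflation = ceil(n_complete / (1 - dropout_rate))             # 64 / 0.8 = 80
        print(n_rule_of_thumb, n_inflation)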

  1. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget of a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  2. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.

  3. SEM analysis of particle size during conventional treatment of CMP process wastewater

    International Nuclear Information System (INIS)

    Roth, Gary A.; Neu-Baker, Nicole M.; Brenner, Sara A.

    2015-01-01

    Engineered nanomaterials (ENMs) are currently employed by many industries and have different physical and chemical properties from their bulk counterparts that may confer different toxicity. Nanoparticles used or generated in semiconductor manufacturing have the potential to enter the municipal waste stream via wastewater, and their ultimate fate in the ecosystem is currently unknown. This study investigates the fate of ENMs used in chemical mechanical planarization (CMP), a polishing process repeatedly utilized in semiconductor manufacturing. Wastewater sampling was conducted throughout the wastewater treatment (WWT) process at the fabrication plant's on-site wastewater treatment facility. The goal of this study was to assess whether the WWT processes resulted in size-dependent filtration of particles in the nanoscale regime by analyzing samples using scanning electron microscopy (SEM). Statistical analysis demonstrated no significant differences in particle size between sampling points, indicating low or no selectivity of WWT methods for nanoparticles based on size. All nanoparticles appeared to be of similar morphology (near-spherical), with high variability in particle size. EDX verified nanoparticle compositions of silicon and/or aluminum oxide. Nanoparticle sizing data compared between sampling points, including the final sampling point before discharge from the facility, suggested that nanoparticles could be released to the municipal waste stream from industrial sources. - Highlights: • The discrete treatments of a semiconductor wastewater treatment system were examined. • A sampling scheme and method for analyzing nanoparticles in wastewater was devised. • The wastewater treatment process studied is not size-selective for nanoparticles

  4. Determination of Selected Polycyclic Aromatic Compounds in Particulate Matter Samples with Low Mass Loading: An Approach to Test Method Accuracy

    Directory of Open Access Journals (Sweden)

    Susana García-Alonso

    2017-01-01

    A miniaturized analytical procedure to determine selected polycyclic aromatic compounds (PACs) in low mass loadings (<10 mg) of particulate matter (PM) is evaluated. The proposed method is based on a simple sonication/agitation method using small amounts of solvent for extraction. The use of a reduced sample size of particulate matter often limits the quantification of analytes. This also leads to the need for changing analytical procedures and evaluating their performance. The trueness and precision of the proposed method were tested using ambient air samples. Analytical results from the proposed method were compared with those of pressurized liquid and microwave extractions. Selected PACs (polycyclic aromatic hydrocarbons (PAHs) and nitro polycyclic aromatic hydrocarbons (NPAHs)) were determined by liquid chromatography with fluorescence detection (HPLC/FD). Taking results from pressurized liquid extractions as reference values, recovery rates of the sonication/agitation method were over 80% for the most abundant PAHs. Recovery rates of selected NPAHs were lower; enhanced rates were obtained when methanol was used as a modifier. Intermediate precision was estimated by comparing data from two mathematical approaches: normalized difference data and pooled relative deviations. Intermediate precision was in the range of 10–20%. The effectiveness of the proposed method was evaluated in PM aerosol samples collected with very low mass loadings (<0.2 mg) during characterization studies of turbofan engine exhausts.

  5. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Charge- and Size-Selective Molecular Separation using Ultrathin Cellulose Membranes

    KAUST Repository

    Puspasari, Tiara

    2016-08-30

    To date, it is still a challenge to prepare high-flux and high-selectivity microporous membranes thinner than 20 nm without introducing defects. In this work, we report for the first time the application of cellulose membranes for the selective separation of small molecules. A freestanding cellulose membrane as thin as 10 nm has been prepared through regeneration of trimethylsilyl cellulose (TMSC). The freestanding membrane can be transferred to any desired substrate and shows a normalized flux as high as 700 L m−2 h−1 bar−1 when supported by a porous alumina disc. According to filtration experiments, the membrane exhibits precise size-sieving performance with an estimated pore size between 1.5 and 3.5 nm, depending on the regeneration period and initial TMSC concentration. Perfect discrimination of anionic molecules over neutral species is demonstrated. Moreover, the membrane demonstrates high reproducibility, high scale-up potential, and excellent stability over two months.

  7. Proposal for selecting an ore sample from mining shaft under Kvanefjeld

    International Nuclear Information System (INIS)

    Lund Clausen, F.

    1979-02-01

    Uranium ore recovered from the tunnel under Kvanefjeld (Greenland) will be processed in a pilot plant. Selection of a fully representative ore sample for both the whole area and single local sites is discussed. A FORTRAN program for ore distribution is presented, in order to enable correct sampling. (EG)

  8. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been assessed using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs, which are: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and the bootstrap are used for variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we obtained a moderate variance with a sample size of 467. In the jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turned out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.
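
    For reference, the ratio and regression estimators of a mean with a known auxiliary mean, plus a jackknife variance, can be sketched in a few lines of Python (synthetic data; the variable roles mirror the study's income y and household size x, and the known population mean is assumed):

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.poisson(6, size=200) + 1.0            # household size (synthetic)
        y = 500 * x + rng.normal(0, 800, size=200)    # income (synthetic)
        X_BAR = 7.0                                   # known population mean of x (assumed)

        def ratio_est(y, x):
            return y.mean() / x.mean() * X_BAR

        def regression_est(y, x):
            b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
            return y.mean() + b * (X_BAR - x.mean())

        def jackknife_var(est, y, x):
            n = len(y)
            reps = np.array([est(np.delete(y, i), np.delete(x, i)) for i in range(n)])
            return (n - 1) / n * ((reps - reps.mean())**2).sum()

        for est in (ratio_est, regression_est):
            print(est.__name__, round(est(y, x), 1), round(jackknife_var(est, y, x), 1))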

  9. Effect of selective logging on genetic diversity and gene flow in Cariniana legalis sampled from a cacao agroforestry system.

    Science.gov (United States)

    Leal, J B; Santos, R P; Gaiotto, F A

    2014-01-28

    The fragments of the Atlantic Forest of southern Bahia have a long history of intense logging and selective cutting. Some tree species, such as jequitibá rosa (Cariniana legalis), have experienced a reduction in their populations with respect to both area and density. To evaluate the possible effects of selective logging on genetic diversity, gene flow, and spatial genetic structure, 51 C. legalis individuals were sampled, representing the total remaining population from the cacao agroforestry system. A total of 120 alleles were observed from the 11 microsatellite loci analyzed. The average observed heterozygosity (0.486) was less than the expected heterozygosity (0.721), indicating a loss of genetic diversity in this population. A high fixation index (FIS = 0.325) was found, which is possibly due to a reduction in population size, resulting in increased mating among relatives. The maximum (1055 m) and minimum (0.095 m) distances traveled by pollen or seeds were inferred based on paternity tests. Parents were identified within the sampled population for 36.84% of the sampled seedlings; the progenitors of the remaining seedlings (63.16%) were most likely outside the sampled area. Positive and significant spatial genetic structure was identified in this population in distance classes of 10 to 30 m, with an average coancestry coefficient between pairs of individuals of 0.12. These results suggest that the agroforestry system of cacao cultivation is contributing to maintaining levels of diversity and gene flow in the studied population, thus minimizing the effects of selective logging.

  10. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very-short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique owing to its improved forecasting accuracy, applicability in the case of limited and unstable data situations, and small computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
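
    A minimal Python sketch of the rolling mechanism described above, using an ordinary AR fit (statsmodels) in place of the authors' newly developed AR equation; the window length, AR order, and data are illustrative:

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(7)
        series = np.cumsum(rng.normal(0.5, 1.0, 60))   # small nonstationary series

        window, lags = 20, 2                 # rolling window and AR order (illustrative)
        history = list(series[:window])
        forecasts = []

        for t in range(window, len(series)):
            fit = AutoReg(np.asarray(history), lags=lags, trend="c").fit()
            yhat = fit.forecast(steps=1)[0]  # 1-step-ahead prediction
            forecasts.append(yhat)
            history.append(yhat)             # roll on: add the newest prediction...
            history.pop(0)                   # ...and delete the first value

        rmse = np.sqrt(np.mean((np.array(forecasts) - series[window:])**2))
        print(f"rolling 1-step RMSE: {rmse:.3f}")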

  11. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
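
    The authors provide an R package; the core simulation idea, estimating power as the rejection rate of a permutation pseudo-F over simulated studies, can be sketched from scratch in Python (Euclidean distances on synthetic data; the effect size, dimensions, and permutation counts are illustrative):

        import numpy as np

        rng = np.random.default_rng(3)

        def pseudo_f(d2, groups):
            """PERMANOVA pseudo-F from a matrix of squared distances."""
            n, labels = len(groups), np.unique(groups)
            ss_total = d2[np.triu_indices(n, 1)].sum() / n
            ss_within = 0.0
            for g in labels:
                idx = np.where(groups == g)[0]
                sub = d2[np.ix_(idx, idx)]
                ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
            a = len(labels)
            return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

        def permanova_p(x, groups, n_perm=199):
            d2 = ((x[:, None, :] - x[None, :, :])**2).sum(-1)   # squared Euclidean
            f_obs = pseudo_f(d2, groups)
            f_null = [pseudo_f(d2, rng.permutation(groups)) for _ in range(n_perm)]
            return (1 + sum(f >= f_obs for f in f_null)) / (n_perm + 1)

        n_per_group, shift, n_sim = 15, 0.8, 200     # illustrative parameters
        groups = np.repeat([0, 1], n_per_group)
        hits = 0
        for _ in range(n_sim):
            x = rng.normal(size=(2 * n_per_group, 10))
            x[groups == 1, 0] += shift               # group effect on one coordinate
            hits += permanova_p(x, groups) < 0.05
        print(f"estimated power: {hits / n_sim:.2f}")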

  12. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies, in particular through a systematic assessment of methodological parameters such as optimal plot size. The species-area relationships of macrofungi in tropical and temperate climatic zones and four different land use systems were investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining the optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  13. Channel Islands, Kelp Forest Monitoring, Size and Frequency, Natural Habitat, 1985-2007

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset has measurements of the size of selected animal species at selected locations in the Channel Islands National Park. Sampling is conducted annually...

  14. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to

  15. Effects of state and trait anxiety on selective attention to threatening stimuli in a non-clinical sample of school children

    Directory of Open Access Journals (Sweden)

    Jeniffer Ortega Marín

    2015-01-01

    Attentional biases, consisting of a preferential processing of threatening stimuli, have been found in anxious adults, as predicted by several cognitive models. However, studies with non-clinical samples of children have provided mixed results. Therefore, the aim of this research was to determine the effects of state and trait anxiety on selective attention towards threatening stimuli in a non-clinical sample of school children (age: 8 to 13, n = 110) using the dot-probe task. This study did not reveal an effect of trait anxiety on selective attention towards threatening stimuli. However, a significant difference was found between participants with low state anxiety and high state anxiety, although the effect size was small. Specifically, participants with low state anxiety showed a bias towards threatening stimuli. Overall, the findings of this research with a non-clinical sample of school children suggest that attentional biases towards threatening information, which have been repeatedly found in anxious adults, are not necessarily inherent to non-clinical anxiety in children; the relationship between attentional biases and anxiety in this population might be moderated by other cognitive processes.

  16. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  17. Arrays of Size-Selected Metal Nanoparticles Formed by Cluster Ion Beam Technique

    DEFF Research Database (Denmark)

    Ceynowa, F. A.; Chirumamilla, Manohar; Zenin, Volodymyr

    2018-01-01

    Deposition of size-selected copper and silver nanoparticles (NPs) on polymers using the cluster beam technique is studied. It is shown that the ratio of particle embedment in the film can be controlled by simple thermal annealing. Combining electron beam lithography, cluster beam deposition, and heat... ...with required configurations, which can be applied for wave-guiding, resonators, sensor technologies, and surface-enhanced Raman scattering.

  18. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm used for model standardization (also known as model transfer) in near infrared (NIR) spectroscopy. NIR data from corn samples, analyzed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS and KS-PDS-KPLS are compared in terms of the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions include that SIMPLISMA-KPLS can be utilized as an alternative sample selection method for model transfer. Although it has accuracy similar to Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection procedure. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are interrelated with the concentration (y). It can also be used for simultaneous outlier elimination through validation of the calibration. According to the statistical results for running time, the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
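
    The Kennard-Stone benchmark mentioned above selects samples purely from the spectra (X), with no concentration information; a minimal Python version makes the contrast concrete (data are simulated):

        import numpy as np

        def kennard_stone(X, k):
            """Start from the two most distant samples, then repeatedly add the
            sample whose minimum distance to the chosen set is largest."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            chosen = list(np.unravel_index(np.argmax(d), d.shape))
            while len(chosen) < k:
                remaining = [i for i in range(len(X)) if i not in chosen]
                dist_to_set = d[np.ix_(remaining, chosen)].min(axis=1)
                chosen.append(remaining[int(np.argmax(dist_to_set))])
            return chosen

        rng = np.random.default_rng(5)
        spectra = rng.normal(size=(80, 200))     # 80 simulated spectra, 200 wavelengths
        print(kennard_stone(spectra, 10))        # indices of 10 calibration samples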

  19. Supra-annular structure assessment for self-expanding transcatheter heart valve size selection in patients with bicuspid aortic valve.

    Science.gov (United States)

    Liu, Xianbao; He, Yuxin; Zhu, Qifeng; Gao, Feng; He, Wei; Yu, Lei; Zhou, Qijing; Kong, Minjian; Wang, Jian'an

    2018-04-01

    To explore assessment of the supra-annular structure for self-expanding transcatheter heart valve (THV) size selection in patients with bicuspid aortic stenosis (AS). Annulus-based device selection from CT measurement is the standard sizing strategy for the tricuspid aortic valve before transcatheter aortic valve replacement (TAVR). Because of supra-annular deformity, device selection for bicuspid AS has not been systematically studied. Twelve patients with bicuspid AS who underwent TAVR with self-expanding THVs were included in this study. To assess the supra-annular structure, sequential balloon aortic valvuloplasty was performed in 2 mm increments until the waist sign occurred with less than mild regurgitation. Procedural results and 30-day follow-up outcomes were analyzed. Balloon sizing was performed with an 18 mm balloon in seven patients (58.3%), sequentially with 18 and 20 mm balloons in three patients (25%), and sequentially with 18, 20, and 22 mm balloons in only two patients (16.7%). According to the results of the supra-annular assessment, a smaller device size was selected in all but one patient (91.7%) compared with the annulus-based sizing strategy, and the outcomes were satisfactory, with 100% procedural success. No mortality and one minor stroke were observed at 30-day follow-up. The percentage of NYHA III/IV decreased from 83.3% (9/12) to 16.7% (2/12). No new permanent pacemaker implantation and no moderate or severe paravalvular leakage were found. A supra-annular structure-based sizing strategy is feasible for TAVR in patients with bicuspid AS. © 2018 The Authors Catheterization and Cardiovascular Interventions Published by Wiley Periodicals, Inc.

  20. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.
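
    The "traditional standard power calculation" for demonstrating a sensitivity is the usual one-sample proportion formula; a Python sketch (the target and null sensitivities below are illustrative, not the study's):

        from math import ceil, sqrt
        from scipy.stats import norm

        def n_for_sensitivity(p_target, p_null, alpha=0.05, power=0.8):
            """Diseased cases needed to show sensitivity > p_null when the true
            sensitivity is p_target (one-sided, one-sample normal approximation)."""
            za, zb = norm.ppf(1 - alpha), norm.ppf(power)
            num = za * sqrt(p_null * (1 - p_null)) + zb * sqrt(p_target * (1 - p_target))
            return ceil((num / (p_target - p_null))**2)

        print(n_for_sensitivity(0.90, 0.80))   # e.g., true 90% vs. null 80% -> 83 cases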

  2. Application of porous foams for size-selective measurements of airborne wheat allergen

    NARCIS (Netherlands)

    Bogdanovic, J.; Pater, A.J. de; Doekes, G.; Wouters, I.M.; Heederik, D.J.J.

    2006-01-01

    Background: Exposure to airborne wheat allergen is a well-known cause of bakers' allergy and asthma. Airborne wheat allergen can be measured by enzyme immunoassays (EIAs) in extracts of inhalable dust samples, but only limited knowledge is available on the size distribution of wheat

  3. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5% and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from left-over materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  4. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-07-07

    Recent research has shown that the steep increase in yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation sources and the dislocation pile-up lengths in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors in the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and it can give a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.

  5. Selection Component Analysis of Natural Polymorphisms using Population Samples Including Mother-Offspring Combinations, II

    DEFF Research Database (Denmark)

    Jarmer, Hanne Østergaard; Christiansen, Freddy Bugge

    1981-01-01

    Population samples including mother-offspring combinations provide information on the selection components: zygotic selection, sexual selection, gametic selection and fecundity selection, on the mating pattern, and on the deviation from linkage equilibrium among the loci studied. The theory...

  6. Influence of crystallite size and shape of zeolite ZSM-22 on its activity and selectivity in the catalytic cracking of n-octane

    Energy Technology Data Exchange (ETDEWEB)

    Bager, F.; Ernst, S. [Kaiserslautern Univ. (Germany). Dept. of Chemistry, Chemical Technology

    2013-11-01

    Light olefins belong to the major building blocks for the petrochemical industry, particularly for the production of polymers. It has become necessary to increase the production of light olefins, specifically propene, with so-called 'on-purpose propene' technologies. One possible route is to increase the amount of propene that can be obtained from Fluid Catalytic Cracking (FCC) by optimizing the catalyst through the introduction of new additives which offer a high selectivity to propene. Zeolite ZSM-22 samples with different crystallite sizes and morphologies have been synthesized via hydrothermal syntheses and characterized by powder X-ray diffraction, nitrogen physisorption, atomic absorption spectroscopy, scanning electron microscopy, and solid-state NMR spectroscopy. The zeolites in the Brønsted-acid form have been tested as catalysts in the catalytic cracking of n-octane as a model hydrocarbon. Clear influences of the crystallite size on the deactivation behavior have been observed: larger crystals of zeolite ZSM-22 produce an increased amount of coke deposits, resulting in faster deactivation of the catalyst. The experimental results suggest that there is probably some influence of pore diffusion on the catalytic activity of the ZSM-22 sample with the large crystallite size. However, a noticeable influence on the general product distribution could not be observed. (orig.)

  7. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    Social research that uses small samples for the production of microdata today faces operational difficulties due to the privacy law. The privacy code is an important and necessary law because it guarantees the rights of Italian citizens, as already happens in other countries of the world. However, it does not seem appropriate to limit once more the data production possibilities of the national centres of research, possibilities that are already compromised by insufficient funds, a problem that is becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples useful for activities directly of interest and importance to the citizen, such as data collection carried out on the basis of opinion polls by the research centres of the Italian CNR and some universities.

  8. Data Quality Objectives For Selecting Waste Samples To Test The Fluid Bed Steam Reformer Test

    International Nuclear Information System (INIS)

    Banning, D.L.

    2010-01-01

    This document describes the data quality objectives for selecting archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer (FBSR) testing. The type, quantity, and quality of the data required to select the samples for FBSR testing are discussed. In order to maximize efficiency and minimize the time needed to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer. A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm that the samples meet the testing criteria.

  9. The extended Price equation quantifies species selection on mammalian body size across the Palaeocene/Eocene Thermal Maximum.

    Science.gov (United States)

    Rankin, Brian D; Fox, Jeremy W; Barrón-Ortiz, Christian R; Chew, Amy E; Holroyd, Patricia A; Ludtke, Joshua A; Yang, Xingkai; Theodor, Jessica M

    2015-08-07

    Species selection, covariation of species' traits with their net diversification rates, is an important component of macroevolution. Most studies have relied on indirect evidence for its operation and have not quantified its strength relative to other macroevolutionary forces. We use an extension of the Price equation to quantify the mechanisms of body size macroevolution in mammals from the latest Palaeocene and earliest Eocene of the Bighorn and Clarks Fork Basins of Wyoming. Dwarfing of mammalian taxa across the Palaeocene/Eocene Thermal Maximum (PETM), an intense, brief warming event that occurred at approximately 56 Ma, has been suggested to reflect anagenetic change and the immigration of small-bodied mammals, but might also be attributable to species selection. Using previously reconstructed ancestor-descendant relationships, we partitioned change in mean mammalian body size into three distinct mechanisms: species selection operating on resident mammals, anagenetic change within resident mammalian lineages, and change due to immigrants. The remarkable decrease in mean body size across the warming event occurred through anagenetic change and immigration. Species selection was also strong across the PETM but, intriguingly, favoured larger-bodied species, implying some unknown mechanism(s) by which warming events affect macroevolution. © 2015 The Author(s).
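
    The resident-lineage part of the partition follows the classic two-term Price equation (the paper's extension adds an immigration term, omitted here); a toy Python example with made-up lineages shows the mechanics, in particular that a positive selection term can coexist with a negative anagenetic term:

        import numpy as np

        # five resident lineages crossing the boundary (all values illustrative)
        size_before = np.array([10.0, 12.0, 20.0, 25.0, 40.0])   # mean body size
        size_after  = np.array([ 9.0, 11.5, 18.0, 24.0, 38.0])   # after anagenesis
        descendants = np.array([ 1,    1,    2,    2,    3  ])   # lineages left

        w = descendants / descendants.mean()       # relative "fitness"
        dz = size_after - size_before              # within-lineage change

        selection = np.cov(w, size_before, ddof=0)[0, 1] / w.mean()
        anagenesis = np.mean(w * dz) / w.mean()

        print(f"species selection term: {selection:+.3f}")   # +4.378 (favors larger)
        print(f"anagenetic term:        {anagenesis:+.3f}")  # -1.500 (dwarfing)
        print(f"total change in mean:   {selection + anagenesis:+.3f}")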

  10. Approaches to sampling and case selection in qualitative research: examples in the geography of health.

    Science.gov (United States)

    Curtis, S; Gesler, W; Smith, G; Washburn, S

    2000-04-01

    This paper focuses on the question of sampling (or selection of cases) in qualitative research. Although the literature includes some very useful discussions of qualitative sampling strategies, the question of sampling often seems to receive less attention in methodological discussion than questions of how data are collected or analysed. Decisions about sampling are likely to be important in many qualitative studies (although it may not be an issue in some research). There are varying accounts of the principles applicable to sampling or case selection. Those who espouse 'theoretical sampling', based on a 'grounded theory' approach, are in some ways opposed to those who promote forms of 'purposive sampling' suitable for research informed by an existing body of social theory. Diversity also results from the many different methods for drawing purposive samples which are applicable to qualitative research. We explore the value of a framework suggested by Miles and Huberman [Miles, M., Huberman, A., 1994. Qualitative Data Analysis. Sage, London.] to evaluate the sampling strategies employed in three examples of research by the authors. Our examples comprise three studies which respectively involve selection of: 'healing places'; rural places which incorporated national anti-malarial policies; young male interviewees, identified as either chronically ill or disabled. The examples are used to show how in these three studies the (sometimes conflicting) requirements of the different criteria were resolved, as well as the potential and constraints placed on the research by the selection decisions which were made. We also consider how far the criteria Miles and Huberman suggest seem helpful for planning 'sample' selection in qualitative research.

  11. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Roč. 49, č. 4 (2015), s. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords : filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.953, year: 2015

  12. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  13. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations in small samples. We used a pooled resampling method in a nonparametric bootstrap test that may overcome the problems small samples pose for hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
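
    A minimal Python sketch of the pooled-resampling bootstrap test for two means; both groups are resampled from the pooled data, which embodies the null hypothesis (sample sizes and data are illustrative):

        import numpy as np

        rng = np.random.default_rng(11)

        def pooled_bootstrap_t_test(a, b, n_boot=9999):
            def t_stat(x, y):
                return (x.mean() - y.mean()) / np.sqrt(
                    x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
            t_obs = t_stat(a, b)
            pooled = np.concatenate([a, b])
            t_null = np.empty(n_boot)
            for i in range(n_boot):
                x = rng.choice(pooled, size=len(a), replace=True)
                y = rng.choice(pooled, size=len(b), replace=True)
                t_null[i] = t_stat(x, y)
            return (1 + np.sum(np.abs(t_null) >= abs(t_obs))) / (n_boot + 1)

        # tiny samples with skewed (lognormal) data
        a = rng.lognormal(0.0, 1.0, size=8)
        b = rng.lognormal(0.7, 1.0, size=8)
        print(f"p = {pooled_bootstrap_t_test(a, b):.4f}")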

  14. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    Science.gov (United States)

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, requiring a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
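
    A Gehan-style pairwise score conveys the general idea (this is a sketch of the pairwise-comparison principle under assumed rules, not the authors' exact statistic): in each pair, an event-free patient beats one with an SCE; two event-free patients are compared on the continuous change score.

        import numpy as np

        def pairwise_scores(event, change):
            """event: True if the patient had the SCE; change: continuous
            endpoint, defined (non-NaN) only for event-free patients."""
            n = len(event)
            score = np.zeros(n)
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    if event[i] != event[j]:
                        score[i] += 1 if not event[i] else -1  # event-free wins
                    elif not event[i]:                          # both event-free
                        if change[i] != change[j]:
                            score[i] += 1 if change[i] > change[j] else -1
                    # both had the SCE: tie (0) in this simple sketch
            return score

        rng = np.random.default_rng(2)
        event = rng.random(20) < 0.25
        change = np.where(event, np.nan, rng.normal(5, 2, 20))
        print(pairwise_scores(event, change))  # summing over a treatment group gives a U-statistic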

  15. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed, along with element concentrations, to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  16. Size-selective predation and predator-induced life-history shifts alter the outcome of competition between planktonic grazers

    NARCIS (Netherlands)

    Hülsmann, S.; Rinke, K.; Mooij, W.M.

    2011-01-01

    1. We studied the effect of size-selective predation on the outcome of competition between two differently sized prey species in a homogenous environment. 2. Using a physiologically structured population model, we calculated equilibrium food concentrations for a range of predation scenarios defined

  17. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    Science.gov (United States)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of the real PM size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during PM sample manipulation, as well as the semiquantitative elemental composition of single particles, were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) fractions were examined in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing the sample. Both the lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum, as measured during 24 h.

  18. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  19. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
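
    As a stylized illustration of such a calculation (a generic large-sample Wald approximation, not the paper's exact DIF formulas), the power of a two-sided Wald test of a single coefficient can be computed from its noncentrality and inverted for the sample size:

        from math import sqrt
        from scipy.stats import norm

        def wald_power(beta, unit_info, n, alpha=0.05):
            """Approximate power of a two-sided Wald test of H0: beta = 0.

            unit_info is the per-observation Fisher information for beta,
            so se(beta_hat) ~ 1/sqrt(n * unit_info). Generic sketch only.
            """
            z = norm.ppf(1 - alpha / 2)
            ncp = abs(beta) * sqrt(n * unit_info)  # noncentrality
            return norm.cdf(ncp - z) + norm.cdf(-ncp - z)

        def wald_sample_size(beta, unit_info, power=0.80, alpha=0.05):
            """Smallest n giving at least the target power."""
            n = 10
            while wald_power(beta, unit_info, n, alpha) < power:
                n += 1
            return n

        print(wald_sample_size(beta=0.4, unit_info=0.15))  # hypothetical values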

  20. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  1. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to give insight into computation time issues, and a comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
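
    The r-power of a single-step procedure can also be approximated by straightforward Monte Carlo. The sketch below uses the generalized Bonferroni rule (reject when p_i <= q*alpha/m, which controls the q-generalized FWER) with independent two-arm z-test endpoints; it is a simplified stand-in for the package's analytic formulas.

        import numpy as np
        from scipy.stats import norm

        def r_power_mc(effects, n_per_arm, r, q=1, alpha=0.05,
                       n_sim=20000, seed=0):
            """Monte Carlo r-power for m z-test endpoints (sketch).

            'effects' are standardized mean differences, assumed
            independent for simplicity; rejection uses p_i <= q*alpha/m.
            """
            rng = np.random.default_rng(seed)
            effects = np.asarray(effects, dtype=float)
            m = effects.size
            ncp = effects * np.sqrt(n_per_arm / 2.0)   # two-arm z statistics
            crit = norm.ppf(1 - (q * alpha / m) / 2)   # two-sided threshold
            z = rng.normal(ncp, 1.0, size=(n_sim, m))
            rejected = np.abs(z) >= crit
            false_nulls = effects != 0
            # r-power: P(reject at least r false null hypotheses)
            return (rejected[:, false_nulls].sum(axis=1) >= r).mean()

        print(r_power_mc(effects=[0.4, 0.3, 0.0], n_per_arm=100, r=2))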

  2. Effect of Inoculant Alloy Selection and Particle Size on Efficiency of Isomorphic Inoculation of Ti-Al.

    Science.gov (United States)

    Kennedy, J R; Rouat, B; Daloz, D; Bouzy, E; Zollinger, J

    2018-04-25

    The process of isomorphic inoculation relies on precise selection of inoculant alloys for a given system. Three alloys, Ti-10Al-25Nb, Ti-25Al-10Ta, and Ti-47Ta (at %), were selected as potential isomorphic inoculants for a Ti-46Al alloy. The binary Ti-Ta alloy was found to be ineffective as an inoculant because its large density difference with the melt caused the particles to settle. Both ternary alloys were successfully implemented as isomorphic inoculants that decreased the equiaxed grain size and increased the equiaxed fraction in their ingots. The degree of grain refinement obtained was found to depend on the number of particles introduced to the melt. Moreover, more new grains were formed than particles added to the melt: the grain-per-particle efficiency varied from greater than one to nearly twenty as the particle size increased. This is attributed to the breaking up of particles into smaller particles by dissolution in the melt. For a given particle size, Ti-Al-Ta and Ti-Al-Nb particles were found to have a roughly similar grain-per-particle efficiency.

  3. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
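
    For the continuous-outcome case, the retro-fitting step has a simple closed form under the usual two-sample normal approximation; the sketch below (with hypothetical design values) shows how a misjudged standard deviation translates into real power.

        from math import ceil, sqrt
        from scipy.stats import norm

        def n_per_arm(delta, sd, power=0.80, alpha=0.05):
            """Classical two-sample normal-approximation sample size."""
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            return ceil(2 * (sd * (za + zb) / delta) ** 2)

        def real_power(delta, sd_true, n, alpha=0.05):
            """Power actually achieved when the true SD differs from the
            design assumption (retro-fitting the same formula)."""
            za = norm.ppf(1 - alpha / 2)
            return norm.cdf(delta / (sd_true * sqrt(2.0 / n)) - za)

        n = n_per_arm(delta=5.0, sd=10.0)             # designed for 80% power
        print(n, real_power(5.0, sd_true=12.0, n=n))  # 20% SD underestimate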

  4. New sorbent materials for selective extraction of cocaine and benzoylecgonine from human urine samples.

    Science.gov (United States)

    Bujak, Renata; Gadzała-Kopciuch, Renata; Nowaczyk, Alicja; Raczak-Gutknecht, Joanna; Kordalewska, Marta; Struck-Lewicka, Wiktoria; Waszczuk-Jankowska, Małgorzata; Tomczak, Ewa; Kaliszan, Michał; Buszewski, Bogusław; Markuszewski, Michał J

    2016-02-20

    An increase in cocaine consumption has been observed in Europe during the last decade. Benzoylecgonine, the main urinary metabolite of cocaine in humans, is so far the most reliable marker of cocaine consumption. Determination of cocaine and its metabolite in complex biological samples, such as urine or blood, requires efficient and selective sample pretreatment. In this preliminary study, newly synthesized sorbent materials were proposed for the selective extraction of cocaine and benzoylecgonine from urine samples. Application of these sorbent media allowed cocaine and benzoylecgonine to be determined in urine samples at a concentration level of 100 ng/ml, with good recoveries of 81.7% ± 6.6 and 73.8% ± 4.2, respectively. The newly synthesized materials provided efficient, inexpensive, and selective extraction of both cocaine and benzoylecgonine from urine samples, which may consequently increase the sensitivity of currently available screening diagnostic tests. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Observed Characteristics and Teacher Quality: Impacts of Sample Selection on a Value Added Model

    Science.gov (United States)

    Winters, Marcus A.; Dixon, Bruce L.; Greene, Jay P.

    2012-01-01

    We measure the impact of observed teacher characteristics on student math and reading proficiency using a rich dataset from Florida. We expand upon prior work by accounting directly for nonrandom attrition of teachers from the classroom in a sample selection framework. We find evidence that sample selection is present in the estimation of the…

  6. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  7. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    Reer, B.

    2004-01-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  8. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    Full Text Available Abstract Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as the extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for the localization of specific compartments within the basic space, and the search for the presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as the size of the searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction and numerical, boundary and surface densities. Stratified sampling requires knowledge of the objects (and their features) and evaluates spatial features in relation to

  9. Evolution of floral display in Eichhornia paniculata (Pontederiaceae): direct and correlated responses to selection on flower size and number.

    Science.gov (United States)

    Worley, A C; Barrett, S C

    2000-10-01

    Trade-offs between flower size and number seem likely to influence the evolution of floral display and are an important assumption of several theoretical models. We assessed floral trade-offs by imposing two generations of selection on flower size and number in a greenhouse population of bee-pollinated Eichhornia paniculata. We established a control line and two replicate selection lines of 100 plants each for large flowers (S+), small flowers (S-), and many flowers per inflorescence (N+). We compared realized heritabilities and genetic correlations with estimates based on restricted-maximum-likelihood (REML) analysis of pedigrees. Responses to selection confirmed REML heritability estimates (flower size, h2 = 0.48; daily flower number, h2 = 0.10; total flower number, h2 = 0.23). Differences in nectar, pollen, and ovule production between S+ and S- lines supported an overall divergence in investment per flower. Both realized and REML estimates of the genetic correlation between daily and total flower number were r = 1.0. However, correlated responses to selection were inconsistent in their support of a trade-off. In both S- lines, correlated increases in flower number indicated a genetic correlation of r = -0.6 between flower size and number. In contrast, correlated responses in N+ and S+ lines were not significant, although flower size decreased in one N+ line. In addition, REML estimates of genetic correlations between flower size and number were positive, and did not differ from zero when variation in leaf area and age at first flowering were taken into account. These results likely reflect the combined effects of variation in genes controlling the resources available for flowering and genes with opposing effects on flower size and number. Our results suggest that the short-term evolution of floral display is not necessarily constrained by trade-offs between flower size and number, as is often assumed.

  10. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  11. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
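
    The core NS recursion is compact enough to sketch on a toy problem. The example below computes the evidence for a one-dimensional Gaussian likelihood under a uniform prior, replacing the worst live point by brute-force prior rejection sampling; practical subsurface applications substitute an efficient constrained sampler for that step.

        import numpy as np
        from scipy.special import logsumexp

        rng = np.random.default_rng(0)

        def loglike(theta):  # toy likelihood: Gaussian centred at 0.5
            return (-0.5 * ((theta - 0.5) / 0.05) ** 2
                    - np.log(0.05 * np.sqrt(2 * np.pi)))

        n_live, n_iter = 200, 1200
        live = rng.uniform(0.0, 1.0, n_live)   # uniform prior on [0, 1]
        live_ll = loglike(live)

        log_z = -np.inf
        for i in range(1, n_iter + 1):
            worst = np.argmin(live_ll)
            # prior-volume shrinkage: X_i ~ exp(-i / n_live)
            log_w = np.log(np.exp(-(i - 1) / n_live) - np.exp(-i / n_live))
            log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
            while True:  # draw a prior point above the likelihood constraint
                cand = rng.uniform(0.0, 1.0)
                if loglike(cand) > live_ll[worst]:
                    live[worst], live_ll[worst] = cand, loglike(cand)
                    break

        # contribution of the remaining live points
        log_z = np.logaddexp(log_z,
                             logsumexp(live_ll) - np.log(n_live)
                             - n_iter / n_live)
        print("log evidence ~", log_z)  # ~0 here: the prior contains the peak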

  12. The Impact of Selection, Gene Conversion, and Biased Sampling on the Assessment of Microbial Demography.

    Science.gov (United States)

    Lapierre, Marguerite; Blin, Camille; Lambert, Amaury; Achaz, Guillaume; Rocha, Eduardo P C

    2016-07-01

    Recent studies have linked demographic changes and epidemiological patterns in bacterial populations using coalescent-based approaches. We identified 26 studies using skyline plots and found that 21 inferred overall population expansion. This surprising result led us to analyze the impact of natural selection, recombination (gene conversion), and sampling biases on demographic inference using skyline plots and site frequency spectra (SFS). Forward simulations based on biologically relevant parameters from Escherichia coli populations showed that theoretical arguments on the detrimental impact of recombination, and especially natural selection, on the reconstructed genealogies cannot be ignored in practice. In fact, both processes systematically lead to spurious interpretations of population expansion in skyline plots (and in SFS for selection). Weak purifying selection, and especially positive selection, had important effects on skyline plots, showing patterns akin to those of population expansions. State-of-the-art techniques to remove recombination further amplified these biases. We simulated three common sampling biases in microbiological research: uniform, clustered, and mixed sampling. Alone, or together with recombination and selection, they further mislead demographic inferences, producing almost any possible skyline shape or SFS. Interestingly, sampling sub-populations also affected skyline plots and SFS, because the coalescent rates of populations and their sub-populations had different distributions. This study suggests that extreme caution is needed when inferring demographic changes solely based on reconstructed genealogies. We suggest that the development of novel sampling strategies and the joint analysis of diverse population genetic methods are strictly necessary to estimate demographic changes in populations where selection, recombination, and biased sampling are present. © The Author 2016. Published by Oxford University Press on behalf of the Society for

  13. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    Full Text Available In the practical use of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of the estimated finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed on each selected unit in a sample. In many real life situations, a linear cost function of the stratum sample size nh is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units within a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
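
    A continuous relaxation of such an allocation problem (one characteristic, travel cost entering through a square-root term in nh) can be solved numerically; the sketch below uses scipy with hypothetical stratum values, whereas the paper works with an integer multiobjective goal programming formulation.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical three-stratum, single-characteristic illustration.
        W = np.array([0.5, 0.3, 0.2])   # stratum weights N_h / N
        S = np.array([6.0, 10.0, 4.0])  # stratum standard deviations
        c = np.array([2.0, 3.0, 1.5])   # per-unit measurement cost
        t = np.array([8.0, 12.0, 5.0])  # travel-cost coefficients
        budget = 600.0

        def variance(n):  # variance of the stratified mean (fpc ignored)
            return np.sum(W**2 * S**2 / n)

        cost = {"type": "ineq",
                "fun": lambda n: budget - np.sum(c * n + t * np.sqrt(n))}

        res = minimize(variance, x0=np.full(3, 20.0), constraints=[cost],
                       bounds=[(2, None)] * 3, method="SLSQP")
        n_opt = np.round(res.x).astype(int)  # round the continuous solution
        print(n_opt, variance(res.x))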

  14. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  15. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that, for most UK hardwood species, tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, while including explanatory covariates.

  16. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  17. A simulation-based attempt to quantify the morphological component of size selection of Nephrops norvegicus in trawl codends

    DEFF Research Database (Denmark)

    Frandsen, Rikke; Herrmann, Bent; Madsen, Niels

    2010-01-01

    The selectivity for Nephrops (Nephrops norvegicus) in trawl codends generally is poor and the lack of steepness of the selection curve results in high discard rates and/or loss of legal-sized catch. This poor codend selectivity often is attributed to the irregular shape of Nephrops, which to some...

  18. CURRENT CONCEPTS ON SELECTION TECHNIQUES IN FINANCIAL AUDITING

    Directory of Open Access Journals (Sweden)

    Munteanu Ciprian

    2015-07-01

    Full Text Available The financial auditor's work revolves around issuing an independent, professional and objective opinion on the compliance of the client's financial statements with national accounting rules and principles. At the same time, the auditor has to express an opinion on the ability of the company to continue its activity. An ideal situation would involve auditing all the components of the yearly accounts, but this would take time, effort and a very high cost. Fortunately, the audit team has some very useful tools for acquiring audit evidence in a fast and conclusive way - selection techniques. These techniques may be used in different phases of the audit, and auditors have been using them for a long time; in fact, no audit program would function without them. They have become quite common, as auditors make important judgments such as determining what type of technique to apply, whether to use statistical or nonstatistical techniques, the appropriate inputs to determine sample size, and the evaluation of results, particularly when errors are detected. This paper aims to present the main selection techniques theoretically, indicating how, why and when to use them; there are six selection techniques, and we deal with the four most frequent of them, setting out their characteristics and limits and emphasizing sampling as the most common selection technique currently in use. A commonly held misconception about statistical sampling, for example, is that it removes the need for professional judgement. While it is true that statistical sampling uses statistical methods to determine the sample size and to select and evaluate audit samples, it is the responsibility of the auditor to consider and specify in advance factors such as materiality, the expected error rate or amount, the risk of over-reliance or the risk of incorrect acceptance, audit risk, inherent risk, control risk, standard
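
    As one concrete example of the calculations behind these techniques, discovery sampling with a zero acceptance number gives the smallest sample that detects at least one deviation with confidence C when the true deviation rate is p; a sketch of that textbook result:

        from math import ceil, log

        def discovery_sample_size(confidence, deviation_rate):
            """Smallest n with P(at least one deviation found) >= confidence,
            i.e. 1 - (1 - p)^n >= C  =>  n >= ln(1 - C) / ln(1 - p)."""
            return ceil(log(1 - confidence) / log(1 - deviation_rate))

        print(discovery_sample_size(0.95, 0.02))  # ~149 items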

  19. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two-group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for the outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results The method attained the nominal power value in simulation studies and compared favourably with a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two-group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
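
    A sketch of the calculation in its simplest form (a common log-scale variance derived from the control-group median and untransformed variance; the paper covers more general settings):

        from math import ceil, log, sqrt
        from scipy.stats import norm

        def n_per_group(median0, median1, var_untransformed,
                        power=0.80, alpha=0.05):
            """Two-sample t-test sample size when the effect is specified
            as a difference in medians of log-normal outcomes (sketch).

            For X ~ LogNormal(mu, sigma^2): median m = exp(mu) and
            Var(X) = m^2 * w * (w - 1) with w = exp(sigma^2), hence
            w = (1 + sqrt(1 + 4*Var/m^2)) / 2 recovers the log-scale
            variance. A common sigma^2, from the control group, is assumed.
            """
            w = (1 + sqrt(1 + 4 * var_untransformed / median0**2)) / 2
            sigma2 = log(w)                      # variance on the log scale
            delta = log(median1) - log(median0)  # effect on the log scale
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            return ceil(2 * sigma2 * (za + zb) ** 2 / delta**2)

        # control median 100, target median 130, untransformed variance 4000:
        print(n_per_group(100.0, 130.0, 4000.0))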

  20. Selection Of Suitable Particle Size And Particle Ratio For Japanese Cucumber Cucumis Sativus L. Plants

    Directory of Open Access Journals (Sweden)

    Galahitigama GAH

    2015-08-01

    Full Text Available This study was conducted to select the best particle size of coco peat for cucumber nurseries, as well as the best particle ratio for optimum growth and development of cucumber plants. The experiment was carried out at the International Foodstuff Company and the Faculty of Agriculture, University of Ruhuna, Sri Lanka, during 2015 to 2016. In experiment one, three different particle sizes were used, namely fine (≤0.5 mm; T2), medium (0.5-3 mm; T3) and coarse (4 mm; T4), with normal coco peat (T1) as the control treatment. A Complete Randomized Design (CRD) was used as the experimental design, with five replicates. Germination percentage, number of leaves per seedling and seedling height at frequent day intervals were taken as growth parameters. Analysis of variance was applied to analyze the data at the 5% probability level. The results revealed that the medium particle size (sieve size 0.5-3 mm) of coco peat was the best for cucumber nursery practice, considering the physical and chemical properties of the medium coco peat particles. In the experiment on selecting a suitable particle ratio for cucumber plants, the compressed mixture of coco peat particles containing 70% w/w unsieved coco peat, 20% w/w coarse particles and 10% w/w coconut husk chips (5-12 mm) gave the best results; cucumber grown in this mixture showed maximum growth and yield performance compared to the other treatments.

  1. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.

  2. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  3. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
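
    For reference, the two estimators from this line of work that are most often quoted can be sketched as follows; the constants follow the formulas as commonly cited from Wan et al. (2014) and should be checked against the paper's summary table before use.

        from scipy.stats import norm

        def mean_sd_from_range(a, m, b, n):
            """Scenario with minimum a, median m, maximum b, sample size n
            (formulas as commonly quoted from Wan et al. 2014)."""
            mean = (a + 2 * m + b) / 4.0
            sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
            return mean, sd

        def mean_sd_from_iqr(q1, m, q3, n):
            """Scenario with first/third quartiles and the median."""
            mean = (q1 + m + q3) / 3.0
            sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
            return mean, sd

        print(mean_sd_from_range(10, 30, 70, 50))
        print(mean_sd_from_iqr(22, 30, 42, 50))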

  4. Selection based on the size of the black tie of the great tit may be reversed in urban habitats.

    Science.gov (United States)

    Senar, Juan Carlos; Conroy, Michael J; Quesada, Javier; Mateos-Gonzalez, Fernando

    2014-07-01

    A standard approach to model how selection shapes phenotypic traits is the analysis of capture-recapture data relating trait variation to survival. Divergent selection, however, has never been analyzed by the capture-recapture approach. Most reported examples of differences between urban and nonurban animals reflect behavioral plasticity rather than divergent selection. The aim of this paper was to use a capture-recapture approach to test the hypothesis that divergent selection can also drive local adaptation in urban habitats. We focused on the size of the black breast stripe (i.e., tie width) of the great tit (Parus major), a sexual ornament used in mate choice. Urban great tits display smaller tie sizes than forest birds. Because tie size is mostly genetically determined, it could potentially respond to selection. We analyzed capture/recapture data of male great tits in Barcelona city (N = 171) and in a nearby (7 km) forest (N = 324) from 1992 to 2008 using MARK. When modelling recapture rate, we found it to be strongly influenced by tie width, so that in both urban and forest habitats, birds with smaller ties were more trap-shy and more cautious than their larger-tied counterparts. When modelling survival, we found that survival prospects in forest great tits increased with tie width (i.e., positive directional selection), but the reverse was found for urban birds, with individuals displaying smaller ties showing higher survival (i.e., negative directional selection). As melanin-based tie size seems to be related to personality, and both are heritable, these results may be explained by cautious personalities being favored in urban environments. More importantly, our results show that divergent selection can be an important mechanism in local adaptation to urban habitats and that capture-recapture is a powerful tool to test it.

  5. In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size

    Directory of Open Access Journals (Sweden)

    Stefano Schiavon

    2010-01-01

    Full Text Available In vitro degradability with DaisyII (D) equipment is commonly performed with 0.5 g of feed sample in each filter bag. The literature reports that a reduction of the ratio of sample size to bag surface could facilitate the release of soluble or fine particulate matter. A reduction of sample size to 0.25 g could improve the correlation between the measurements provided by D and the conventional batch culture (BC). This hypothesis was screened by analysing the results of 2 trials. In trial 1, 7 feeds were incubated for 48 h with rumen fluid (3 runs × 4 replications) both with D (0.5 g/bag) and BC; the regressions between the mean values provided for the various feeds in each run by the 2 methods, for NDF (NDFd) and in vitro true DM (IVTDMD) degradability, had R2 of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs × 8 replications) with D (0.25 g/bag) and BC; the corresponding regressions for NDFd and IVTDMD showed R2 of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.

  6. Selectively Reduced Posterior Corpus Callosum Size in a Population-Based Sample of Young Adults Born with Low Birth Weight

    DEFF Research Database (Denmark)

    Aukland, S M; Westerhausen, R; Plessen, K J

    2011-01-01

    BACKGROUND AND PURPOSE: Several studies suggest that VLBW is associated with a reduced CC size later in life. We aimed to clarify this in a prospective, controlled study of 19-year-olds, hypothesizing that those with LBWs had smaller subregions of the CC than the age-matched controls, even after correcting for brain volume. MATERIALS AND METHODS: One hundred thirteen survivors of LBW (BW <2000 g) and age-matched controls underwent MRI of the brain. The cross-sectional area of the CC (total callosal area, and the callosal subregions of the genu, truncus, and posterior third) was measured. Callosal areas were adjusted for head size. RESULTS: The posterior third subregion of the CC was significantly smaller in individuals born with a LBW compared with controls, even after adjusting for size of the forebrain. Individuals who were born with a LBW had a smaller CC...

  7. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample sizes

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One effort to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in binary probit regression models under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined using simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method for small sample sizes is higher than under Firth's approach. On the other hand, for larger sample sizes, the probability decreased and was relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
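
    The first aim can be illustrated with a small simulation. With a single continuous predictor, complete separation means the two response classes occupy disjoint intervals of x, which can be checked directly; the sketch below (hypothetical probit data-generating values) estimates how the chance of separation shrinks with sample size.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        def is_completely_separated(x, y):
            """With one continuous predictor, complete separation means the
            two response classes occupy disjoint intervals of x."""
            if y.min() == y.max():            # a single observed class
                return True
            return (x[y == 1].min() > x[y == 0].max()
                    or x[y == 1].max() < x[y == 0].min())

        def separation_rate(n, beta0=-0.5, beta1=1.5, n_sim=5000):
            hits = 0
            for _ in range(n_sim):
                x = rng.normal(size=n)
                y = rng.binomial(1, norm.cdf(beta0 + beta1 * x))  # probit
                hits += is_completely_separated(x, y)
            return hits / n_sim

        for n in (10, 20, 40, 80):
            print(n, separation_rate(n))  # separation becomes rarer as n grows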

  8. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, the degree of distinctiveness of each feature vector is evaluated according to the result; the effective feature vectors are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. This reduces the dimensionality of the features, and thereby the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
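
    A sketch of the leave-one-feature-out idea, using scikit-learn's k-means++ initialization and the silhouette score as a stand-in for the paper's clustering-performance measure (the threshold below is hypothetical):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs
        from sklearn.metrics import silhouette_score

        # stand-in for traffic features extracted from attack samples
        X, _ = make_blobs(n_samples=400, n_features=8, centers=2,
                          random_state=0)

        def cluster_quality(X):
            labels = KMeans(n_clusters=2, init="k-means++", n_init=10,
                            random_state=0).fit_predict(X)
            return silhouette_score(X, labels)

        baseline = cluster_quality(X)
        threshold = 0.02                 # hypothetical distinctiveness cutoff
        effective = []
        for j in range(X.shape[1]):
            reduced = np.delete(X, j, axis=1)  # drop feature j, re-cluster
            change = abs(baseline - cluster_quality(reduced))
            if change > threshold:       # large change => distinctive feature
                effective.append(j)
        print("effective features:", effective)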

  9. Ethylbenzene Disproportionation on HZSM-5 Zeolite : The Effect of Aluminum Content and Crystal Size on the Selectivity for p-Diethylbenzene

    Directory of Open Access Journals (Sweden)

    Velasco N.D.

    1998-01-01

    Full Text Available The aim of this work was to verify the effect of MFI aluminum content and crystal size on the selectivity for para-diethylbenzene during ethylbenzene disproportionation. It was observed that the para-diethylbenzene selectivity increased as the MFI crystal size increased. The increase in aluminum content caused a decrease in the selectivity for para-diethylbenzene. However, for crystals larger than 8 μm, the decrease in aluminum content had little influence on the selectivity for para-diethylbenzene. The results can be explained by the number of active aluminum sites on the external surface of the crystals.

  10. Evidence of market-driven size-selective fishing and the mediating effects of biological and institutional factors.

    Science.gov (United States)

    Reddy, Sheila M W; Wentz, Allison; Aburto-Oropeza, Octavio; Maxey, Martin; Nagavarapu, Sriniketh; Leslie, Heather M

    2013-06-01

    Market demand is often ignored or assumed to lead uniformly to the decline of resources. Yet little is known about how market demand influences natural resources in particular contexts, or the mediating effects of biological or institutional factors. Here, we investigate this problem by examining the Pacific red snapper (Lutjanus peru) fishery around La Paz, Mexico, where medium or "plate-sized" fish are sold to restaurants at a premium price. If higher demand for plate-sized fish increases the relative abundance of the smallest (recruit size class) and largest (most fecund) fish, this may be a market mechanism to increase stocks and fishermen's revenues. We tested this hypothesis by estimating the effect of prices on the distribution of catch across size classes using daily records of prices and catch. We linked predictions from this economic choice model to a staged-based model of the fishery to estimate the effects on the stock and revenues from harvest. We found that the supply of plate-sized fish increased by 6%, while the supply of large fish decreased by 4% as a result of a 13% price premium for plate-sized fish. This market-driven size selection increased revenues (14%) but decreased total fish biomass (-3%). However, when market-driven size selection was combined with limited institutional constraints, both fish biomass (28%) and fishermen's revenue (22%) increased. These results show that the direction and magnitude of the effects of market demand on biological populations and human behavior can depend on both biological attributes and institutional constraints. Fisheries management may capitalize on these conditional effects by implementing size-based regulations when economic and institutional incentives will enhance compliance, as in the case we describe here, or by creating compliance enhancing conditions for existing regulations.

  11. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using

  12. Direct observation of enhanced magnetism in individual size- and shape-selected 3d transition metal nanoparticles

    Science.gov (United States)

    Kleibert, Armin; Balan, Ana; Yanes, Rocio; Derlet, Peter M.; Vaz, C. A. F.; Timm, Martin; Fraile Rodríguez, Arantxa; Béché, Armand; Verbeeck, Jo; Dhaka, R. S.; Radovic, Milan; Nowak, Ulrich; Nolting, Frithjof

    2017-05-01

    Magnetic nanoparticles are critical building blocks for future technologies ranging from nanomedicine to spintronics. Many related applications require nanoparticles with tailored magnetic properties. However, despite significant efforts undertaken towards this goal, a broad and poorly understood dispersion of magnetic properties is reported, even within monodisperse samples of the canonical ferromagnetic 3d transition metals. We address this issue by investigating the magnetism of a large number of size- and shape-selected, individual nanoparticles of Fe, Co, and Ni using a unique set of complementary characterization techniques. At room temperature, only superparamagnetic behavior is observed in our experiments for all Ni nanoparticles within the investigated sizes, which range from 8 to 20 nm. However, Fe and Co nanoparticles can exist in two distinct magnetic states at any size in this range: (i) a superparamagnetic state, as expected from the bulk and surface anisotropies known for the respective materials and as observed for Ni, and (ii) a state with unexpected stable magnetization at room temperature. This striking state is assigned to significant modifications of the magnetic properties arising from metastable lattice defects in the core of the nanoparticles, as concluded by calculations and atomic structural characterization. Also related to the structural defects, we find that the magnetic state of Fe and Co nanoparticles can be tuned by thermal treatment, enabling one to tailor their magnetic properties for applications. This paper demonstrates the importance of complementary single-particle investigations for a better understanding of nanoparticle magnetism and for full exploration of their potential for applications.

  13. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity, thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium
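
    For clustered binary data, the VPC is often evaluated on the latent-variable scale of a random-intercept logistic model, where the level-1 residual variance is fixed at pi^2/3; a sketch of that standard formulation (not necessarily the exact one used in the paper):

        from math import pi

        def vpc_binary_latent(sigma_u2):
            """Variance partition coefficient for a random-intercept
            logistic model on the latent scale (residual variance pi^2/3)."""
            return sigma_u2 / (sigma_u2 + pi**2 / 3)

        for s2 in (0.1, 0.5, 1.0, 2.0):   # cluster-level variances
            print(s2, round(vpc_binary_latent(s2), 3))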

  14. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure in a data series. Moreover, we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. Two examples are presented: the first corresponds to seven simulated series with a first-order autoregressive structure, and the second to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions
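
    For a first-order autoregressive series, the size of the equivalent (effective) sample has the familiar closed form n(1 - rho)/(1 + rho); the sketch below estimates rho from the lag-1 autocorrelation. This is the textbook AR(1) result, which may differ in detail from the article's own development.

        import numpy as np

        rng = np.random.default_rng(0)

        def effective_sample_size(x):
            """Equivalent sample size of an AR(1)-like series:
            n_eff = n * (1 - rho) / (1 + rho), rho = lag-1 autocorrelation."""
            x = np.asarray(x, dtype=float)
            rho = np.corrcoef(x[:-1], x[1:])[0, 1]
            return len(x) * (1 - rho) / (1 + rho)

        # simulate an AR(1) series with rho = 0.6
        n, rho = 500, 0.6
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            x[t] = rho * x[t - 1] + rng.normal()
        print(effective_sample_size(x))  # far fewer than 500 independent values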

  15. The structured ancestral selection graph and the many-demes limit.

    Science.gov (United States)

    Slade, Paul F; Wakeley, John

    2005-02-01

    We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, and we have also made the assumptions of island-type migration and that demes are equivalent in size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.

  16. Effect of Group-Selection Opening Size on Breeding Bird Habitat Use in a Bottomland Forest

    Energy Technology Data Exchange (ETDEWEB)

    Moorman, C.E.; D.C. Guynn, Jr.

    2001-12-01

    Research was conducted on the effects of creating group-selection openings of various sizes on breeding bird habitat use in a bottomland hardwood forest of the Upper Coastal Plain of South Carolina. Creation of 0.5-ha group-selection openings in southern bottomland forests should provide breeding habitat for some field-edge species in gaps, and habitat for forest-interior species and canopy-dwelling forest-edge species between gaps, provided that enough mature forest is made available.

  17. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample selection by random number... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...

  18. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

    In this paper the authors discuss a problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e.g. due to ADC saturation. An experiment which tests the proposed method in practice

  19. Specified assurance level sampling procedure

    International Nuclear Information System (INIS)

    Willner, O.

    1980-11-01

    In the nuclear industry, design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus making available to the user a wide choice of plans, all designed to comply with a stated assurance level
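
    The underlying computation can be sketched as a search for the minimum sample size whose operating characteristic meets the stated assurance level, for a given acceptance number (hypothetical parameter values; a generic acceptance-sampling calculation, not the procedure's published tables):

        from scipy.stats import binom

        def sal_plan(assurance, quality_level, accept_number):
            """Minimum n such that a lot with defect rate 'quality_level'
            is accepted (<= accept_number defects found) with probability
            at most 1 - assurance, i.e. the plan demonstrates conformance
            at the stated assurance level."""
            n = accept_number + 1
            while binom.cdf(accept_number, n, quality_level) > 1 - assurance:
                n += 1
            return n

        for c in range(0, 4):  # acceptance numbers 0..3
            print(c, sal_plan(assurance=0.90, quality_level=0.05,
                              accept_number=c))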

  20. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Science.gov (United States)

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1)- and √x-transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately

  1. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1)- and √x-transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to

  2. Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage.

    Science.gov (United States)

    Walsh, Matthew C; Trentham-Dietz, Amy; Gangnon, Ronald E; Nieto, F Javier; Newcomb, Polly A; Palta, Mari

    2012-06-01

    Increasing numbers of individuals are choosing to opt out of population-based sampling frames due to privacy concerns. This is especially a problem in the selection of controls for case-control studies, as the cases often arise from relatively complete population-based registries, whereas control selection requires a sampling frame. If opt out is also related to risk factors, bias can arise. We linked breast cancer cases who reported having a valid driver's license from the 2004-2008 Wisconsin women's health study (N = 2,988) with a master list of licensed drivers from the Wisconsin Department of Transportation (WDOT). This master list excludes Wisconsin drivers that requested their information not be sold by the state. Multivariate-adjusted selection probability ratios (SPR) were calculated to estimate potential bias when using this driver's license sampling frame to select controls. A total of 962 cases (32%) had opted out of the WDOT sampling frame. Cases age <40 (SPR = 0.90), income either unreported (SPR = 0.89) or greater than $50,000 (SPR = 0.94), lower parity (SPR = 0.96 per one-child decrease), and hormone use (SPR = 0.93) were significantly less likely to be covered by the WDOT sampling frame (α = 0.05 level). Our results indicate the potential for selection bias due to differential opt out between various demographic and behavioral subgroups of controls. As selection bias may differ by exposure and study base, the assessment of potential bias needs to be ongoing. SPRs can be used to predict the direction of bias when cases and controls stem from different sampling frames in population-based case-control studies.

  3. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
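
    A minimal sketch of the kind of calculation this abstract describes, using the standard asymptotic variance of a log rate ratio for negative binomial counts with group-specific dispersion. It plugs in a single mean follow-up time per group, which is exactly the simplification the paper warns can underestimate the required size when follow-up varies between subjects; all numbers in the example are hypothetical.

```python
import math
from scipy.stats import norm

def nb_noninferiority_n(rate0, rate1, k0, k1, t0, t1, margin,
                        alpha=0.025, power=0.9, ratio=1.0):
    """Sample sizes (n0, n1 = ratio*n0) for a noninferiority test of the
    rate ratio rate1/rate0 against `margin`, from the asymptotic variance
    of the log rate estimate, roughly (1/(rate*t) + k)/n per group, where
    k is the NB dispersion (Var = mu + k*mu^2) and t the mean follow-up."""
    v0 = 1.0 / (rate0 * t0) + k0
    v1 = 1.0 / (rate1 * t1) + k1
    effect = math.log(rate1 / rate0) - math.log(margin)
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    n0 = (z / effect) ** 2 * (v0 + v1 / ratio)
    return math.ceil(n0), math.ceil(ratio * n0)

# Example: equal true rates of 0.8/year, 25% NI margin, 1 year mean follow-up
print(nb_noninferiority_n(0.8, 0.8, 0.7, 0.7, 1.0, 1.0, 1.25))
```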

  4. Probability Sampling - A Guideline for Quantitative Health Care ...

    African Journals Online (AJOL)

    A more direct definition is the method used for selecting a given ... description of the chosen population, the sampling procedure giving ... target population, precision, and stratification. The ... survey estimates, it is recommended that researchers first analyze a .... The optimum sample size has a relation to the type of planned ...

  5. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    Science.gov (United States)

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD(sampling) in the same region as the RSD(analysis) for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit

  6. Sample size reduction in groundwater surveys via sparse data assimilation

    KAUST Repository

    Hussain, Z.; Muhammad, A.

    2013-01-01

    In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.

  7. Sample size reduction in groundwater surveys via sparse data assimilation

    KAUST Repository

    Hussain, Z.

    2013-04-01

    In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.
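
    The two records above describe a Bayesian compressive sensing framework; the sketch below is only a classical compressed-sensing analogue of the sampling idea, pairing a sparse binary measurement matrix (a few wells read per survey pass) with orthogonal matching pursuit for recovery. The grid size, well counts, and sparsity level are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_grid, n_reads, wells_per_read, sparsity = 200, 40, 10, 8

# A groundwater anomaly that is sparse on the grid (sparsity in the
# identity basis is assumed; a wavelet/DCT basis could be substituted)
x_true = np.zeros(n_grid)
x_true[rng.choice(n_grid, sparsity, replace=False)] = rng.normal(5.0, 1.0, sparsity)

# Sparse binary measurement matrix: each survey pass reads a small random
# subset of wells instead of the whole grid
Phi = np.zeros((n_reads, n_grid))
for row in Phi:
    row[rng.choice(n_grid, wells_per_read, replace=False)] = 1.0

y = Phi @ x_true                          # the observed well readings

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity,
                                fit_intercept=False).fit(Phi, y)
print("recovery error:", np.linalg.norm(omp.coef_ - x_true))
```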

  8. Sedimentation and the Economics of Selecting an Optimum Reservoir Size

    Science.gov (United States)

    Miltz, David; White, David C.

    1987-08-01

    This paper attempts to develop an easily reproducible methodology for the economic selection of an optimal reservoir size given an annual sedimentation rate. The optimal capacity is that at which the marginal cost of constructing additional storage capacity is equal to the dredging costs avoided by having that additional capacity available to store sediment. The cost implications of misestimating dredging costs, construction costs, and sediment delivery rates are investigated. In general, it is shown that oversizing is a rational response to uncertainty in the estimation of parameters. The sensitivity of the results to alternative discount rates is also discussed. The theoretical discussion is illustrated with a case study drawn from Highland Silver Lake in southwestern Illinois.
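
    The optimality condition in this abstract (marginal construction cost equal to avoided dredging cost) can be located numerically by minimizing total discounted cost over capacity. The sketch below uses entirely hypothetical cost curves and sediment rates, not the Highland Silver Lake data.

```python
import numpy as np

def construction_cost(cap):               # $ to build capacity `cap` (m^3)
    return 2.0e6 * (cap / 1.0e6) ** 0.7   # concave: economies of scale

def pv_dredging(cap, sediment=2.0e4, unit_cost=8.0, horizon=50, r=0.05):
    """Present value of dredging whatever sediment exceeds capacity,
    discounted annually over the planning horizon."""
    pv, stored = 0.0, 0.0
    for year in range(1, horizon + 1):
        stored += sediment                # annual sediment delivery (m^3)
        excess = max(0.0, stored - cap)
        pv += unit_cost * excess / (1.0 + r) ** year
        stored -= excess                  # the excess is dredged out
    return pv

caps = np.linspace(1.0e5, 2.0e6, 400)
total = [construction_cost(c) + pv_dredging(c) for c in caps]
print("optimal capacity ~ %.0f m^3" % caps[int(np.argmin(total))])
```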

  9. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
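
    For reference, the two nonparametric incidence-based estimators that performed well here are simple functions of the numbers of "unique" and "duplicate" species across plots. A sketch, with a toy incidence matrix standing in for the plot data:

```python
import numpy as np

def jack1_chao2(incidence):
    """First-order jackknife and (bias-corrected) Chao2 richness estimates
    from a plots-by-species 0/1 incidence matrix."""
    m = incidence.shape[0]                # number of plots
    freq = incidence.sum(axis=0)          # plots occupied by each species
    s_obs = int((freq > 0).sum())
    q1 = int((freq == 1).sum())           # uniques
    q2 = int((freq == 2).sum())           # duplicates
    jack1 = s_obs + q1 * (m - 1) / m
    chao2 = s_obs + ((m - 1) / m) * q1 * (q1 - 1) / (2 * (q2 + 1))
    return s_obs, jack1, chao2

rng = np.random.default_rng(1)
toy = rng.random((500, 120)) < rng.random(120) * 0.1   # patchy occupancy
print(jack1_chao2(toy))
```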

  10. Size-controlled sensitivity and selectivity for the fluorometric detection of Ag+ by homocysteine capped CdTe quantum dots

    International Nuclear Information System (INIS)

    Jiao, Hangzhou; Liang, Zhenhua; Peng, Guihua; Zhang, Ling; Lin, Hengwei

    2014-01-01

    We have synthesized water dispersible CdTe quantum dots (QDs) in different sizes and with various capping reagents, and have studied the effects of their size on the sensitivity and selectivity in the fluorometric determination of metal ions, particularly of silver(I). It is found that an increase in the particle size of homocysteine-capped CdTe QDs from 1.7 nm to 3.3 nm and to 3.7 nm enhances both the sensitivity and selectivity of the determination of Ag(I) to give an ultimate limit of detection as low as 8.3 nM. This effect can partially be explained by the better passivation of surface traps on smaller sized QDs via adsorption of Ag(I), thereby decreasing the apparent detection efficiency. In addition, the presence of CdS in the CdTe QDs is likely to play a role. The study demonstrates that an improvement in sensing performance is accomplished by using QDs of fine-tuned particle sizes. Such effects are likely also to occur with other QD-based optical probes. (author)

  11. Evidence of market-driven size-selective fishing and the mediating effects of biological and institutional factors

    Science.gov (United States)

    Reddy, Sheila M. W.; Wentz, Allison; Aburto-Oropeza, Octavio; Maxey, Martin; Nagavarapu, Sriniketh; Leslie, Heather M.

    2014-01-01

    Market demand is often ignored or assumed to lead uniformly to the decline of resources. Yet little is known about how market demand influences natural resources in particular contexts, or the mediating effects of biological or institutional factors. Here, we investigate this problem by examining the Pacific red snapper (Lutjanus peru) fishery around La Paz, Mexico, where medium or “plate-sized” fish are sold to restaurants at a premium price. If higher demand for plate-sized fish increases the relative abundance of the smallest (recruit size class) and largest (most fecund) fish, this may be a market mechanism to increase stocks and fishermen’s revenues. We tested this hypothesis by estimating the effect of prices on the distribution of catch across size classes using daily records of prices and catch. We linked predictions from this economic choice model to a staged-based model of the fishery to estimate the effects on the stock and revenues from harvest. We found that the supply of plate-sized fish increased by 6%, while the supply of large fish decreased by 4% as a result of a 13% price premium for plate-sized fish. This market-driven size selection increased revenues (14%) but decreased total fish biomass (−3%). However, when market-driven size selection was combined with limited institutional constraints, both fish biomass (28%) and fishermen’s revenue (22%) increased. These results show that the direction and magnitude of the effects of market demand on biological populations and human behavior can depend on both biological attributes and institutional constraints. Fisheries management may capitalize on these conditional effects by implementing size-based regulations when economic and institutional incentives will enhance compliance, as in the case we describe here, or by creating compliance enhancing conditions for existing regulations. PMID:23865225

  12. Adult health study reference papers. Selection of the sample. Characteristics of the sample

    Energy Technology Data Exchange (ETDEWEB)

    Beebe, G W; Fujisawa, Hideo; Yamasaki, Mitsuru

    1960-12-14

    The characteristics and selection of the clinical sample have been described in some detail to provide information on the comparability of the exposure groups with respect to factors excluded from the matching criteria and to provide basic descriptive information potentially relevant to individual studies that may be done within the framework of the Adult Health Study. The characteristics under review here are age, sex, many different aspects of residence, marital status, occupation and industry, details of location and shielding ATB, acute radiation signs and symptoms, and prior ABCC medical or pathology examinations. 5 references, 57 tables.

  13. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
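
    For scalar parameters, the ESS referred to here is conventionally estimated from the autocorrelation of the MCMC trace; the topology ESS of this study builds on the same quantity applied to functions of tree distance. A minimal autocorrelation-based sketch (truncating the sum at the first non-positive lag is an implementation choice here, not the authors' exact estimator):

```python
import numpy as np

def ess(trace):
    """Effective sample size of a scalar MCMC trace: n / (1 + 2*sum(acf)),
    summing lag-k autocorrelations up to the first non-positive one."""
    x = np.asarray(trace, float) - np.mean(trace)
    n = len(x)
    acf = np.correlate(x, x, "full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    s = 0.0
    for k in range(1, n):
        if acf[k] <= 0.0:
            break
        s += acf[k]
    return n / (1.0 + 2.0 * s)

rng = np.random.default_rng(2)
trace = np.zeros(10_000)                  # AR(1) chain, lag-1 correlation 0.9
for t in range(1, trace.size):
    trace[t] = 0.9 * trace[t - 1] + rng.normal()
print(ess(trace))                         # roughly n*(1-0.9)/(1+0.9), ~530
```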

  14. Optimizing the grain size distribution for talc-magnesite ore flotation

    Directory of Open Access Journals (Sweden)

    Škvarla Jiří

    2001-06-01

    Full Text Available Flotation is the only separation method with universal utilization. Along with the separation of particulate valuable or hazardous components from primary and secondary mineral raw materials, it is used in biotechnologies and water cleaning. The success of the flotation separation crucially depends on the particle size distribution or composition of the ore charge entering the process. The paper deals with the problem of flotation treatment of talc-magnesite ore. The main components of the ore, i.e. talc and magnesite, are appreciably different in their grindability and floatability. For such a type of raw material, grinding of the charge plays a very important role in the process. The (unwanted) influence of ultrafine particles on the course of the flotation process is well known. On the other hand, in order to liberate and subsequently selectively separate both components, a maximum particle size has to be respected. The influence of artificial samples of selected particle size fractions on the flotation efficiency has been studied experimentally by the quantitative evaluation of flotation products. The flotation experiments on the samples provided information not obtainable from traditional flotation tests. An adverse effect of the size fraction 0-0.04 mm was revealed, decreasing the flotation selectivity appreciably. These results are of theoretical and practical importance.

  15. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
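
    Two of the six effect sizes compared are easy to state directly: Cohen's d, which assumes normality and homogeneous variances, and Ruscio's nonparametric A_w, which estimates P(X > Y) + 0.5*P(X = Y). A sketch with skewed, heteroscedastic toy data:

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with pooled SD (assumes normality and
    homogeneous variances)."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

def ruscio_a(x, y):
    """Nonparametric common-language effect size A: P(X > Y) + 0.5*P(X = Y),
    computed over all cross-group pairs."""
    x, y = np.asarray(x), np.asarray(y)
    return ((x[:, None] > y[None, :]).mean()
            + 0.5 * (x[:, None] == y[None, :]).mean())

rng = np.random.default_rng(3)
x = rng.lognormal(0.4, 1.0, 80)          # skewed, unequal variances
y = rng.lognormal(0.0, 1.5, 60)
print(cohens_d(x, y), ruscio_a(x, y))
```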

  16. An efficient one-step condensation and activation strategy to synthesize porous carbons with optimal micropore sizes for highly selective CO₂ adsorption.

    Science.gov (United States)

    Wang, Jiacheng; Liu, Qian

    2014-04-21

    A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m² g⁻¹), micropore volumes (up to 0.78 cm³ g⁻¹), and narrow micropore size distributions (0.7-1.1 nm). The CO₂ uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO₂ capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO₂ capture performance and can efficiently adsorb CO₂ molecules at 2.86 mmol g⁻¹ and 4.92 mmol g⁻¹ at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO₂/N₂ adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO₂/N₂ adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO₂ adsorption in practical applications.

  17. Response to selection, heritability and genetic correlations between body weight and body size in Pacific white shrimp, Litopenaeus vannamei

    Science.gov (United States)

    Andriantahina, Farafidy; Liu, Xiaolin; Huang, Hao; Xiang, Jianhai

    2012-03-01

    To quantify the response to selection, heritability and genetic correlations between weight and size of Litopenaeus vannamei, the body weight (BW), total length (TL), body length (BL), first abdominal segment depth (FASD), third abdominal segment depth (TASD), first abdominal segment width (FASW), and partial carapace length (PCL) of 5-month-old parents and of offspring were measured, taking seven body measurements of offspring produced by a nested mating design. Seventeen half-sib families and 42 full-sib families of L. vannamei were produced using artificial fertilization from 2-4 dams by each sire, and measured at around five months post-metamorphosis. The results show that heritabilities among various traits were high: 0.515±0.030 for body weight and 0.394±0.030 for total length. After one generation of selection, the selection response was 10.70% for offspring growth. In the 5th month, the realized heritability for weight was 0.296 for the offspring generation. Genetic correlations between body weight and body size were highly variable. The results indicate that external morphological parameters can be applied during breeder selection for enhancing growth without sacrificing animals to determine body size and breeding ability, and that selective breeding can be improved significantly, simultaneously with increased production.

  18. Effect of directional selection for body size on fluctuating asymmetry in certain morphological traits in Drosophila ananassae.

    Science.gov (United States)

    Vishalakshi, C; Singh, B N

    2009-06-01

    Variation in the subtle differences between the right and left sides of bilateral characters, or fluctuating asymmetry (FA), has been considered an indicator of an organism's ability to cope with genetic and environmental stresses during development. However, due to inconsistency in the results of empirical studies, the relationship between FA and stress has been the subject of intense debate. In this study, we investigated whether stress caused by artificial bidirectional selection for body size has any effect on the levels of FA of different morphological traits in Drosophila ananassae. The realised heritability (h²) was higher in low-line females and high-line males, which suggests an asymmetrical response to selection for body size. Further, the levels of FA were compared across 10 generations of selection in different selection lines in both sexes for sternopleural bristle number, wing length, wing-to-thorax ratio, sex comb tooth number and ovariole number. The levels of FA differed significantly among generations and selection lines but did not change markedly with directional selection. However, the levels of FA were higher in the G10 generation (at the end of selection) than G0 (at the start of selection) but lower than the G5 generation in different selection lines, suggesting that the levels of FA are not affected by the inbreeding generated during the course of selection. Also, the levels of FA in the hybrids of high and low lines were significantly lower than in the parental selection lines, suggesting that FA is influenced by hybridisation. These results are discussed in the framework of the literature available on FA and its relationship with stress.

  19. Association of occupation, employment contract, and company size with mental health in a national representative sample of employees in Japan.

    Science.gov (United States)

    Inoue, Akiomi; Kawakami, Norito; Tsuchiya, Masao; Sakurai, Keiko; Hashimoto, Hideki

    2010-01-01

    The purpose of this study was to investigate the cross-sectional association of employment contract, company size, and occupation with psychological distress using a nationally representative sample of the Japanese population. From June through July 2007, a total of 9,461 male and 7,717 female employees living in the community were randomly selected and surveyed using a self-administered questionnaire and interview including questions about occupational class variables, psychological distress (K6 scale), treatment for mental disorders, and other covariates. Among males, part-time workers had a significantly higher prevalence of psychological distress than permanent workers. Among females, temporary/contract workers had a significantly higher prevalence of psychological distress than permanent workers. Among males, those who worked at companies with 300-999 employees had a significantly higher prevalence of psychological distress than those who worked at the smallest companies (with 1-29 employees). Company size was not significantly associated with psychological distress among females. Additionally, occupation was not significantly associated with psychological distress among males or females. Similar patterns were observed when the analyses were conducted for those who had psychological distress and/or received treatment for mental disorders. Working as part-time workers, for males, and as temporary/contract workers, for females, may be associated with poor mental health in Japan. No clear gradient in mental health along company size or occupation was observed in Japan.

  20. Synthesis of nano-sized arsenic-imprinted polymer and its use as As3+ selective ionophore in a potentiometric membrane electrode: Part 1

    International Nuclear Information System (INIS)

    Alizadeh, Taher; Rashedi, Mariyam

    2014-01-01

    Highlights: • The first arsenic cation-selective membrane electrode was introduced. • A novel procedure was introduced for the preparation of As-imprinted polymer. • It was found that arsenic is recognized by the IIP as As³⁺ species. • A Nernstian response of 20.4 mV decade⁻¹ and a DL of 0.5 μM were obtained. - Abstract: In this study, a new strategy was proposed for the preparation of As(III)-imprinted polymer by using arsenic (methacrylate)₃ as template. Precipitation polymerization was utilized to synthesize nano-sized As(III)-imprinted polymer. Methacrylic acid and ethylene glycol dimethacrylate were used as the functional monomer and cross-linking agent, respectively. In order to assemble functional monomers around the As(III) ion, sodium arsenite and methacrylic acid were heated in the presence of hydroquinone, leading to arsenic (methacrylate)₃. The nano-sized As(III)-selective polymer was characterized by FT-IR and scanning electron microscopy (SEM) techniques. It was demonstrated that arsenic was recognized as As³⁺ by the selective cavities of the synthesized IIP. Based on the prepared polymer, the first arsenic cation-selective membrane electrode was introduced. The membrane electrode was constructed by dispersion of As(III)-imprinted polymer nanoparticles in poly(vinyl chloride), plasticized with di-nonylphthalate. The IIP-modified electrode exhibited a Nernstian response (20.4 ± 0.5 mV decade⁻¹) to arsenic ion over a wide concentration range (7.0 × 10⁻⁷ to 1.0 × 10⁻¹ mol L⁻¹) with a lower detection limit of 5.0 × 10⁻⁷ mol L⁻¹. Unlike this, the non-imprinted polymer (NIP)-based membrane electrode was not sensitive to arsenic in aqueous solution. The selectivity of the developed sensor to As(III) was shown to be satisfactory. The sensor was used for arsenic determination in some real samples.

  1. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
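
    Wald's SPRT, on which the proposal builds, accumulates per-observation log-likelihood ratios and stops when either boundary is crossed; the "keep sampling" outcome corresponds to the third SNP group described above. A generic sketch (the simulated transmission example and its rates are assumptions, not the paper's data):

```python
import math
import numpy as np

def sprt(loglik_ratios, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test. `loglik_ratios` yields
    log f1(x)/f0(x) per observation; alpha and beta are the target error
    rates. Returns a decision, or 'keep sampling' if neither Wald
    boundary is crossed before the data run out."""
    upper = math.log((1 - beta) / alpha)      # cross upward   -> accept H1
    lower = math.log(beta / (1 - alpha))      # cross downward -> accept H0
    s = 0.0
    for llr in loglik_ratios:
        s += llr
        if s >= upper:
            return "accept H1 (associated)"
        if s <= lower:
            return "accept H0 (not associated)"
    return "keep sampling"

# Toy example: allele transmissions, H0: p = 0.5 vs H1: p = 0.6
rng = np.random.default_rng(4)
transmitted = rng.random(200) < 0.6
llrs = [math.log((0.6 if t else 0.4) / 0.5) for t in transmitted]
print(sprt(llrs))
```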

  2. The necessity of microscopy to characterize the optical properties of size-selected, nonspherical aerosol particles.

    Science.gov (United States)

    Veghte, Daniel P; Freedman, Miriam A

    2012-11-06

    It is currently unknown whether mineral dust causes a net warming or cooling effect on the climate system. This uncertainty stems from the varied and evolving shape and composition of mineral dust, which leads to diverse interactions of dust with solar and terrestrial radiation. To investigate these interactions, we have used a cavity ring-down spectrometer to study the optical properties of size-selected calcium carbonate particles, a reactive component of mineral dust. The size selection of nonspherical particles like mineral dust can differ from spherical particles in the polydispersity of the population selected. To calculate the expected extinction cross sections, we use Mie scattering theory for monodisperse spherical particles and for spherical particles with the polydispersity observed in transmission electron microscopy images. Our results for calcium carbonate are compared to the well-studied system of ammonium sulfate. While ammonium sulfate extinction cross sections agree with Mie scattering theory for monodisperse spherical particles, the results for calcium carbonate deviate at large and small particle sizes. We find good agreement for both systems, however, between the calculations performed using the particle images and the cavity ring-down data, indicating that both ammonium sulfate and calcium carbonate can be treated as polydisperse spherical particles. Our results indicate that having an independent measure of polydispersity is essential for understanding the optical properties of nonspherical particles measured with cavity ring-down spectroscopy. Our combined spectroscopy and microscopy techniques demonstrate a novel method by which cavity ring-down spectroscopy can be extended for the study of more complex aerosol particles.

  3. In search of genetic constraints limiting the evolution of egg size: direct and correlated responses to artificial selection on a prenatal maternal effector.

    Science.gov (United States)

    Pick, J L; Hutter, P; Tschirren, B

    2016-06-01

    Maternal effects are an important force in nature, but the evolutionary dynamics of the traits that cause them are not well understood. Egg size is known to be a key mediator of prenatal maternal effects with an established genetic basis. In contrast to theoretical expectations for fitness-related traits, there is a large amount of additive genetic variation in egg size observed in natural populations. One possible mechanism for the maintenance of this variation is through genetic constraints caused by a shared genetic basis among traits. Here we created replicated, divergent selection lines for maternal egg investment in Japanese quail (Coturnix japonica) to quantify the role of genetic constraints in the evolution of egg size. We found that egg size responds rapidly to selection, accompanied by a strong response in all egg components. Initially, we observed a correlated response in body size, but this response declined over time, showing that egg size and body size can evolve independently. Furthermore, no correlated response in fecundity (measured as the proportion of days on which a female laid an egg) was observed. However, the response to selection was asymmetrical, with egg size plateauing after one generation of selection in the high but not the low investment lines. We attribute this pattern to the presence of genetic asymmetries, caused by directional dominance or unequal allele frequencies. Such asymmetries may contribute to the evolutionary stasis in egg size observed in natural populations, despite a positive association between egg size and fitness.

  4. The nano-fractal structured tungsten oxides films with high thermal stability prepared by the deposition of size-selected W clusters

    Energy Technology Data Exchange (ETDEWEB)

    Park, Eun Ji; Kim, Young Dok [Sungkyunkwan University, Department of Chemistry, Suwon (Korea, Republic of); Dollinger, Andreas; Huether, Lukas; Blankenhorn, Moritz; Koehler, Kerstine; Gantefoer, Gerd [Konstanz University, Department of Physics, Constance (Germany); Seo, Hyun Ook [Sangmyung University, Department of Chemistry and Energy Engineering, Seoul (Korea, Republic of)

    2017-06-15

    Size-selected W_n⁻ clusters (n = 1650) were deposited on the highly ordered pyrolytic graphite surface at room temperature under high-vacuum conditions by utilizing a magnetron sputtering source and a magnetic sector field. Moreover, the geometrical structure and surface chemical states of the deposited clusters were analyzed by in situ scanning tunneling microscopy (STM) and X-ray photoelectron spectroscopy, respectively. The formation of 2-D islands (lateral size ~150 nm) with multiple dendritic arms was observed by STM, and the structure of the individual W₁₆₅₀ clusters survived within the dendritic arms. To study the thermal stability of the nano-fractal structure under atmospheric conditions, the sample was brought to ambient air and sequentially post-annealed at 200, 300, and 500 °C in the air. The nano-fractal structure was maintained after the 1st post-annealing process at 200 °C for 1 h in the air, and the subsequent 2nd post-annealing at 300 °C (for 1 h, in the air) also did not induce any noticeable change in the topological structure of the sample. Topological changes were observed only after further post-annealing at a higher temperature (500 °C, 1 h) in the air. We show the high potential of these nano-structured tungsten oxide films for use under ambient conditions. (orig.)

  5. Cyclohexane selective photocatalytic oxidation by anatase TiO2: influence of particle size and crystallinity

    NARCIS (Netherlands)

    Carneiro, J.T.; Carneiro, Joana T.; Almeida, A.R.; Almeida, Ana R.; Moulijn, Jacob A.; Mul, Guido

    2010-01-01

    A systematic study is presented on the effect of crystallite size of Anatase (Hombikat, Sachtleben), varied by calcination at different temperatures up to 800 °C, on photocatalytic activity in cyclohexane selective oxidation. Two different reactors were used to test the materials: a top illumination

  6. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will reflect optimally the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.

  7. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
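
    One common algebraic adjustment of the kind discussed rescales an item chi-square to a nominal sample size before computing its p-value, since under misfit the statistic grows roughly in proportion to N. The sketch below is that generic rescaling, not RUMM's exact implementation; the example numbers are invented.

```python
from scipy.stats import chi2

def adjusted_item_fit(chisq, df, n, n_adj=500):
    """Rescale an item chi-square from the actual sample size n to a
    nominal n_adj, then recompute the p-value on the same df."""
    chisq_adj = chisq * n_adj / n
    return chisq_adj, chi2.sf(chisq_adj, df)

# A hypothetical item: apparent misfit at N = 2500 becomes unremarkable
# when restandardized to N = 500
print(adjusted_item_fit(chisq=45.2, df=9, n=2500, n_adj=500))
```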

  8. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
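
    The core of the expected-profit formulation can be sketched for a single two-arm trial: power at the company's believed effect size times the market payoff, minus enrollment cost, maximized over n. A fuller Bayesian treatment would integrate the power over a prior on the effect; all economic inputs below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.stats import norm

# Illustrative economics (hypothetical numbers)
cost_per_patient = 2.0e4        # $ per enrolled patient
market_value    = 5.0e8         # $ payoff if the trial succeeds
delta, sigma    = 0.30, 1.0     # company's believed effect size and SD
alpha           = 0.025         # regulator's one-sided significance level

def expected_profit(n_per_arm):
    z_crit = norm.ppf(1 - alpha)
    se = sigma * np.sqrt(2.0 / n_per_arm)
    power = norm.sf(z_crit - delta / se)          # P(significant result)
    return power * market_value - 2 * n_per_arm * cost_per_patient

ns = np.arange(50, 3001, 10)
best = ns[np.argmax([expected_profit(n) for n in ns])]
print("profit-maximizing n per arm:", best)
```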

  9. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

  10. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    International Nuclear Information System (INIS)

    Haugboel, Steven; Pinborg, Lars H.; Arfan, Haroon M.; Froekjaer, Vibe M.; Svarer, Claus; Knudsen, Gitte M.; Madsen, Jacob; Dyrby, Tim B.

    2007-01-01

    To determine the reproducibility of measurements of brain 5-HT₂A receptors with an [¹⁸F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. For assessment of the variability, six subjects were investigated with [¹⁸F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [¹⁸F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. In cortical brain regions with a high density of 5-HT₂A receptors, the outcome parameter (binding potential, BP₁) showed high reproducibility, with a median difference between the two measurements of 6% (range 5-12%), whereas in regions with a low receptor density, BP₁ reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high receptor density is approximately 27, whereas for low receptor binding regions the required sample size is substantially higher. This study demonstrates that [¹⁸F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT₂A receptor density. Moreover, partial volume correction considerably reduces the sample size required to detect regional changes between groups. (orig.)
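
    The sample size calculation behind the reported figure is the standard two-sample comparison of means given a between-subject SD; with a coefficient of variation near 25%, it lands close to the ~27 subjects quoted above. A sketch (the CV is an illustrative assumption, not taken from the paper):

```python
import math
from scipy.stats import norm

def n_per_group(sd, mean, rel_diff=0.20, alpha=0.05, power=0.8):
    """Two-sample n per group to detect a relative difference `rel_diff`
    in group means, given the between-subject SD of the outcome."""
    delta = rel_diff * mean
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

# An assumed between-subject CV of 25% for BP1 in high-binding regions
# gives roughly the order of the ~27 subjects quoted above
print(n_per_group(sd=0.25, mean=1.0))
```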

  11. Bell-shaped size selection in a bottom trawl: A case study for Nephrops directed fishery with reduced catches of cod

    DEFF Research Database (Denmark)

    Lövgren, Johan; Herrmann, Bent; Feekings, Jordan P.

    2016-01-01

    ... and size selectivity have motivated the development of selective systems in trawl fisheries that utilize more than one selective device simultaneously. An example can be found in the Swedish demersal trawl fishery targeting Norway lobster (Nephrops norvegicus), which simultaneously aims at avoiding catches...

  12. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  13. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
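
    A minimal sketch of the parametric route the two records above recommend: fitting a GPD to threshold excesses by L-moments and reading off high quantiles. It uses Hosking's parameterization F(x) = 1 - (1 - k*x/a)^(1/k), in which k < 0 marks a heavy tail; the toy sample below stands in for the station data.

```python
import numpy as np

def gpd_lmoments(excesses):
    """Fit a GPD to threshold excesses by L-moments (Hosking's form):
    k = l1/l2 - 2 and a = l1*(1 + k)."""
    x = np.sort(np.asarray(excesses, float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))   # unbiased E[X*F(X)]
    l1, l2 = b0, 2.0 * b1 - b0                      # first two L-moments
    k = l1 / l2 - 2.0
    return k, l1 * (1.0 + k)

def gpd_quantile(p, k, a):
    """Return level at non-exceedance probability p."""
    return a / k * (1.0 - (1.0 - p) ** k)

rng = np.random.default_rng(5)
sample = 5.0 * rng.pareto(3.0, 40)        # a small, heavy-tailed sample
k, a = gpd_lmoments(sample)
print(k, a, gpd_quantile(0.99, k, a))     # k < 0: heavy tail
```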

  14. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  15. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    International Nuclear Information System (INIS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-01-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers

  16. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
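
    RCIC itself compares models across within-cluster subsamples; the resampling idea it builds on is easy to sketch for a simple statistic. Drawing one observation per cluster weights clusters equally, which is what protects against informative cluster size. The toy data below, in which cluster size correlates with the outcome, are an assumption for illustration.

```python
import numpy as np

def wcr_mean(clusters, n_resamples=2000, seed=0):
    """Within-cluster resampling: draw one observation per cluster,
    average, and repeat; report the mean and variance of the resampled
    statistics. Each cluster gets equal weight regardless of its size."""
    rng = np.random.default_rng(seed)
    stats = [np.mean([rng.choice(c) for c in clusters])
             for _ in range(n_resamples)]
    return np.mean(stats), np.var(stats, ddof=1)

rng = np.random.default_rng(6)
# Outcome level rises with cluster size -> informative cluster size
clusters = [size / 10.0 + rng.normal(0.0, 0.2, size)
            for size in (2, 2, 3, 10, 20)]
print("WCR mean:", wcr_mean(clusters)[0])                 # near the cluster-level mean
print("pooled mean:", np.mean(np.concatenate(clusters)))  # pulled up by big clusters
```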

  17. Is portion size selection associated with expected satiation, perceived healthfulness or expected tastiness? A case study on pizza using a photograph-based computer task.

    Science.gov (United States)

    Labbe, D; Rytz, A; Godinot, N; Ferrage, A; Martin, N

    2017-01-01

    The increase in portion sizes over the last 30 years is considered one of the factors underlying overconsumption. Past research on the drivers of portion selection showed that larger portions are selected for foods delivering low expected satiation. However, the respective contributions of expected satiation and two other potential drivers of portion size selection, i.e. perceived healthfulness and expected tastiness, have never been explored. In this study, we jointly explored the roles of expected satiation, perceived healthfulness and expected tastiness when selecting portions within a range of six commercial pizzas varying in their toppings and brands. For each product, 63 pizza consumers selected a portion size that would satisfy them for lunch and scored their expected satiation, perceived healthfulness and expected tastiness. Six participants selected an entire pizza as their ideal portion regardless of topping or brand, so their data sets were excluded and the analyses were completed on the responses of the remaining 57 participants. Hierarchical multiple regression analyses showed that portion size variance was predicted by the perceived healthfulness and expected tastiness variables. Two sub-groups of participants with different portion size patterns across pizzas were identified through post-hoc exploratory analysis. The explanatory power of the regression model was significantly improved by adding interaction terms between sub-group and expected satiation and between sub-group and perceived healthfulness. Analysis at the sub-group level showed either a positive or a negative association between portion size and expected satiation, depending on the sub-group: for one group, portion size selection was more health-driven and for the other, more hedonically driven. These results show that even for a well-liked product category, perceived healthfulness can be an important factor influencing portion size decisions. Copyright © 2016
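    The hierarchical regression described above is straightforward to sketch. In the fragment below, the data file, column names, and sub-group coding are all hypothetical, and ordinary least squares via statsmodels stands in for whatever software the authors used; the point is the two-step structure: fit the main-effects model, then test whether the sub-group interaction terms improve it:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per participant x pizza, with the
# selected portion size, the three candidate drivers, and the sub-group
# label identified in the post-hoc exploratory analysis.
df = pd.read_csv("portion_data.csv")

# Step 1: main effects of the three candidate drivers only.
base = smf.ols(
    "portion_size ~ satiation + healthfulness + tastiness", data=df
).fit()

# Step 2: add sub-group interactions with expected satiation and
# perceived healthfulness, mirroring the model reported in the study.
full = smf.ols(
    "portion_size ~ satiation + healthfulness + tastiness"
    " + C(subgroup):satiation + C(subgroup):healthfulness",
    data=df,
).fit()

# F-test for the nested comparison: do the interactions add explanatory power?
print(anova_lm(base, full))
print(f"R^2: {base.rsquared:.3f} -> {full.rsquared:.3f}")
```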

  18. Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions

    International Nuclear Information System (INIS)

    John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.

    2000-01-01

    Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will most likely be exceeded at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about the sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used, and total reflection x-ray fluorescence analysis (TXRF) was applied alongside other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built in which the particle size classes 10-2.5 μm and 2.5-1.0 μm were impacted directly onto TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so-called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM 10/PM 2.5 cascade impactor was

  19. Gender Wage Gap: A Semi-Parametric Approach With Sample Selection Correction

    NARCIS (Netherlands)

    Picchio, M.; Mussida, C.

    2010-01-01

    Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates

  20. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines, and they almost always contain measurement error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, point estimates of the reliability of composite measures are fallible, and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method plans sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies, and we demonstrate how to implement them with freely available software. ©2011 The British Psychological Society.
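    To make the expected-width idea concrete, the sketch below applies the same logic to Cronbach's alpha, a reliability coefficient whose F-based confidence interval (Feldt, 1965) has a closed form; this is an illustration of the approach, not the authors' method, which targets composite reliability coefficients more generally, and the function names are illustrative. Given a hypothesized population reliability, the planning function searches for the smallest n whose interval is narrower than the target width W:

```python
from scipy import stats

def alpha_ci_width(alpha, n, k, conf=0.95):
    """Width of the F-based confidence interval for Cronbach's alpha
    with n subjects and k items, evaluated at a hypothesized alpha."""
    df1, df2 = n - 1, (n - 1) * (k - 1)
    gamma = 1 - conf
    lower = 1 - (1 - alpha) * stats.f.ppf(1 - gamma / 2, df1, df2)
    upper = 1 - (1 - alpha) * stats.f.ppf(gamma / 2, df1, df2)
    return upper - lower

def plan_n(alpha, k, target_width, conf=0.95, n_max=100_000):
    """Smallest n whose confidence interval width falls below
    target_width (the expected-width criterion; the assurance variant
    would instead require the realized width to beat the target with,
    say, 99% probability over repeated sampling)."""
    n = 5
    while alpha_ci_width(alpha, n, k, conf) > target_width and n < n_max:
        n += 1
    return n

# e.g. an 8-item scale with hypothesized reliability .80 and a
# desired 95% CI narrower than .10 units:
print(plan_n(alpha=0.80, k=8, target_width=0.10))
```

    In practice the hypothesized reliability would come from a pilot study or prior literature, and the search would be rerun over a plausible range of values to see how sensitive the planned n is to that assumption.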