WorldWideScience

Sample records for sample sizes ranging

  1. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate and precise. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators who use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
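
    A minimal sketch of this subsampling idea in Python (my reconstruction, not the authors' code; the GPS fixes and subset sizes are invented) draws random subsets of locations and tracks how minimum convex polygon (MCP) area and its variability change with sample size:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
fixes = rng.normal(0.0, 5.0, size=(400, 2))  # hypothetical GPS fixes, km

for n in (15, 60, 120, 400):
    # MCP area over 200 random subsets of n fixes
    areas = [ConvexHull(fixes[rng.choice(len(fixes), n, replace=False)]).volume
             for _ in range(200)]  # ConvexHull.volume is the area in 2-D
    print(f"n={n:3d}  mean MCP area={np.mean(areas):6.1f} km^2  "
          f"CV={np.std(areas) / np.mean(areas):.2f}")
```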

  2. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
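
    A rough sketch of the scenario where a trial reports only the minimum a, median m, maximum b and sample size n. The constants below follow a standard normal order-statistic (Blom-type) approximation for the expected range; verify against the paper's own formulas before use:

```python
from scipy.stats import norm

def mean_sd_from_median_range(a, m, b, n):
    """Estimate mean and SD from (min, median, max, n) of one trial."""
    mean = (a + 2 * m + b) / 4.0                  # Hozo-style mean estimate
    xi = 2 * norm.ppf((n - 0.375) / (n + 0.25))   # approx. expected range of n N(0,1) draws
    sd = (b - a) / xi                             # range estimator incorporating n
    return mean, sd

# illustrative trial: min 10, median 25, max 45, 50 patients
print(mean_sd_from_median_range(a=10, m=25, b=45, n=50))
```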

  3. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination, and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials.

  4. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g., an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
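
    The abstract's headline number can be reproduced under its stated assumptions (COV 25%, allowable error ±15%, two-tailed one-sample t-test): the standardized effect size is 0.15/0.25 = 0.6, and standard power software gives n ≈ 24 for 80% power. A sketch:

```python
from statsmodels.stats.power import TTestPower

# effect size = allowable error / COV = 0.15 / 0.25 = 0.6
n = TTestPower().solve_power(effect_size=0.15 / 0.25, alpha=0.05,
                             power=0.80, alternative='two-sided')
print(round(n))  # ~24, matching the abstract
```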

  5. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample sizes ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
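
    A Monte Carlo sketch of the survey's precision question (the lognormal mock population is my assumption, not the authors' data): how wide is the 95% interval of a sample mean, expressed as a fraction of the population median, as sample size grows?

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.lognormal(mean=2.0, sigma=0.5, size=20_000)  # mock DLP values
med = np.median(population)

for n in (10, 20, 100, 500):
    # sampling distribution of the mean for samples of size n
    means = [rng.choice(population, n).mean() for _ in range(2000)]
    lo, hi = np.percentile(means, [2.5, 97.5])
    print(f"n={n:4d}  CI95/med = {(hi - lo) / med:.1%}")
```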

  6. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample sizes ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  7. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
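
    As a worked example of the standard formula this article alludes to for a categorical outcome, n = z²·p·(1 − p)/d²; the values below are illustrative, not from the article:

```python
from math import ceil
from scipy.stats import norm

p, d = 0.30, 0.05                        # expected proportion, required precision
z = norm.ppf(0.975)                      # 1.96 for 95% confidence
n = ceil(z**2 * p * (1 - p) / d**2)      # before any finite-population correction
print(n)                                 # 323
```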

  8. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  9. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
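
    A minimal contrast between CMCS and DS as described here (my sketch for a standard normal input): DS fixes the sample values at deterministic quantiles and randomizes only their order.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 100
cmcs = rng.normal(size=n)                        # fully random draws
ds = norm.ppf((np.arange(n) + 0.5) / n)          # deterministic quantile values
rng.shuffle(ds)                                  # random permutation only

print(f"CMCS sample std: {cmcs.std(ddof=1):.3f}")  # varies run to run
print(f"DS   sample std: {ds.std(ddof=1):.3f}")    # essentially fixed by design
```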

  10. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
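
    A delta-method sketch of the multiplier estimator N̂ = M/P described above (all numbers are illustrative; the paper's own variance formulas should be preferred):

```python
from math import sqrt

M = 5_000          # unique objects distributed (assumed known exactly)
P = 0.25           # proportion reporting receipt in the RDS survey
n = 400            # RDS survey sample size
deff = 2.0         # assumed design effect of the RDS survey

var_P = deff * P * (1 - P) / n           # design-effect-inflated variance of P
N_hat = M / P
se_N = M * sqrt(var_P) / P**2            # delta method for g(P) = M / P
print(f"N = {N_hat:.0f} +/- {1.96 * se_N:.0f} (95% CI half-width)")
```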

  11. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  12. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
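
    A numerical illustration of the fallacy (fabricated data): with 100,000 observations per group, a trivial mean difference of 0.02 SD is extremely statistically significant yet practically negligible.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
a = rng.normal(0.00, 1.0, 100_000)
b = rng.normal(0.02, 1.0, 100_000)   # trivially shifted group

t, p = ttest_ind(a, b)
# Cohen's d: mean difference over pooled SD
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p = {p:.2g}, Cohen's d = {d:.3f}")  # tiny p, negligible effect size
```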

  13. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power … and during data collection of a qualitative study is discussed.

  14. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and a parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at the estimation of the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across the Azov-Black Seas localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods in searching for the optimum sample size and an understanding of the expected precision level of the mean. Given the superior performance of the BLB relative to formulae with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on the CIs' width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5 (with 1.6× and 1× the mean of the CIs' width), and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed to low values; a sample size of 10 host individuals yielded unreliable estimates.
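
    A sketch of the formula-based branch of this approach for negative-binomially distributed counts (an Elliott-style formula, not necessarily the exact one used in the paper): the number of hosts needed so that the standard error of the mean abundance is a fraction D of the mean.

```python
from math import ceil

def nb_sample_size(mean_abund, k, D):
    # Var(NB) = m + m^2/k, so (SE/mean)^2 = (1/m + 1/k) / n
    return ceil((1.0 / mean_abund + 1.0 / k) / D**2)

# illustrative values for a highly aggregated parasite distribution
print(nb_sample_size(mean_abund=12.0, k=0.5, D=0.2))  # 53 hosts
```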

  15. Are range-size distributions consistent with species-level heritability?

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe; Gotelli, Nicholas; Rahbek, Carsten

    2012-01-01

    The concept of species-level heritability is widely contested. Because it is most likely to apply to emergent, species-level traits, one of the central discussions has focused on the potential heritability of geographic range size. However, a central argument against range-size heritability has been that it is not compatible with the observed shape of present-day species range-size distributions (SRDs), a claim that has never been tested. To assess this claim, we used forward simulation of range-size evolution in clades with varying degrees of range-size heritability, and compared the output…

  16. Free-ranging farm cats: home range size and predation on a livestock unit in Northwest Georgia.

    Science.gov (United States)

    Kitts-Morgan, Susanna E; Caires, Kyle C; Bohannon, Lisa A; Parsons, Elizabeth I; Hilburn, Katharine A

    2015-01-01

    This study's objective was to determine seasonal and diurnal vs. nocturnal home range size, as well as predation for free-ranging farm cats at a livestock unit in Northwest Georgia. Seven adult cats were tracked with attached GPS units for up to two weeks for one spring and two summer seasons from May 2010 through August 2011. Three and five cats were tracked for up to two weeks during the fall and winter seasons, respectively. Feline scat was collected during this entire period. Cats were fed a commercial cat food daily. There was no seasonal effect (P > 0.05) on overall (95% KDE and 90% KDE) or core home range size (50% KDE). Male cats tended (P = 0.08) to have larger diurnal and nocturnal core home ranges (1.09 ha) compared to female cats (0.64 ha). Reproductively intact cats (n = 2) had larger (P < 0.0001) diurnal and nocturnal home ranges as compared to altered cats. Feline scat processing separated scat into prey parts, and of the 210 feline scats collected during the study, 75.24% contained hair. Of these 158 scat samples, 86 contained non-cat hair and 72 contained only cat hair. Other prey components included fragments of bone in 21.43% of scat and teeth in 12.86% of scat. Teeth were used to identify mammalian prey hunted by these cats, of which the Hispid cotton rat (Sigmodon hispidus) was the primary rodent. Other targeted mammals were Peromyscus sp., Sylvilagus sp. and Microtus sp. Invertebrates and birds were less important as prey, but all mammalian prey identified in this study consisted of native animals. While the free-ranging farm cats in this study did not adjust their home range seasonally, sex and reproductive status did increase diurnal and nocturnal home range size. Ultimately, larger home ranges of free-ranging cats could negatively impact native wildlife.

  17. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will waste time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.

  18. Free-Ranging Farm Cats: Home Range Size and Predation on a Livestock Unit In Northwest Georgia

    Science.gov (United States)

    Kitts-Morgan, Susanna E.; Caires, Kyle C.; Bohannon, Lisa A.; Parsons, Elizabeth I.; Hilburn, Katharine A.

    2015-01-01

    This study’s objective was to determine seasonal and diurnal vs. nocturnal home range size, as well as predation for free-ranging farm cats at a livestock unit in Northwest Georgia. Seven adult cats were tracked with attached GPS units for up to two weeks for one spring and two summer seasons from May 2010 through August 2011. Three and five cats were tracked for up to two weeks during the fall and winter seasons, respectively. Feline scat was collected during this entire period. Cats were fed a commercial cat food daily. There was no seasonal effect (P > 0.05) on overall (95% KDE and 90% KDE) or core home range size (50% KDE). Male cats tended (P = 0.08) to have larger diurnal and nocturnal core home ranges (1.09 ha) compared to female cats (0.64 ha). Reproductively intact cats (n = 2) had larger (P < 0.0001) diurnal and nocturnal home ranges as compared to altered cats. Feline scat processing separated scat into prey parts, and of the 210 feline scats collected during the study, 75.24% contained hair. Of these 158 scat samples, 86 contained non-cat hair and 72 contained only cat hair. Other prey components included fragments of bone in 21.43% of scat and teeth in 12.86% of scat. Teeth were used to identify mammalian prey hunted by these cats, of which the Hispid cotton rat (Sigmodon hispidus) was the primary rodent. Other targeted mammals were Peromyscus sp., Sylvilagus sp. and Microtus sp. Invertebrates and birds were less important as prey, but all mammalian prey identified in this study consisted of native animals. While the free-ranging farm cats in this study did not adjust their home range seasonally, sex and reproductive status did increase diurnal and nocturnal home range size. Ultimately, larger home ranges of free-ranging cats could negatively impact native wildlife. PMID:25894078

  19. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased …
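
    A parametric-bootstrap power sketch in the spirit of this study (the gamma parameters and the mean-to-variance link below are invented, not the West Virginia fit): estimate the probability that an n-fish sample detects a true mean 1 mg Se/kg above a 4 mg Se/kg threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
threshold, true_mean, n, alpha = 4.0, 5.0, 8, 0.05
var = 0.5 * true_mean**1.5            # assumed mean-to-variance relationship
shape, scale = true_mean**2 / var, var / true_mean

reject = 0
for _ in range(4000):
    x = rng.gamma(shape, scale, size=n)                      # simulated fish sample
    t, p = stats.ttest_1samp(x, popmean=threshold, alternative='greater')
    reject += (p < alpha)
print(f"power ~ {reject / 4000:.2f}")
```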

  20. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  1. Thermal barriers constrain microbial elevational range size via climate variability.

    Science.gov (United States)

    Wang, Jianjun; Soininen, Janne

    2017-08-01

    Range size is invariably limited and understanding range size variation is an important objective in ecology. However, microbial range size across geographical gradients remains understudied, especially on mountainsides. Here, the patterns of range size of stream microbes (i.e., bacteria and diatoms) and macroorganisms (i.e., macroinvertebrates) along elevational gradients in Asia and Europe were examined. In bacteria, elevational range size showed non-significant phylogenetic signals. In all taxa, there was a positive relationship between niche breadth and species elevational range size, driven by local environmental and climatic variables. No taxa followed the elevational Rapoport's rule. Climate variability explained the most variation in microbial mean elevational range size, whereas local environmental variables were more important for macroinvertebrates. Seasonal and annual climate variation showed negative effects, while daily climate variation had positive effects on community mean elevational range size for all taxa. The negative correlation between range size and species richness suggests that understanding the drivers of range is key for revealing the processes underlying diversity. The results advance the understanding of microbial species thermal barriers by revealing the importance of seasonal and diurnal climate variation, and highlight that aquatic and terrestrial biota may differ in their response to short- and long-term climate variability. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  2. Free-ranging farm cats: home range size and predation on a livestock unit in Northwest Georgia.

    Directory of Open Access Journals (Sweden)

    Susanna E Kitts-Morgan

    This study's objective was to determine seasonal and diurnal vs. nocturnal home range size, as well as predation for free-ranging farm cats at a livestock unit in Northwest Georgia. Seven adult cats were tracked with attached GPS units for up to two weeks for one spring and two summer seasons from May 2010 through August 2011. Three and five cats were tracked for up to two weeks during the fall and winter seasons, respectively. Feline scat was collected during this entire period. Cats were fed a commercial cat food daily. There was no seasonal effect (P > 0.05) on overall (95% KDE and 90% KDE) or core home range size (50% KDE). Male cats tended (P = 0.08) to have larger diurnal and nocturnal core home ranges (1.09 ha) compared to female cats (0.64 ha). Reproductively intact cats (n = 2) had larger (P < 0.0001) diurnal and nocturnal home ranges as compared to altered cats. Feline scat processing separated scat into prey parts, and of the 210 feline scats collected during the study, 75.24% contained hair. Of these 158 scat samples, 86 contained non-cat hair and 72 contained only cat hair. Other prey components included fragments of bone in 21.43% of scat and teeth in 12.86% of scat. Teeth were used to identify mammalian prey hunted by these cats, of which the Hispid cotton rat (Sigmodon hispidus) was the primary rodent. Other targeted mammals were Peromyscus sp., Sylvilagus sp. and Microtus sp. Invertebrates and birds were less important as prey, but all mammalian prey identified in this study consisted of native animals. While the free-ranging farm cats in this study did not adjust their home range seasonally, sex and reproductive status did increase diurnal and nocturnal home range size. Ultimately, larger home ranges of free-ranging cats could negatively impact native wildlife.

  3. Interspecific geographic range size-body size relationship and the diversification dynamics of Neotropical furnariid birds.

    Science.gov (United States)

    Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M

    2018-05-01

    Among the earliest macroecological patterns documented is the range and body size relationship, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications for lineage evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences for lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed and the minimum range sizes required by the species' body size, to untangle its role in the diversification of a Neotropical species-rich bird clade using trait-dependent diversification models. We show that speciation rate is a positive hump-shaped function of the distance to the lower boundary. The species with the highest and lowest distances to minimum range size had lower speciation rates, while species close to medium distance values had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  4. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)

  5. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  6. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...

  7. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  8. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure …
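
    For comparison, the widely used approximation this paper offers an alternative to (not the authors' noncentrality-based measure) inflates an individually randomized sample size by the design effect deff = 1 + ((CV² + 1)·m̄ − 1)·ρ, where m̄ is the mean cluster size, CV the cluster-size coefficient of variation and ρ the intracluster correlation:

```python
from math import ceil

def cluster_adjusted_n(n_individual, m_bar, cv, icc):
    """Per-arm sample size after inflating by the variable-cluster-size design effect."""
    deff = 1 + ((cv**2 + 1) * m_bar - 1) * icc
    return ceil(n_individual * deff)

# e.g. 128 subjects/arm unclustered; mean cluster size 20, CV 0.6, ICC 0.05
print(cluster_adjusted_n(128, m_bar=20, cv=0.6, icc=0.05))  # 296
```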

  9. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.

  10. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
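
    The rule of thumb under test comes from the classic problem-discovery model, in which the expected fraction of problems found by n users is 1 − (1 − p)ⁿ for a per-user detection probability p. A small sketch (p values illustrative; 0.31 is the figure commonly attributed to Nielsen) shows how sensitive the "magic number" is to p:

```python
def users_needed(p, target=0.80):
    """Smallest n with 1 - (1 - p)**n >= target."""
    n, found = 0, 0.0
    while found < target:
        n += 1
        found = 1 - (1 - p) ** n
    return n

for p in (0.31, 0.10):
    print(f"p={p:.2f}: {users_needed(p)} users for 80% discovery")  # 5 vs. 16
```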

  11. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  12. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  13. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In recent decades, an increasing number of studies have analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous …
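
    A minimal method-of-moments (Matheron) variogram estimator of the kind evaluated here, for isotropic 2-D data (the mock "throughfall" field is invented; robust and REML variants would replace the plain mean of squared increments):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Matheron estimator: mean of 0.5*(z_i - z_j)^2 per distance bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # each pair counted once
    d, sq = d[iu], sq[iu]
    return [sq[(d >= lo) & (d < hi)].mean()
            for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]

rng = np.random.default_rng(5)
coords = rng.uniform(0, 50, size=(150, 2))           # mock collector locations, m
values = np.sin(coords[:, 0] / 10) + rng.normal(0, 0.3, 150)
print(np.round(empirical_variogram(coords, values, np.arange(0, 30, 5)), 3))
```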

  14. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the product method is recommended in practice because it requires less computation time than the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
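
    The Sobel statistic benchmarked in this paper takes a few lines: z = ab / √(a²s_b² + b²s_a²) for path estimates a (X→M) and b (M→Y adjusting for X) with standard errors s_a and s_b (the numbers below are illustrative):

```python
from math import sqrt
from scipy.stats import norm

a, sa = 0.40, 0.10     # X -> mediator path estimate and SE
b, sb = 0.35, 0.12     # mediator -> Y path (adjusted for X) and SE

z = (a * b) / sqrt(a**2 * sb**2 + b**2 * sa**2)
p = 2 * norm.sf(abs(z))                 # two-sided p-value
print(f"Sobel z = {z:.2f}, p = {p:.3f}")
```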

  15. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  16. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    40 CFR 80.127 (2010-07-01): Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); AIR PROGRAMS (CONTINUED); REGULATION OF FUELS AND FUEL ADDITIVES; Attest Engagements; § 80.127 Sample size guidelines. In performing the...

  17. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  19. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size lets us judge whether the results published in medical papers rest on a suitable design and support their conclusions according to the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula to use, we must define the type of study: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.

  20. Hysteresis losses of magnetic nanoparticle powders in the single domain size range

    International Nuclear Information System (INIS)

    Dutz, S.; Hergt, R.; Muerbe, J.; Mueller, R.; Zeisberger, M.; Andrae, W.; Toepfer, J.; Bellemann, M.E.

    2007-01-01

    Magnetic iron oxide nanoparticle powders were investigated in order to optimise the specific hysteresis losses for biomedical heating applications. Different samples with a mean particle size in the transition range from superparamagnetic to ferromagnetic behaviour (i.e. 10-100 nm) were prepared by two different chemical precipitation routes. Additionally, the influence of milling and annealing on the hysteresis losses of the nanoparticles was investigated. Structural investigations of the samples were carried out by X-ray diffraction, measurement of specific surface area, and scanning and transmission electron microscopy. The dependence of the hysteresis losses of minor loops on the field amplitude was determined using vibrating sample magnetometry and caloric measurements. For small field amplitudes, a power law was found, which changes into saturation at amplitudes well above the coercive field. Maximum hysteresis losses of 6.6 J/kg per cycle were observed for milled powder. For field amplitudes below about 10 kA/m, which are especially interesting for medical and technical applications, the hysteresis losses of all investigated powders were at least one order of magnitude lower than values reported for magnetosomes of comparable size.

  1. Home Range Size and Resource Use of Breeding and Non-breeding White Storks Along a Land Use Gradient

    Directory of Open Access Journals (Sweden)

    Damaris Zurell

    2018-06-01

    Full Text Available Biotelemetry is increasingly used to study animal movement at high spatial and temporal resolution and to guide conservation and resource management. Yet, limited sample sizes and variation in space and habitat use across regions and life stages may compromise the robustness of behavioral analyses and subsequent conservation plans. Here, we assessed variation in (i) home range sizes, (ii) home range selection, and (iii) fine-scale resource selection of white storks across breeding status and regions, and tested model transferability. Three study areas were chosen within the Central German breeding grounds, ranging from agricultural to fluvial and marshland. We monitored GPS locations of 62 adult white storks equipped with solar-charged GPS/3D-acceleration (ACC) transmitters in 2013–2014. Home range sizes were estimated using minimum convex polygons. Generalized linear mixed models were used to assess home range selection and fine-scale resource selection by relating the home ranges and foraging sites to Corine habitat variables and the normalized difference vegetation index in a presence/pseudo-absence design. We found strong variation in home range sizes across breeding stages, with significantly larger home ranges in non-breeding compared to breeding white storks, but no variation between regions. Home range selection models had high explanatory power and predicted the overall density of Central German white stork breeding pairs well. They also showed good transferability across regions and breeding status, although variable importance varied considerably. Fine-scale resource selection models showed low explanatory power. Resource preferences differed both across breeding status and across regions, and model transferability was poor. Our results indicate that habitat selection of wild animals may vary considerably within and between populations, and is highly scale dependent. Thereby, home range scale analyses show higher robustness whereas fine-scale resource
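
    As a small aside on the method, a minimum convex polygon home range is simply the area of the convex hull of the location fixes. A minimal sketch with fabricated coordinates, assumed to be projected (metres):

    ```python
    # MCP home range area from GPS fixes; coordinates are fabricated and
    # assumed to be in a projected system (e.g. UTM), so the area is in m^2.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    fixes = rng.normal(loc=[500_000, 6_600_000], scale=3_000, size=(300, 2))

    hull = ConvexHull(fixes)
    area_km2 = hull.volume / 1e6   # for 2-D input, ConvexHull.volume is the area
    print(f"MCP home range: {area_km2:.1f} km^2 from {len(fixes)} fixes")
    ```

    Note that for two-dimensional input scipy's ConvexHull reports the enclosed area in the `volume` attribute (`area` is the perimeter), a commonly confused point.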

  2. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, no standard procedure was available to address this problem. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum number of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
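
    A simplified sketch of the pilot-resampling idea (not the authors' MATLAB/Octave DSD code): bootstrap an expanded cohort from a hypothetical pilot matrix at increasing n and count the variables that survive Benjamini-Yekutieli correction. All data and parameters below are fabricated.

    ```python
    # Pilot-based sample size exploration with BY-corrected multiple testing.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(2)
    n_pilot, n_vars = 20, 200
    pilot_a = rng.normal(0.0, 1.0, (n_pilot, n_vars))   # pilot controls
    pilot_b = rng.normal(0.0, 1.0, (n_pilot, n_vars))
    pilot_b[:, :20] += 0.8                              # 20 truly shifted variables

    for n in (20, 40, 80, 160):
        idx = rng.integers(0, n_pilot, n)               # bootstrap an expanded cohort
        _, p = stats.ttest_ind(pilot_a[idx], pilot_b[idx], axis=0)
        reject, *_ = multipletests(p, alpha=0.05, method='fdr_by')
        print(f"n per group = {n:>3}: {reject.sum()} significant variables")
    ```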

  3. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in the definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm the validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross-section and angular-distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  4. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  5. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August, 2003 in Hungary. The sampling was performed at two sites simultaneously: in Budapest (urban site) and K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling with 24 hours duration. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of the elements S, Si, Ca, W, Zn, Pb and Fe were investigated in K-puszta and in Budapest. Average rates (shown in Table 1) of the elemental concentrations were calculated for each stage (in %) from the obtained distributions. The elements can be grouped into two parts on the basis of these data. The majority of the particles containing Fe, Si, Ca, (Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentration in Budapest than in K-puszta (Fig. 1). The second group consisted of S, Pb and (W). The majority of these elements were found in the 0.25-1 μm size range, with much higher concentrations in Budapest than in K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to the above mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  6. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  7. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean absolute and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p < 0.05). Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
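
    The core computation, weighted nonlinear least squares fitting of an inverse power law to early learning-curve points followed by extrapolation, can be sketched as follows; the points, weights, and starting values are fabricated, not the authors' data.

    ```python
    # Fit y = a + b*x^c to early learning-curve points, weighting later points
    # more heavily, then extrapolate to a larger annotation budget.
    import numpy as np
    from scipy.optimize import curve_fit

    def inv_power(x, a, b, c):
        return a + b * np.power(x, c)

    x = np.array([100, 200, 400, 800, 1600], dtype=float)   # annotated sample sizes
    y = np.array([0.71, 0.78, 0.82, 0.85, 0.87])            # observed accuracy
    w = np.sqrt(x)                                          # trust later points more

    popt, _ = curve_fit(inv_power, x, y, p0=(0.9, -1.0, -0.5),
                        sigma=1.0 / w, absolute_sigma=False)
    print("predicted accuracy at n=5000:", round(inv_power(5000, *popt), 3))
    ```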

  8. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference tests for data with the design of one factor with two levels, including sample size estimation formulas and their realization based on the formulas and the POWER procedure of SAS software, for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will help researchers implement the repetition principle during the research design phase.

  9. Diversification Rates and the Evolution of Species Range Size Frequency Distribution

    Directory of Open Access Journals (Sweden)

    Silvia Castiglione

    2017-11-01

    Full Text Available The geographic range size frequency distribution (RFD) within clades is typically right-skewed with untransformed data, and bell-shaped or slightly left-skewed under the log-transformation. This means that most species within clades occupy diminutive ranges, whereas just a few species are truly widespread. A number of ecological and evolutionary explanations have been proposed to account for this pattern. Among the latter, much attention has been given to the issue of how extinction and speciation probabilities influence RFD. Numerous accounts now convincingly demonstrate that extinction rate decreases with range size, both in living and extinct taxa. The relationship between range size and speciation rate, though, is much less obvious, with either small or large ranged species being proposed to originate more daughter taxa. Herein, we used a large fossil database including 21 animal clades and more than 80,000 fossil occurrences distributed over more than 400 million years of marine metazoan (exclusive of vertebrates) evolution to test the relationship between extinction rate, speciation rate, and range size. As expected, we found that extinction rate almost linearly decreases with range size. In contrast, speciation rate peaks at the large (but not the largest) end of the range size spectrum. This is consistent with the peripheral isolation mode of allopatric speciation being the main mechanism of species origination. The huge variation in phylogeny, fossilization potential, time of fossilization, and the overarching effect of mass extinctions suggest that caution must be exercised in generalizing our results, as individual clades may deviate significantly from the general pattern.

  10. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. When the objective is to reject all endpoints and the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without and with correlation adjustment and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
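
    The effect of correlation on joint power can be sketched with a bivariate normal rectangle probability. The following is a simplified illustration for two one-sided tests that must both succeed, not the authors' exact TOST power function (which also involves the sample variance); the effect sizes, n, and rho are made up.

    ```python
    # Joint power for two correlated one-sided tests that must both succeed.
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def power_reject_both(n, eff1, eff2, rho, alpha=0.05):
        """Power that BOTH one-sided endpoint tests succeed at per-group size n."""
        z = norm.ppf(1 - alpha)
        m = np.sqrt(n / 2.0) * np.array([eff1, eff2])   # means of the test statistics
        cov = [[1.0, rho], [rho, 1.0]]
        # P(T1 > z, T2 > z) computed via the CDF of (-T1, -T2) at (-z, -z)
        return multivariate_normal(mean=-m, cov=cov).cdf(np.array([-z, -z]))

    for rho in (0.0, 0.4, 0.8):
        print(rho, round(power_reject_both(120, 0.35, 0.35, rho), 3))
    ```

    With rho = 0 the joint power is the product of the marginal powers; as rho grows, the joint power rises, which is why ignoring the correlation leads to a conservative (larger than needed) sample size.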

  11. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  12. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.

  13. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size.

  14. A Macrophysiological Analysis of Energetic Constraints on Geographic Range Size in Mammals

    Science.gov (United States)

    Ceballos, Gerardo; Steele, Michael A.

    2013-01-01

    Physiological processes are essential for understanding the distribution and abundance of organisms, and recently, with widespread attention to climate change, physiology has been ushered back to the forefront of ecological thinking. We present a macrophysiological analysis of the energetics of geographic range size using combined data on body size, basal metabolic rate (BMR), phylogeny and range properties for 574 species of mammals. We propose three mechanisms by which interspecific variation in BMR should relate positively to geographic range size: (i) Thermal Plasticity Hypothesis, (ii) Activity Levels/Dispersal Hypothesis, and (iii) Energy Constraint Hypothesis. Although each mechanism predicts a positive correlation between BMR and range size, they can be further distinguished based on the shape of the relationship they predict. We found evidence for the predicted positive relationship in two dimensions of energetics: (i) the absolute, mass-dependent dimension (BMR) and (ii) the relative, mass-independent dimension (MIBMR). The shapes of both relationships were similar and most consistent with that expected from the Energy Constraint Hypothesis, which was proposed previously to explain the classic macroecological relationship between range size and body size in mammals and birds. The fact that this pattern holds in the MIBMR dimension indicates that species with supra-allometric metabolic rates require among the largest ranges, above and beyond the increasing energy demands that accrue as an allometric consequence of large body size. The relationship is most evident at high latitudes north of the Tropics, where large ranges and elevated MIBMR are most common. Our results suggest that species that are most vulnerable to extinction from range size reductions are both large-bodied and have elevated MIBMR, but also, that smaller species with elevated MIBMR are at heightened risk. We also provide insights into the global latitudinal trends in range size and MIBMR

  15. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    Full Text Available The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m², and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (e.g., number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (e.g., number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
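
    The sampling logic above can be sketched with the usual normal-approximation formula linking sample size to the coefficient of variation and the desired confidence-interval half-width as a percentage of the mean; the authors' own calculation may iterate with the t-distribution, and the 35% CV below is a made-up value.

    ```python
    # n such that the CI half-width equals error_percent of the mean,
    # given a coefficient of variation cv_percent (normal approximation).
    import math
    from scipy.stats import norm

    def n_for_error(cv_percent, error_percent, conf=0.95):
        z = norm.ppf(0.5 + conf / 2)
        return math.ceil((z * cv_percent / error_percent) ** 2)

    print(n_for_error(35, 6))   # ~131 plants for a trait with a 35% CV
    ```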

  16. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of a sample size calculation, as well as on the accuracy of the calculations themselves. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examined the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR: -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  17. Nonbreeding home‐range size and survival of lesser prairie‐chickens

    Science.gov (United States)

    Robinson, Samantha G.; Haukos, David A.; Plumb, Reid T.; Lautenbach, Joseph M.; Sullins, Daniel S.; Kraft, John D.; Lautenbach, Jonathan D.; Hagen, Christian A.; Pitman, James C.

    2018-01-01

    The lesser prairie‐chicken (Tympanuchus pallidicinctus), a species of conservation concern with uncertain regulatory status, has experienced population declines over the past century. Most research on lesser prairie‐chickens has focused on the breeding season, with little research conducted during the nonbreeding season, a period that exerts a strong influence on demography in other upland game birds. We trapped lesser prairie‐chickens on leks and marked them with either global positioning system (GPS) satellite or very high frequency (VHF) transmitters to estimate survival and home‐range size during the nonbreeding season. We monitored 119 marked lesser prairie‐chickens in 3 study areas in Kansas, USA, from 16 September to 14 March in 2013, 2014, and 2015. We estimated home‐range size using Brownian Bridge movement models (GPS transmitters) and fixed kernel density estimators (VHF transmitters), and female survival using Kaplan–Meier known‐fate models. Average home‐range size did not differ between sexes. Estimated home‐range size was 3 times greater for individuals fitted with GPS satellite transmitters (x̄ = 997 ha) than those with VHF transmitters (x̄ = 286 ha), likely a result of the temporal resolution of the different transmitters. Home‐range size of GPS‐marked birds increased 2.8 times relative to the breeding season and varied by study area and year. Home‐range size was smaller in the 2013–2014 nonbreeding season (x̄ = 495 ha) than the following 2 nonbreeding seasons (x̄ = 1,290 ha and x̄ = 1,158 ha), corresponding with drought conditions of 2013, which were alleviated in following years. Female survival (Ŝ) was high relative to breeding season estimates, and did not differ by study area or year (Ŝ = 0.73 ± 0.04 [SE]). Future management could remain focused on the breeding season because nonbreeding survival was 39–44% greater than the previous breeding season; however, considerations of total space

  18. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  19. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not conditional on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  20. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small-size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small-size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small-size PuO_2 powder samples of homogeneous source material, as well as for dried aliquots of plutonium nitrate solutions. (author)

  1. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  2. Crystallite size variation of TiO_2 samples depending time heat treatment

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO_2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, we studied the influence of the residence time at a given heat-treatment temperature on the physical properties of TiO_2 powder. After the powder synthesis, the samples were divided and heat treated at 650 °C with a heating ramp of 3 °C/min and a residence time ranging from 0 to 20 hours, and were subsequently characterized by X-ray diffraction. Analyzing the obtained diffraction patterns, it was observed that from a 5-hour residence time onwards two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
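
    Crystallite sizes from X-ray diffraction line broadening are commonly obtained with the Scherrer equation; the sketch below shows that calculation under the assumption (not stated in the record) that this is what the authors used, with illustrative Cu Kα values.

    ```python
    # Scherrer equation: D = K*lambda / (beta*cos(theta)), with beta the peak
    # FWHM in radians. Wavelength defaults to Cu K-alpha; inputs are illustrative.
    import math

    def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
        theta = math.radians(two_theta_deg / 2)
        beta = math.radians(fwhm_deg)
        return K * wavelength_nm / (beta * math.cos(theta))

    # e.g. the anatase (101) reflection near 2-theta = 25.3 deg with 0.4 deg FWHM:
    print(f"{scherrer_size(25.3, 0.4):.1f} nm")   # ~20 nm
    ```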

  3. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high-diversity than in low-diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N_0 to N_∞ ...

  4. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model in cases of overdispersed count data, which are commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate the sample size based on the negative binomial model. Depending on different approaches to estimating the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
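
    One normal-approximation variant of such a formula can be sketched as follows (the paper derives several, differing in how the null variance is estimated); the rates, exposure time, and dispersion below are illustrative.

    ```python
    # Per-group sample size for comparing two negative binomial event rates
    # r0 and r1, with exposure t and dispersion k, using the variance under
    # the alternative: Var(log rate estimate) per subject = 1/(r*t) + k.
    import math
    from scipy.stats import norm

    def n_per_group(r0, r1, t, k, alpha=0.05, power=0.80):
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        v = (1.0 / (r0 * t) + k) + (1.0 / (r1 * t) + k)
        return math.ceil((z_a + z_b) ** 2 * v / math.log(r1 / r0) ** 2)

    print(n_per_group(r0=1.0, r1=0.7, t=1.0, k=0.8))   # ~249 per group here
    ```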

  5. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduces methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduces formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and the POWER procedure of SAS software, and elaborates them with examples, which will help researchers implement the repetition principle.

  6. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    The frictional behaviour of rocks, from the initial stage of loading to final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to limitations in rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane was captured for the analysis, with particular interest in sample-size dependency. Several important phenomena were observed in this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research into rock failure mechanisms.

  7. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

... the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  8. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order-)constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  9. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  10. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    Science.gov (United States)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and subsequently how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples at four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to take images of the PM samples. The chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. Instead, almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity in the PM

  11. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
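
    The quoted savings can be reproduced from a standard ANCOVA variance result, assuming compound symmetry: with the baseline as covariate and k equally correlated follow-ups, the variance factor relative to a two-sample t-test is f(rho) = (1 + (k-1)·rho)/k - rho^2, and the conservative choice is the rho that maximizes f. A minimal check:

    ```python
    # Conservative design factor for repeated measures ANCOVA under compound
    # symmetry, scanned over the correlation rho in [0, 1].
    import numpy as np

    rho = np.linspace(0.0, 1.0, 100_001)
    for k in (2, 3, 4):
        f = (1 + (k - 1) * rho) / k - rho**2
        i = np.argmax(f)
        print(f"k={k}: worst-case rho={rho[i]:.2f}, factor={f[i]:.4f}, "
              f"saving={100 * (1 - f[i]):.0f}%")   # ~44%, 56%, 61%, as stated above
    ```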

  12. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States) and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and of outliers. Random samples of the data set with successively

  13. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.

  14. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
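
    A minimal sketch of the recipe described above, for logistic regression with a single continuous covariate: construct two pseudo-groups whose success probabilities differ by slope × 2·SD(x) on the logit scale while preserving the overall event probability, then apply the usual two-proportion sample size formula. Parameter values are illustrative, and the details may differ from the authors' exact formulation.

    ```python
    # Equivalent two-sample problem for logistic regression power/sample size.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm
    from scipy.special import expit, logit

    def total_n(beta, sd_x, p_bar, alpha=0.05, power=0.80):
        delta = 2 * beta * sd_x                          # logit-scale group difference
        # choose p1 so the two groups average to the overall event probability
        p1 = brentq(lambda p: (p + expit(logit(p) + delta)) / 2 - p_bar, 1e-9, 1 - 1e-9)
        p2 = expit(logit(p1) + delta)
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        n_group = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
        return int(np.ceil(2 * n_group))

    print(total_n(beta=np.log(1.5), sd_x=1.0, p_bar=0.3))   # ~229 in total here
    ```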

  15. Causes and consequences of range size variation: the influence of traits, speciation, and extinction

    Directory of Open Access Journals (Sweden)

    Steven M. Vamosi

    2012-12-01

    Full Text Available The tremendous variation in species richness observed among related clades across the tree of life has long caught the imagination of biologists. Recently, there has been growing attention paid to the possible contribution of range size variation, either alone or in combination with putative key innovations, to these patterns. Here, we review three related topics relevant to range size evolution, speciation, and extinction. First, we provide a brief overview of the debate surrounding patterns and mechanisms for phylogenetic signal in range size. Second, we discuss some recent findings regarding the joint influence of traits and range size on diversification. Finally, we present the preliminary results of a study investigating whether range size is negatively correlated with contemporary extinction risk in flowering plants.

  16. Home range sizes for burchell's zebra equus burchelli antiquorum from the Kruger National Park

    Directory of Open Access Journals (Sweden)

    G.L. Smuts

    1975-07-01

    Full Text Available Annual home range sizes were determined for 49 marked zebra family groups in the Kruger National Park. Sizes varied from 49 to 566 sq. km, the mean for the Park being 164 sq. km. Mean home range sizes for different zebra sub-populations and biotic areas were found to differ considerably. Present herbivore densities have not influenced intra- and inter-specific tolerance levels to the extent that home range sizes have increased. Local habitat conditions, and particularly seasonal vegetational changes, were found to have the most profound influence on the shape and mean size of home ranges. The large home range sizes obtained in the Kruger Park, when compared to an area such as the Ngorongoro Crater, can be ascribed to a lower carrying capacity with respect to zebra, large portions of the habitat being sub-optimal, either seasonally or annually.

  17. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique for constructing adequate mathematical models from small passive samples, under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be drawn, was developed.

  18. Linking seasonal home range size with habitat selection and movement in a mountain ungulate.

    Science.gov (United States)

    Viana, Duarte S; Granados, José Enrique; Fandos, Paulino; Pérez, Jesús M; Cano-Manuel, Francisco Javier; Burón, Daniel; Fandos, Guillermo; Aguado, María Ángeles Párraga; Figuerola, Jordi; Soriguer, Ramón C

    2018-01-01

    Space use by animals is determined by the interplay between movement and the environment, and is thus mediated by habitat selection, biotic interactions and intrinsic factors of moving individuals. These processes ultimately determine home range size, but their relative contributions and dynamic nature remain less explored. We investigated the role of habitat selection, movement unrelated to habitat selection and intrinsic factors related to sex in driving space use and home range size in Iberian ibex, Capra pyrenaica. We used GPS collars to track ibex across the year in two different geographical areas of Sierra Nevada, Spain, and measured habitat variables related to forage and roost availability. By using integrated step selection analysis (iSSA), we show that habitat selection was important to explain space use by ibex. As a consequence, movement was constrained by habitat selection, as observed displacement rate was shorter than expected under null selection. Selection-independent movement, selection strength and resource availability were important drivers of seasonal home range size. Both displacement rate and directional persistence had a positive relationship with home range size while accounting for habitat selection, suggesting that individual characteristics and state may also affect home range size. Ibex living at higher altitudes, where resource availability shows stronger altitudinal gradients across the year, had larger home ranges. Home range size was larger in spring and autumn, when ibex ascend and descend back, and smaller in summer and winter, when resources are more stable. Therefore, home range size decreased with resource availability. Finally, males had larger home ranges than females, which might be explained by differences in body size and reproductive behaviour. Movement, selection strength, resource availability and intrinsic factors related to sex determined home range size of Iberian ibex. Our results highlight the need to integrate

  19. Mean latitudinal range sizes of bird assemblages in six Neotropical forest chronosequences

    DEFF Research Database (Denmark)

    Dunn, Robert R.; Romdal, Tom Skovlund

    2005-01-01

    Aim The geographical range size frequency distributions of animal and plant assemblages are among the most important factors affecting large-scale patterns of diversity. Nonetheless, the relationship between habitat type and the range size distributions of species forming assemblages remains poorly... towards more small-ranged species occurs. Even relatively old secondary forests have bird species with larger average ranges than mature forests. As a consequence, conservation of secondary forests alone will miss many of the species most at risk of extinction and most unlikely to be conserved in other...

  20. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
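
    As a hedged illustration of the closed-form variance reasoning described above (a generic sketch, not Hanley's own derivation), the snippet below sizes a study for a single regression coefficient in the etiological genre, using Var(b1) ~ sigma^2/(n * sd_x^2); all numeric inputs are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def n_for_slope(beta1, sigma_resid, sd_x, alpha=0.05, power=0.80):
    """Sample size to detect slope beta1 in simple linear regression.

    Combines the closed-form variance of the slope estimate,
    Var(b1) ~= sigma_resid**2 / (n * sd_x**2), with the usual
    normal-approximation power argument.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z * sigma_resid / (beta1 * sd_x)) ** 2)

# Hypothetical etiological scenario: detect a slope of 0.5 outcome
# units per unit of exposure, residual SD 2, exposure SD 1.
print(n_for_slope(beta1=0.5, sigma_resid=2.0, sd_x=1.0))  # -> 126
```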

  1. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
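
    The role of the ratio between the dimension and the product of sample size and spike size can be seen in a small simulation. The sketch below assumes a single-spike covariance model, Sigma = I + spike * e1 e1^T, with hypothetical values of d, n and spike; it is an illustration of the phenomenon, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def eigvec_angle(d, n, spike):
    """Angle (degrees) between the leading sample eigenvector and the
    population spike direction e1, under Sigma = I + spike * e1 e1^T."""
    X = rng.standard_normal((n, d))
    X[:, 0] *= np.sqrt(1.0 + spike)   # inflate variance along e1
    S = X.T @ X / n                   # sample covariance (known zero mean)
    w, V = np.linalg.eigh(S)
    v1 = V[:, -1]                     # leading sample eigenvector
    cos = min(abs(v1[0]), 1.0)        # |<v1, e1>|
    return np.degrees(np.arccos(cos))

# The ratio d / (n * spike) drives consistency:
# a small ratio gives a small angle; a ratio near 1 gives a wide cone.
for d, n, spike in [(100, 200, 50.0), (2000, 50, 50.0), (2000, 50, 2000.0)]:
    print(d, n, spike, round(eigvec_angle(d, n, spike), 1))
```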

  2. Range size heritability and diversification patterns in the liverwort genus Radula.

    Science.gov (United States)

    Patiño, Jairo; Wang, Jian; Renner, Matt A M; Gradstein, S Robbert; Laenen, Benjamin; Devos, Nicolas; Shaw, A Jonathan; Vanderpoorten, Alain

    2017-01-01

    Why some species exhibit larger geographical ranges than others, and to what extent variation in range size affects diversification rates, remain fundamental but largely unanswered questions in ecology and evolution. Here, we implement phylogenetic comparative analyses and ancestral area estimations in Radula, a liverwort genus of Cretaceous origin, to investigate the mechanisms that explain differences in geographical range size and diversification rates among lineages. Range size was phylogenetically constrained in the two sub-genera characterized by their almost complete Australasian and Neotropical endemicity, respectively. The congruence between the divergence time of these lineages and continental split suggests that plate tectonics could have played a major role in their present distribution, suggesting that a strong imprint of vicariance can still be found in extant distribution patterns in these highly mobile organisms. Amentuloradula, Volutoradula and Metaradula species did not appear to exhibit losses of dispersal capacities in terms of dispersal life-history traits, but evidence for significant phylogenetic signal in macroecological niche traits suggests that niche conservatism accounts for their restricted geographic ranges. Despite their greatly restricted distributions in Australasia and the Neotropics, respectively, Amentuloradula and Volutoradula did not exhibit significantly lower diversification rates than more widespread lineages, in contrast with the hypothesis that the probability of speciation increases with range size by promoting geographic isolation and increasing the rate at which novel habitats are encountered. We suggest that stochastic long-distance dispersal events may balance allele frequencies across large spatial scales, leading to low genetic structure among geographically distant areas or even continents, ultimately decreasing the diversification rates in highly mobile, widespread lineages. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  4. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  5. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. in the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from left-overs of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  6. Abundance-range size relationships in stream vegetation in Denmark

    DEFF Research Database (Denmark)

    Riis, Tenna; Sand-Jensen, Kaj

    2002-01-01

    the cultivated lowlands of Denmark, we examined the overall relationship between local abundance and geographical range size of the vascular flora. We found a significant positive relationship for all species at all stream localities and an even stronger relationship for ecologically similar species...

  7. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
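
    For intuition, the promising-zone criterion is usually expressed through the conditional power at the interim. A minimal sketch of conditional power under the current-trend assumption for normally distributed observations follows; this is standard B-value algebra, not necessarily the exact criterion derived in the paper, and the alpha level and interim values are hypothetical.

```python
from scipy.stats import norm

def conditional_power(z_interim, t, alpha=0.025):
    """Conditional power at information fraction t, given interim
    Z-statistic z_interim, assuming the current trend continues
    (drift estimated as theta_hat = z_interim / sqrt(t)).

    B-value argument: B(1) = B(t) + increment with mean
    theta_hat*(1 - t) and variance (1 - t), where B(t) = z_interim*sqrt(t).
    """
    b_t = z_interim * t ** 0.5
    theta_hat = z_interim / t ** 0.5
    z_alpha = norm.ppf(1 - alpha)
    return norm.sf((z_alpha - b_t - theta_hat * (1 - t)) / (1 - t) ** 0.5)

# Interim look at half the information with a mildly promising result:
print(round(conditional_power(z_interim=1.5, t=0.5), 3))  # ~0.59 > 0.5
```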

  8. Improving small RNA-seq by using a synthetic spike-in set for size-range quality control together with a set for data normalization.

    Science.gov (United States)

    Locati, Mauro D; Terpstra, Inez; de Leeuw, Wim C; Kuzak, Mateusz; Rauwerda, Han; Ensink, Wim A; van Leeuwen, Selina; Nehrdich, Ulrike; Spaink, Herman P; Jonker, Martijs J; Breit, Timo M; Dekker, Rob J

    2015-08-18

    There is an increasing interest in complementing RNA-seq experiments with small-RNA (sRNA) expression data to obtain a comprehensive view of a transcriptome. Currently, two main experimental challenges concerning sRNA-seq exist: how to check the size distribution of isolated sRNAs, given the sensitive size-selection steps in the protocol; and how to normalize data between samples, given the low complexity of sRNA types. We here present two separate sets of synthetic RNA spike-ins for monitoring size-selection and for performing data normalization in sRNA-seq. The size-range quality control (SRQC) spike-in set, consisting of 11 oligoribonucleotides (10-70 nucleotides), was tested by intentionally altering the size-selection protocol and verified via several comparative experiments. We demonstrate that the SRQC set is useful to reproducibly track down biases in the size-selection in sRNA-seq. The external reference for data-normalization (ERDN) spike-in set, consisting of 19 oligoribonucleotides, was developed for sample-to-sample normalization in differential-expression analysis of sRNA-seq data. Testing and applying the ERDN set showed that it can reproducibly detect differential expression over a dynamic range of 2^18. Hence, biological variation in sRNA composition and content between samples is preserved while technical variation is effectively minimized. Together, both spike-in sets can significantly improve the technical reproducibility of sRNA-seq. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies that used qualitative approaches, with qualitative interviews as the method of data collection, was taken from theses.com and content-analysed for sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  10. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  11. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    International Nuclear Information System (INIS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-01-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers

  12. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  13. Habitat area and climate stability determine geographical variation in plant species range sizes

    DEFF Research Database (Denmark)

    Morueta-Holme, Naia; Enquist, Brian J.; McGill, Brian J.

    2013-01-01

    Despite being a fundamental aspect of biodiversity, little is known about what controls species range sizes. This is especially the case for hyperdiverse organisms such as plants. We use the largest botanical data set assembled to date to quantify geographical variation in range size for ~85,000 ...

  14. Effects of GPS sampling intensity on home range analyses

    Science.gov (United States)

    Jeffrey J. Kolodzinski; Lawrence V. Tannenbaum; David A. Osborn; Mark C. Conner; W. Mark Ford; Karl V. Miller

    2010-01-01

    The two most common methods for determining home ranges, minimum convex polygon (MCP) and kernel analyses, can be affected by sampling intensity. Despite prior research, it remains unclear how high-intensity sampling regimes affect home range estimations. We used datasets from 14 GPS-collared, white-tailed deer (Odocoileus virginianus) to describe...

  15. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    International Nuclear Information System (INIS)

    Haugboel, Steven; Pinborg, Lars H.; Arfan, Haroon M.; Froekjaer, Vibe M.; Svarer, Claus; Knudsen, Gitte M.; Madsen, Jacob; Dyrby, Tim B.

    2007-01-01

    To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. In cortical brain regions with a high density of 5-HT2A receptors, the outcome parameter (binding potential, BP1) showed high reproducibility, with a median difference between the two group measurements of 6% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high receptor density is approximately 27, whereas for low receptor binding regions the required sample size is substantially higher. This study demonstrates that [18F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT2A receptor density. Moreover, partial volume correction considerably reduces the sample size required to detect regional changes between groups. (orig.)
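
    Group sizes of this kind come from standard normal-approximation arithmetic relating the detectable difference to the between-subject variability. A hedged sketch of that generic two-sample calculation is below; the SD value is purely illustrative, chosen only to show how a figure near 27 can arise, and is not taken from the study.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Two-sample comparison of means, normal approximation:
    n per group = 2 * ((z_{1-a/2} + z_{1-b}) * sd / delta) ** 2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

# Illustrative only: a 20% group difference with a between-subject SD
# of 26% of the mean (both expressed as fractions of the mean).
print(n_per_group(delta=0.20, sd=0.26))  # -> 27
```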

  16. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs could be accepted by regulatory authorities across regions and countries as the primary sources of evidence to support global marketing drug approval simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many challenges, both operational and scientific, in conducting drug development globally. One of many important questions to answer in the design of a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  18. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
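
    For readers unfamiliar with the mechanics, a toy rejection-ABC loop is sketched below. It is emphatically not PopSizeABC: the simulator, prior, summary statistics and tolerance are all stand-ins chosen for brevity, but the structure (prior draw, simulation, summary distance, acceptance) is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_summaries(theta, n=100):
    """Toy stand-in for a population-genetic simulator: draws data
    under parameter theta and returns summary statistics."""
    data = rng.poisson(theta, size=n)
    return np.array([data.mean(), data.var()])

# "Observed" summaries from a hypothetical true theta = 4.
s_obs = simulate_summaries(4.0)

# Rejection ABC: sample from the prior, keep parameter values whose
# simulated summaries fall within eps of the observed ones.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.1, 20.0)            # prior draw
    s_sim = simulate_summaries(theta)
    if np.linalg.norm(s_sim - s_obs) < 1.0:   # tolerance eps
        accepted.append(theta)

print(len(accepted), np.mean(accepted))       # posterior sample for theta
```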

  19. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    US Army Research Laboratory report ARL-RP-0528, August 2015: Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride. (Only cover-page and documentation-page fragments are recoverable for this record.)

  20. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
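
    Where closed-form inputs such as R² are unavailable, a simulation check of a candidate sample size is a common fallback. The sketch below estimates power for simple logistic regression by simulation; it is a generic illustration, not the authors' logit-normal method, and all parameter values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def power_simple_logistic(n, beta0=-1.0, beta1=0.5, reps=500, alpha=0.05):
    """Estimate power to detect beta1 in y ~ logit(beta0 + beta1*x)
    by repeated simulation with a standard-normal covariate x."""
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        hits += fit.pvalues[1] < alpha   # Wald test on the slope
    return hits / reps

# Try a candidate n; increase n until the estimated power is adequate.
print(power_simple_logistic(200))
```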

  1. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities, such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
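
    One ingredient of such an interim analysis is estimating the proportion of true null hypotheses from the stage-one p-values. A minimal sketch of a Storey-type estimator is shown below as an illustration of the idea; the tuning parameter lambda and the simulated mixture are hypothetical, and the paper's actual estimator may differ.

```python
import numpy as np

def estimate_pi0(pvalues, lam=0.5):
    """Storey-type estimate of the proportion of true nulls: null
    p-values are ~uniform, so the density above lam is ~pi0."""
    pvalues = np.asarray(pvalues)
    return min(1.0, np.mean(pvalues > lam) / (1.0 - lam))

# Hypothetical interim p-values: 80% null (uniform), 20% non-null (skewed).
rng = np.random.default_rng(3)
p = np.concatenate([rng.uniform(size=8000), rng.beta(0.5, 10.0, size=2000)])
print(round(estimate_pi0(p), 3))   # close to 0.8, plus slight leakage
```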

  2. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Lead particle size and its association with firing conditions and range maintenance: implications for treatment.

    Science.gov (United States)

    Dermatas, Dimitris; Chrysochoou, Maria

    2007-08-01

    Six firing range soils were analyzed, representing different environments, firing conditions, and maintenance practices. The particle size distribution and lead (Pb) concentration in each soil fraction were determined for samples obtained from the backstop berms. The main factors that were found to influence Pb fragment size were the type of soil used to construct the berms and the type of weapon fired. The firing of high velocity weapons, i.e., rifles, onto highly angular soils induced significant fragmentation of the bullets and/or pulverization of the soil itself. This resulted in the accumulation of Pb in the finer soil fractions and the spread of Pb contamination beyond the vicinity of the backstop berm. Conversely, the use of clay as backstop and the use of low velocity pistols proved to be favorable for soil clean-up and range maintenance, since Pb was mainly present as large metallic fragments that can be recovered by a simple screening process. Other factors that played important roles in Pb particle size distribution were soil chemistry, firing distance, and maintenance practices, such as the use of water spray for dust suppression and deflectors prior to impact. Overall, coarse Pb particles provide much easier and more cost-effective maintenance, soil clean-up, and remediation via physical separation. Fine Pb particles release Pb more easily, pose an airborne Pb hazard, and require the application of stabilization/solidification treatment methods. Thus, to ensure sustainable firing range operations by means of cost-effective design, maintenance, and clean-up, especially when high velocity weapons are used, the above mentioned factors should be carefully considered.

  4. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
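
    The link between kappa, marginal prevalence and the simple proportion of agreement is easy to make explicit for two raters and a binary outcome. The sketch below converts a kappa and prevalence into an agreement proportion and then applies a precision-based sample size for a proportion; it illustrates the idea only, not the paper's formula (which rests on the common correlation model and a goodness-of-fit statistic), and all inputs are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def agreement_from_kappa(kappa, prev):
    """Two raters, binary outcome: chance agreement Pe = p^2 + (1-p)^2,
    observed agreement Po = Pe + kappa * (1 - Pe)."""
    pe = prev ** 2 + (1 - prev) ** 2
    return pe + kappa * (1 - pe)

def n_for_agreement(po, width, conf=0.95):
    """Subjects needed to estimate an agreement proportion po
    to within +/- width (normal approximation)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * po * (1 - po) / width ** 2)

po = agreement_from_kappa(kappa=0.6, prev=0.3)   # hypothetical values
print(round(po, 3), n_for_agreement(po, width=0.05))  # -> 0.832, 215
```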

  5. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  6. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and overall health. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, the sizes of the feet and footwear were measured, and the scores were compared between the group wearing the correct shoe size and the group wearing an incorrect shoe size, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  7. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. When uncertainty in assessments is considered, the lowest error rates are with dichotomization; while using the full range of mRS is conceptually appealing, the gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.

  8. Linking home ranges to protected area size: The case study of the Mediterranean Sea

    DEFF Research Database (Denmark)

    Di Franco, Antonio; Plass-Johnson, Jeremiah Grahm; Di Lorenzo, Manfredi

    2018-01-01

    in the Mediterranean Sea, and related this to the size of 184 Mediterranean fully protected areas. We also investigated the influence of fully protected area size on fish density, in contrast to fished areas, with respect to home ranges. Home range estimations were available for 11 species (10 fishes and 1 lobster...). The European spiny lobster Palinurus elephas had the smallest home range (0.0039 ± 0.0014 km²; mean ± 1 SE), while the painted comber Serranus scriba (1.1075 ± 0.2040 km²) had the largest. Approximately 25% of Mediterranean fully protected areas are larger than 2 times the size of the largest home range...

  9. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We
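
    The error calculation described above amounts to pushing an outcome distribution through a rater confusion (noise) matrix and reading off the misclassification probability for the full scale versus a dichotomized cut-point. A minimal sketch follows; the four-category distribution and confusion matrix are hypothetical stand-ins, not the published mRS data.

```python
import numpy as np

# Hypothetical distribution over a 4-category ordinal outcome
# (stand-in for an mRS distribution in one trial arm).
p_true = np.array([0.25, 0.35, 0.25, 0.15])

# Hypothetical confusion matrix: row i = true category, column j =
# probability a rater records category j (stand-in for published
# inter-rater variability; rows sum to 1).
M = np.array([[0.85, 0.15, 0.00, 0.00],
              [0.10, 0.80, 0.10, 0.00],
              [0.00, 0.15, 0.75, 0.10],
              [0.00, 0.00, 0.20, 0.80]])

# Probability the recorded category differs from the true one ("Shift").
err_full = 1.0 - np.sum(p_true * np.diag(M))

# Dichotomization between categories 1 and 2: an error occurs only
# when noise pushes an observation across the cut-point.
cut = 2
p_cross = M[:cut, cut:].sum(axis=1)      # true below cut, recorded above
q_cross = M[cut:, :cut].sum(axis=1)      # true above cut, recorded below
err_dich = np.sum(p_true[:cut] * p_cross) + np.sum(p_true[cut:] * q_cross)

print(round(err_full, 3), round(err_dich, 3))  # dichotomized error is smaller
```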

  10. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. The performance of the exact method is compared to its approximate large-sample-theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
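
    A standard device for exact two-group Poisson comparisons is conditioning on the total event count, under which the split of events follows a binomial law. The sketch below uses that conditional test in a power simulation; it is a generic illustration, not the paper's hybrid historical-control formula, and the rates and group sizes are hypothetical.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(4)

def power_two_group_poisson(n1, n2, rate1, rate0, reps=2000, alpha=0.05):
    """Simulated power of the exact conditional test for comparing two
    Poisson event rates: given total events s, X1 ~ Binomial(s, p0)
    under H0 of equal rates, with p0 = n1 / (n1 + n2)."""
    p0 = n1 / (n1 + n2)
    hits = 0
    for _ in range(reps):
        x1 = rng.poisson(n1 * rate1)   # events in the treatment group
        x0 = rng.poisson(n2 * rate0)   # events in the comparison group
        s = x1 + x0
        if s == 0:
            continue                   # no events: cannot reject
        if binomtest(int(x1), int(s), p0).pvalue < alpha:
            hits += 1
    return hits / reps

# Hypothetical rare-event scenario: background rate 1 per 1,000
# person-years, three-fold increase on treatment.
print(power_two_group_poisson(n1=5000, n2=5000, rate1=0.003, rate0=0.001))
```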

  11. Space use of wintering waterbirds in India: Influence of trophic ecology on home-range size

    Science.gov (United States)

    Namgail, Tsewang; Takekawa, John Y.; Balachandran, Sivananinthaperumal; Sathiyaselvam, Ponnusamy; Mundkur, Taej; Newman, Scott H.

    2014-01-01

    The relationship between species' home ranges and their other biological traits remains poorly understood, especially in migratory birds, due to the difficulty associated with tracking them. Advances in satellite telemetry and remote sensing techniques have proved instrumental in overcoming such challenges. We studied the space use of migratory ducks through satellite telemetry with the objective of understanding the influence of body mass and feeding habits on their home-range sizes. We marked 26 individuals, representing five species of migratory ducks, with satellite transmitters during two consecutive winters in three Indian states. We used kernel methods to estimate home ranges and core use areas of these waterfowl, and assessed the influence of body mass and feeding habits on home-range size. Feeding habits influenced the home-range size of the migratory ducks. Carnivorous ducks had the largest home ranges, herbivorous ducks the smallest, while omnivorous species had intermediate home ranges. Body mass did not explain variation in home-range size. To our knowledge, this is the first study of its kind on migratory ducks, and it has important implications for their conservation and management.

  12. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  13. Non-uniform sampling and wide range angular spectrum method

    International Nuclear Information System (INIS)

    Kim, Yong-Hae; Byun, Chun-Won; Oh, Himchan; Lee, JaeWon; Pi, Jae-Eun; Heon Kim, Gi; Lee, Myung-Lae; Ryu, Hojun; Chu, Hye-Yong; Hwang, Chi-Sun

    2014-01-01

    A novel method is proposed for simulating free-space field propagation from a source plane to a destination plane that is applicable to both small and large propagation distances. The angular spectrum method (ASM) was widely used for simulating near-field propagation, but it caused a numerical error when the propagation distance was large because of aliasing due to undersampling. The band-limited ASM satisfied the Nyquist condition on sampling by limiting the bandwidth of the propagation field to avoid aliasing errors, so that it could extend the applicable propagation distance of the ASM. However, the band-limited ASM also introduced an error, due to the decrease of the effective sampling number in Fourier space, when the propagation distance was large. In the proposed wide-range ASM, we use non-uniform sampling in Fourier space to keep the effective sampling number constant even when the propagation distance is large. As a result, the wide-range ASM can produce simulation results with high accuracy for both far- and near-field propagation. For non-paraxial wave propagation, we applied the wide-range ASM to a shifted destination plane as well. (paper)
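
    As context for the refinement, the baseline uniform-sampling ASM is easy to state: FFT the source field, multiply by the free-space transfer function H = exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2)), and inverse-FFT. The sketch below implements that baseline with evanescent components suppressed; it will alias at large propagation distances, which is precisely the problem that band-limiting and the proposed non-uniform Fourier sampling address. Grid parameters are hypothetical.

```python
import numpy as np

def asm_propagate(u0, wavelength, dx, z):
    """Baseline angular spectrum method on a uniform grid: FFT the
    source field, apply the free-space transfer function, inverse FFT.
    Evanescent components (negative argument under the root) are zeroed."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Hypothetical setup: 512x512 grid, 10 um pixels, 633 nm light, 5 mm travel.
n = 512
u0 = np.zeros((n, n), dtype=complex)
u0[n//2 - 20:n//2 + 20, n//2 - 20:n//2 + 20] = 1.0   # square aperture
u_z = asm_propagate(u0, wavelength=633e-9, dx=10e-6, z=5e-3)
print(np.abs(u_z).max())
```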

  14. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  15. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  16. The effect of short-range spatial variability on soil sampling uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Perk, Marcel van der [Department of Physical Geography, Utrecht University, P.O. Box 80115, 3508 TC Utrecht (Netherlands)], E-mail: m.vanderperk@geo.uu.nl; De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Laboratori, Misure ed Attivita di Campo, Via di Castel Romano, 100-00128 Roma (Italy); Fajgelj, Ales; Sansone, Umberto [International Atomic Energy Agency (IAEA), Agency' s Laboratories Seibersdorf, A-1400 Vienna (Austria); Jeran, Zvonka; Jacimovic, Radojko [Jozef Stefan Institute, Jamova 39, 1000 Ljubljana (Slovenia)

    2008-11-15

    This paper aims to quantify the soil sampling uncertainty arising from the short-range spatial variability of elemental concentrations in the topsoils of agricultural, semi-natural, and contaminated environments. For the agricultural site, the relative standard sampling uncertainty ranges between 1% and 5.5%. For the semi-natural area, the sampling uncertainties are 2-4 times larger than in the agricultural area. The contaminated site exhibited significant short-range spatial variability in elemental composition, which resulted in sampling uncertainties of 20-30%.

  17. The effect of short-range spatial variability on soil sampling uncertainty.

    Science.gov (United States)

    Van der Perk, Marcel; de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Fajgelj, Ales; Sansone, Umberto; Jeran, Zvonka; Jaćimović, Radojko

    2008-11-01

    This paper aims to quantify the soil sampling uncertainty arising from the short-range spatial variability of elemental concentrations in the topsoils of agricultural, semi-natural, and contaminated environments. For the agricultural site, the relative standard sampling uncertainty ranges between 1% and 5.5%. For the semi-natural area, the sampling uncertainties are 2-4 times larger than in the agricultural area. The contaminated site exhibited significant short-range spatial variability in elemental composition, which resulted in sampling uncertainties of 20-30%.

  18. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    International Nuclear Information System (INIS)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min

    2015-01-01

    In this paper, we analyzed the effect of the number of sampling images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. As the results show, 2D image quality did not depend much on the number of sampling images, but rather on how well efficient RGI images were extracted. The number of RGI images was, however, important for making a range image, because range-image quality was proportional to the number of RGI images. Image acquisition in a monitoring area of the nuclear industry is an important function for safety inspection and for preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems should have extra functions, such as active illumination lighting through airborne disturbance particles. One of these powerful active vision systems is a range-gated imaging system. A vision system based on range-gated imaging can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high-intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments

  19. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, we analyzed the effect of the number of sampling images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. As the results show, 2D image quality did not depend much on the number of sampling images, but rather on how well efficient RGI images were extracted. The number of RGI images was, however, important for making a range image, because range-image quality was proportional to the number of RGI images. Image acquisition in a monitoring area of the nuclear industry is an important function for safety inspection and for preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems should have extra functions, such as active illumination lighting through airborne disturbance particles. One of these powerful active vision systems is a range-gated imaging system. A vision system based on range-gated imaging can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high-intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.

  20. Effect of the grain size of the soil on the measured activity and variation in activity in surface and subsurface soil samples

    International Nuclear Information System (INIS)

    Sulaiti, H.A.; Rega, P.H.; Bradley, D.; Dahan, N.A.; Mugren, K.A.; Dosari, M.A.

    2014-01-01

    Correlation between grain size and activity concentrations of soils, and concentrations of various radionuclides in surface and subsurface soils, has been measured for samples taken in the State of Qatar by gamma spectroscopy using a high-purity germanium detector. From the obtained gamma-ray spectra, the activity concentrations of the 238U (226Ra) and 232Th (228Ac) natural decay series, the long-lived naturally occurring radionuclide 40K and the fission product radionuclide 137Cs have been determined. Gamma dose rate, radium equivalent, radiation hazard index and annual effective dose rates have also been estimated from these data. In order to observe the effect of grain size on the radioactivity of soil, three grain sizes were used, i.e., smaller than 0.5 mm; between 0.5 and 1 mm; and between 1 and 2 mm. The weighted activity concentrations of the 238U series nuclides in the 0.5-2 mm grain sizes were found to vary from 2.5±0.2 to 28.5±0.5 Bq/kg, whereas the weighted activity concentration of 40K varied from 21±4 to 188±10 Bq/kg. The weighted activity concentrations of the 238U series and 40K have been found to be higher in the finest grain size. However, for the 232Th series, the activity concentrations in the 1-2 mm grain size of one sample were found to be higher than in the 0.5-1 mm grain size. In the study of surface and subsurface soil samples, the activity concentration levels of the 238U series have been found to range from 15.9±0.3 to 24.1±0.9 Bq/kg in the surface soil samples (0-5 cm) and 14.5±0.3 to 23.6±0.5 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of the 232Th series have been found to lie in the range 5.7±0.2 to 13.7±0.5 Bq/kg in the surface soil samples (0-5 cm) and 4.1±0.2 to 15.6±0.3 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of 40K were in the range 150±8 to 290±17 Bq/kg, in the surface

  1. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  2. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method for determining sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness.

  3. Measurements of Plutonium and Americium in Soil Samples from Project 57 using the Suspended Soil Particle Sizing System (SSPSS)

    International Nuclear Information System (INIS)

    John L. Bowen; Rowena Gonzalez; David S. Shafer

    2001-01-01

    As part of the preliminary site characterization conducted for Project 57, soil samples were collected for separation into several size-fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, where the projected radioactivity ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it represents the anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculations and corrective action-level determinations for future land-use scenarios at the site.

  4. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units decreases, which must be compensated for by an increase in sample size to enhance the confidence in the estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
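
    The reported values follow from the success-run relation n = ln(1 − C)/ln(R) with zero allowed failures; a minimal sketch, assuming the 95% confidence level that reproduces the abstract's numbers:

    ```python
    # Success-run sample size: the number of consecutive passing units needed to
    # demonstrate reliability R at confidence C, assuming zero observed failures.
    import math

    def success_run_sample_size(reliability: float, confidence: float = 0.95) -> int:
        """n = ln(1 - C) / ln(R), rounded up."""
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    for risk, reliability in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
        print(f"{risk}-risk factor (R={reliability}): n = {success_run_sample_size(reliability)}")
    # -> 299, 59, 29, matching the sample sizes reported in the abstract.
    ```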

  5. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control so as to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  6. Lineages that cheat death: surviving the squeeze on range size.

    Science.gov (United States)

    Waldron, Anthony

    2010-08-01

    Evolutionary lineages differ greatly in their net diversification rates, implying differences in rates of extinction and speciation. Lineages with a large average range size are commonly thought to have reduced extinction risk (although linking low extinction to high diversification has proved elusive). However, climate change cycles can dramatically reduce the geographic range size of even widespread species, and so most species may be periodically reduced to a few populations in small, isolated remnants of their range. This implies a high and synchronous extinction risk for the remaining populations, and so for the species as a whole. Species will only survive through these periods if their individual populations are "threat tolerant," somehow able to persist in spite of the high extinction risk. Threat tolerance is conceptually different from classic extinction resistance, and could theoretically have a stronger relationship with diversification rates than classic resistance. I demonstrate that relationship using primates as a model. I also show that narrowly distributed species have higher threat tolerance than widespread ones, confirming that tolerance is an unusual form of resistance. Extinction resistance may therefore operate by different rules during periods of adverse global environmental change than in more benign periods.

  7. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm, respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  8. Negative range size-abundance relationships in Indo-Pacific bird communities

    DEFF Research Database (Denmark)

    Hart Reeve, Andrew; Borregaard, Michael Krabbe; Fjeldså, Jon

    2016-01-01

    and environmental stability create selection pressures that favor narrowly specialized species, which could drive these non-positive relationships. To test this idea, we measured the range size-abundance relationships of eleven bird communities in mature and degraded forest on four islands in the Indo...

  9. Estimated ventricle size using Evans index: reference values from a population-based sample.

    Science.gov (United States)

    Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C

    2017-03-01

    Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.

  10. Evolutionary patterns of range size, abundance and species richness in Amazonian angiosperm trees

    Directory of Open Access Journals (Sweden)

    Kyle Dexter

    2016-09-01

    Full Text Available Amazonian tree species vary enormously in their total abundance and range size, while Amazonian tree genera vary greatly in species richness. The drivers of this variation are not well understood. Here, we construct a phylogenetic hypothesis that represents half of Amazonian tree genera in order to contribute to explaining the variation. We find several clear, broad-scale patterns. Firstly, there is significant phylogenetic signal for all three characteristics; closely related genera tend to have similar numbers of species and similar mean range size and abundance. Additionally, the species richness of genera shows a significant, negative relationship with the mean range size and abundance of their constituent species. Our results suggest that phylogenetically correlated intrinsic factors, namely traits of the genera themselves, shape among lineage variation in range size, abundance and species richness. We postulate that tree stature may be one particularly relevant trait. However, other traits may also be relevant, and our study reinforces the need for ambitious compilations of trait data for Amazonian trees. In the meantime, our study shows how large-scale phylogenies can help to elucidate, and contribute to explaining, macroecological and macroevolutionary patterns in hyperdiverse, yet poorly understood regions like the Amazon Basin.

  11. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  12. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  13. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, with an initial sample size of 10 plants and subsequent sizes obtained by adding five plants at a time. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap resampling with replacement. The sample size for each correlation coefficient was determined as that at which the 95% confidence interval amplitude was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation, so a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant are estimated with less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
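
    A sketch of the bootstrap criterion described above: for a pilot dataset and a planned sample size, take the amplitude of the 95% percentile interval of Pearson's r over 3000 resamples and find the smallest size at which it drops to 0.4 or below. The data, correlation strength, and names are illustrative stand-ins for the field measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def ci_amplitude(x, y, n_plants, n_boot=3000):
        """Amplitude of the 95% bootstrap percentile CI of Pearson's r at size n_plants."""
        r = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, len(x), size=n_plants)  # resample with replacement
            r[b] = np.corrcoef(x[idx], y[idx])[0, 1]
        lo, hi = np.percentile(r, [2.5, 97.5])
        return hi - lo

    # Synthetic pilot data with a moderate correlation (stand-in for field data).
    x = rng.normal(size=500)
    y = 0.5 * x + rng.normal(size=500)

    for n in range(10, 311, 50):
        print(n, round(ci_amplitude(x, y, n), 3))
    # The required sample size is the smallest n whose amplitude is <= 0.4.
    ```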

  14. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to that of the observed data by means of a transfer function. Especially for precipitation, we expect the bias correction performance to depend strongly on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30-year calibration period and a 10-year validation period. In the first step, transfer functions are set up cell by cell for each RCM, using the complete 30-year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output, and the mean absolute errors with reference to the observational dataset are calculated. These values are treated as the "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods within the 30-year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, considering only subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the intensity of the sample size effect depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
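
    For orientation, a minimal empirical quantile-matching sketch for one grid cell (one simple variant; the study compares four, PTF, QUANT/eQM, gQM and GQM, which differ in how the transfer function is fitted). The gamma-distributed "precipitation" series are synthetic placeholders:

    ```python
    # Build a transfer function from modeled to observed quantiles over a
    # calibration period, then apply it to the validation period.
    import numpy as np

    rng = np.random.default_rng(0)
    obs_cal = rng.gamma(shape=2.0, scale=3.0, size=30 * 365)   # "observed", calibration
    mod_cal = rng.gamma(shape=2.0, scale=4.0, size=30 * 365)   # biased model, calibration
    mod_val = rng.gamma(shape=2.0, scale=4.0, size=10 * 365)   # biased model, validation

    quantiles = np.linspace(0.01, 0.99, 99)
    mod_q = np.quantile(mod_cal, quantiles)   # transfer-function x values
    obs_q = np.quantile(obs_cal, quantiles)   # transfer-function y values

    # Correct validation values by interpolating along the quantile transfer function.
    mod_val_corrected = np.interp(mod_val, mod_q, obs_q)

    print("raw bias:      ", mod_val.mean() - obs_cal.mean())
    print("corrected bias:", mod_val_corrected.mean() - obs_cal.mean())
    ```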

  15. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  16. Assessment of Sampling Error Associated with Collection and Analysis of Soil Samples at a Firing Range Contaminated with HMX

    National Research Council Canada - National Science Library

    Jenkins, Thomas F

    1997-01-01

    Short-range and mid-range (grid size) spatial heterogeneity in explosives concentrations within surface soils was studied at an active antitank firing range at the Canadian Force Base-Valcartier, Val-Belair, Quebec...

  17. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  18. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
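
    A small simulation in the spirit of the study, using the population SD of 44 from the abstract (all other settings are illustrative), showing that roughly half of pilot-sample SDs fall below the true SD, which is why trials sized from a single sample SD are so often underpowered:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    POP_SD, N_SIM = 44.0, 10_000

    for n_pilot in (5, 25, 50):
        sds = np.array([rng.normal(0, POP_SD, n_pilot).std(ddof=1) for _ in range(N_SIM)])
        print(f"pilot n={n_pilot:2d}: median SD={np.median(sds):5.1f}, "
              f"P(sample SD < population SD)={np.mean(sds < POP_SD):.2f}")
    # Because more than half of pilot SDs fall below the true SD, a trial sized
    # with a single sample SD has under a 50% chance of reaching its planned power.
    ```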

  19. Whitebark pine, population density, and home-range size of grizzly bears in the greater yellowstone ecosystem.

    Directory of Open Access Journals (Sweden)

    Daniel D Bjornlie

    Full Text Available Changes in life history traits of species can be an important indicator of potential factors influencing populations. For grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem (GYE), recent decline of whitebark pine (WBP; Pinus albicaulis), an important fall food resource, has been paired with a slowing of population growth following two decades of robust population increase. These observations have raised questions whether resource decline or density-dependent processes may be associated with changes in population growth. Distinguishing these effects based on changes in demographic rates can be difficult. However, unlike the parallel demographic responses expected from both decreasing food availability and increasing population density, we hypothesized opposing behavioral responses of grizzly bears with regard to changes in home-range size. We used the dynamic changes in food resources and population density of grizzly bears as a natural experiment to examine hypotheses regarding these potentially competing influences on grizzly bear home-range size. We found that home-range size did not increase during the period of whitebark pine decline and was not related to proportion of whitebark pine in home ranges. However, female home-range size was negatively associated with an index of population density. Our data indicate that home-range size of grizzly bears in the GYE is not associated with availability of WBP, and, for female grizzly bears, increasing population density may constrain home-range size.

  20. Whitebark pine, population density, and home-range size of grizzly bears in the greater Yellowstone ecosystem

    Science.gov (United States)

    Bjornlie, Daniel D.; van Manen, Frank T.; Ebinger, Michael R.; Haroldson, Mark A.; Thompson, Daniel J.; Costello, Cecily M.

    2014-01-01

    Changes in life history traits of species can be an important indicator of potential factors influencing populations. For grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem (GYE), recent decline of whitebark pine (WBP; Pinus albicaulis), an important fall food resource, has been paired with a slowing of population growth following two decades of robust population increase. These observations have raised questions whether resource decline or density-dependent processes may be associated with changes in population growth. Distinguishing these effects based on changes in demographic rates can be difficult. However, unlike the parallel demographic responses expected from both decreasing food availability and increasing population density, we hypothesized opposing behavioral responses of grizzly bears with regard to changes in home-range size. We used the dynamic changes in food resources and population density of grizzly bears as a natural experiment to examine hypotheses regarding these potentially competing influences on grizzly bear home-range size. We found that home-range size did not increase during the period of whitebark pine decline and was not related to proportion of whitebark pine in home ranges. However, female home-range size was negatively associated with an index of population density. Our data indicate that home-range size of grizzly bears in the GYE is not associated with availability of WBP, and, for female grizzly bears, increasing population density may constrain home-range size.

  2. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  3. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.

  4. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes, which in turn reduces the conservatism associated with the derived POD curve. Not much guidance on the correct sample size can be found in the published literature, where qualitative statements are often given with no further justification. The aim of this paper is to summarise the findings of such work. (author)

  5. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  6. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  7. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and public have paid more attention to post-marketing research on traditional Chinese medicine, a number of traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In the design of a post-marketing evaluation, sample size calculation plays a decisive role: it not only ensures the accuracy and reliability of the evaluation, but also assures that the intended trials will have the desired power to correctly detect a clinically meaningful difference between the medicines under study if such a difference truly exists. To date, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, building on the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for evaluating the efficacy and the safety of Chinese medicine are discussed in turn. We hope the paper will be of use to medical researchers and pharmaceutical scientists engaged in Chinese medicine research.

  8. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  9. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
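
    The Jensen's Inequality argument can be made concrete with a toy two-stage matrix in which survival is estimated from binomial samples; because lambda is a concave function of survival here, sampling noise biases the mean estimate downward, and the bias shrinks as n grows. All parameter values are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    TRUE_SURVIVAL, FECUNDITY = 0.5, 1.2

    def leading_eigenvalue(survival):
        # Toy two-stage matrix: juveniles recruit with the given survival rate.
        A = np.array([[0.0, FECUNDITY],
                      [survival, 0.8]])
        return max(abs(np.linalg.eigvals(A)))

    true_lambda = leading_eigenvalue(TRUE_SURVIVAL)
    for n in (10, 25, 50, 100, 500):
        # Estimate survival from n sampled individuals, rebuild the matrix.
        est = [leading_eigenvalue(rng.binomial(n, TRUE_SURVIVAL) / n)
               for _ in range(5000)]
        print(f"n={n:3d}  mean lambda={np.mean(est):.4f}  "
              f"bias={np.mean(est) - true_lambda:+.4f}")
    ```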

  10. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  11. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
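
    For intuition, a sketch of this kind of adjustment, assuming the published approximation RE ≈ 1 − CV²·λ(1 − λ) with λ = mρ/(mρ + 1 − ρ) for the relative efficiency of unequal versus equal cluster sizes; the parameter values are illustrative:

    ```python
    import math

    def clusters_needed(k_equal: int, mean_size: float, icc: float, cv: float) -> int:
        """Inflate the equal-cluster-size number of clusters for size variation."""
        lam = mean_size * icc / (mean_size * icc + 1 - icc)
        relative_efficiency = 1 - cv**2 * lam * (1 - lam)
        return math.ceil(k_equal / relative_efficiency)

    # Worst case (lambda = 0.5) with CV = 0.75 gives RE = 1 - 0.75**2 / 4 ~ 0.86,
    # i.e. roughly 14% more clusters, consistent with the abstract.
    print(clusters_needed(k_equal=30, mean_size=20, icc=0.05, cv=0.75))
    ```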

  12. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
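
    A sketch of the beta-binomial calculation that underlies such designs: the probability of observing at most d failures among n sampled individuals when the true failure prevalence is p and clustering inflates the variance through an intra-cluster correlation rho. The parametrization and all numbers are illustrative, not the paper's exact design:

    ```python
    from scipy.stats import betabinom

    def acceptance_probability(n, d, p, rho):
        """P(at most d failures in n draws), beta-binomial with ICC rho = 1/(a+b+1)."""
        a = p * (1 - rho) / rho
        b = (1 - p) * (1 - rho) / rho
        return betabinom.cdf(d, n, a, b)

    # Risk of (wrongly) accepting when the true failure prevalence is high (30%)
    # versus the chance of (correctly) accepting when it is low (10%):
    print(acceptance_probability(n=50, d=8, p=0.30, rho=0.05))
    print(acceptance_probability(n=50, d=8, p=0.10, rho=0.05))
    ```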

  13. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
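
    A schematic version of this resampling design on synthetic data (the cohort size and resample sizes follow the abstract; the true effect size and all variable names are invented):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    N = 360
    lesion_load = rng.uniform(0, 1, N)
    deficit = 0.25 * lesion_load + rng.normal(0, 1, N)   # small true effect

    for n in [30, 60, 90, 120, 180, 360]:
        r2, pvals = [], []
        for _ in range(2000):
            idx = rng.integers(0, N, size=n)             # bootstrap resample
            r, p = stats.pearsonr(lesion_load[idx], deficit[idx])
            r2.append(r ** 2)                            # variance explained
            pvals.append(p)
        print(f"n={n:3d}  mean R2={np.mean(r2):.3f}  "
              f"P(p<0.05)={np.mean(np.array(pvals) < 0.05):.2f}")
    ```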

  14. Home range size variation in female arctic grizzly bears relative to reproductive status and resource availability.

    Science.gov (United States)

    Edwards, Mark A; Derocher, Andrew E; Nagy, John A

    2013-01-01

    The area traversed in pursuit of resources defines the size of an animal's home range. For females, the home range is presumed to be a function of forage availability. However, the presence of offspring may also influence home range size due to reduced mobility, increased nutritional need, and behavioral adaptations of mothers to increase offspring survival. Here, we examine the relationship between resource use and variation in home range size for female barren-ground grizzly bears (Ursus arctos) of the Mackenzie Delta region in Arctic Canada. We develop methods to test hypotheses of home range size that address selection of cover where cover heterogeneity is low, using generalized linear mixed-effects models and an information-theoretic approach. We found that the reproductive status of female grizzlies affected home range size but individually-based spatial availability of highly selected cover in spring and early summer was a stronger correlate. If these preferred covers in spring and early summer, a period of low resource availability for grizzly bears following den-emergence, were patchy and highly dispersed, females travelled farther regardless of the presence or absence of offspring. Increased movement to preferred covers, however, may result in greater risk to the individual or family.

  15. Home range size variation in female arctic grizzly bears relative to reproductive status and resource availability.

    Directory of Open Access Journals (Sweden)

    Mark A Edwards

    Full Text Available The area traversed in pursuit of resources defines the size of an animal's home range. For females, the home range is presumed to be a function of forage availability. However, the presence of offspring may also influence home range size due to reduced mobility, increased nutritional need, and behavioral adaptations of mothers to increase offspring survival. Here, we examine the relationship between resource use and variation in home range size for female barren-ground grizzly bears (Ursus arctos) of the Mackenzie Delta region in Arctic Canada. We develop methods to test hypotheses of home range size that address selection of cover where cover heterogeneity is low, using generalized linear mixed-effects models and an information-theoretic approach. We found that the reproductive status of female grizzlies affected home range size but individually-based spatial availability of highly selected cover in spring and early summer was a stronger correlate. If these preferred covers in spring and early summer, a period of low resource availability for grizzly bears following den-emergence, were patchy and highly dispersed, females travelled farther regardless of the presence or absence of offspring. Increased movement to preferred covers, however, may result in greater risk to the individual or family.

  16. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity levels of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity is not dependent on the sample size, the rate of adverse events is. Further studies are needed to determine whether the optimal sample size should be adjusted based on hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  17. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, either with or without controlling for numerous variables that have previously been linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  18. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model, and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.

  19. Comparison of fluvial suspended-sediment concentrations and particle-size distributions measured with in-stream laser diffraction and in physical samples

    Science.gov (United States)

    Czuba, Jonathan A.; Straub, Timothy D.; Curran, Christopher A.; Landers, Mark N.; Domanski, Marian M.

    2015-01-01

    Laser-diffraction technology, recently adapted for in-stream measurement of fluvial suspended-sediment concentrations (SSCs) and particle-size distributions (PSDs), was tested with a streamlined (SL), isokinetic version of the Laser In-Situ Scattering and Transmissometry (LISST) for measuring volumetric SSCs and PSDs ranging from 1.8-415 µm in 32 log-spaced size classes. Measured SSCs and PSDs from the LISST-SL were compared to a suite of 22 datasets (262 samples in all) of concurrent suspended-sediment and streamflow measurements using a physical sampler and acoustic Doppler current profiler collected during 2010-12 at 16 U.S. Geological Survey streamflow-gaging stations in Illinois and Washington (basin areas: 38-69,264 km2). An unrealistically low computed effective density (mass SSC / volumetric SSC) of 1.24 g/ml (95% confidence interval: 1.05-1.45 g/ml) provided the best-fit value (R2 = 0.95; RMSE = 143 mg/L) for converting volumetric SSC to mass SSC for over 2 orders of magnitude of SSC (12-2,170 mg/L; covering a substantial range of SSC that can be measured by the LISST-SL) despite being substantially lower than the sediment particle density of 2.67 g/ml (range: 2.56-2.87 g/ml, 23 samples). The PSDs measured by the LISST-SL were in good agreement with those derived from physical samples over the LISST-SL's measurable size range. Technical and operational limitations of the LISST-SL are provided to facilitate the collection of more accurate data in the future. Additionally, the spatial and temporal variability of SSC and PSD measured by the LISST-SL is briefly described to motivate its potential for advancing our understanding of suspended-sediment transport by rivers.

  20. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use Polyurethane Plastic Foam (PUF) plugs whose porosity serves both as the collection substrate and as the particle size separator. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  1. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes" involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization.
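
    For the attributes level, a sketch using the standard safeguards-style formula n = N(1 − (1 − DP)^(1/M)), which gives the approximate sample size needed to hit at least one of M falsified items among N with detection probability DP; the example numbers are invented:

    ```python
    import math

    def attribute_sample_size(N: int, M: int, detection_prob: float) -> int:
        """Approximate sample size to find at least one of M defects in N items."""
        return math.ceil(N * (1.0 - (1.0 - detection_prob) ** (1.0 / M)))

    # Example: 500 items, diversion would require falsifying 20, 95% detection.
    print(attribute_sample_size(N=500, M=20, detection_prob=0.95))  # -> 70
    ```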

  2. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditure in the generation of weight matrices via multinomial sampling.
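
    The paper's implementations are in R; the following NumPy transcription sketches the same multinomial-weighting idea for Pearson's correlation (illustrative data, not the paper's code): one weight matrix and a handful of matrix products replace the resampling loop.

        import numpy as np

        rng = np.random.default_rng(0)
        n, B = 50, 10_000
        x = rng.normal(size=n)
        y = x + rng.normal(size=n)

        # B x n matrix of multinomial counts: row b holds the resampling weights
        # of the b-th bootstrap replication (each row sums to n); rescale to sum to 1.
        W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n

        # Weighted first and second sample moments, one matrix product per moment.
        mx, my = W @ x, W @ y
        mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)

        # Pearson correlation of every bootstrap replication, fully vectorized.
        r = (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))
        print(np.percentile(r, [2.5, 97.5]))  # percentile bootstrap CI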

  3. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots, ranging in number from 523 to 2143 per habitat, were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction when describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling

  4. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
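
    One way to make this randomness concrete (a sketch in the spirit of the abstract, not the authors' exact method; all inputs are illustrative) is to propagate the chi-square confidence interval for a pilot variance estimate into bounds on the computed power of a two-sample t-test:

        import numpy as np
        from scipy import stats

        def power_two_sample_t(delta, sigma, n, alpha=0.05):
            """Power of a two-sided two-sample t-test with n subjects per group."""
            df = 2 * n - 2
            nc = delta / (sigma * np.sqrt(2.0 / n))        # noncentrality parameter
            tcrit = stats.t.ppf(1 - alpha / 2, df)
            return stats.nct.sf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

        s2, df_pilot = 4.0, 30    # pilot variance estimate and its degrees of freedom
        delta, n = 1.5, 40        # fixed mean difference and planned group size

        # 95% CI for sigma^2 is (df*s2/chi2_upper, df*s2/chi2_lower); power is
        # monotone decreasing in sigma, so the variance bounds bracket the power.
        var_lo = df_pilot * s2 / stats.chi2.ppf(0.975, df_pilot)
        var_hi = df_pilot * s2 / stats.chi2.ppf(0.025, df_pilot)
        print(power_two_sample_t(delta, np.sqrt(s2), n))       # point estimate
        print(power_two_sample_t(delta, np.sqrt(var_hi), n),   # lower bound
              power_two_sample_t(delta, np.sqrt(var_lo), n))   # upper bound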

  5. Speciation in little: the role of range and body size in the diversification of Malagasy mantellid frogs

    Directory of Open Access Journals (Sweden)

    Vences Miguel

    2011-07-01

    Full Text Available Abstract Background The rate and mode of lineage diversification might be shaped by clade-specific traits. In Madagascar, many groups of organisms are characterized by tiny distribution ranges and small body sizes, and this high degree of microendemism and miniaturization parallels a high species diversity in some of these groups. We here investigate the geographic patterns characterizing the radiation of the frog family Mantellidae that is virtually endemic to Madagascar. We integrate a newly reconstructed near-complete species-level timetree of the Mantellidae with georeferenced distribution records and maximum male body size data to infer the influence of these life-history traits on each other and on mantellid diversification. Results We reconstructed a molecular phylogeny based on nuclear and mitochondrial DNA for 257 species and candidate species of the mantellid frog radiation. Based on this phylogeny we identified 53 well-supported pairs of sister species that we used for phylogenetic comparative analyses, along with whole tree-based phylogenetic comparative methods. Sister species within the Mantellidae diverged at 0.2-14.4 million years ago and more recently diverged sister species had geographical range centroids more proximate to each other, independently of their current sympatric or allopatric occurrence. The largest number of sister species pairs had non-overlapping ranges, but several examples of young microendemic sister species occurring in full sympatry suggest the possibility of non-allopatric speciation. Range sizes of species included in the sister species comparisons increased with evolutionary age, as did range size differences between sister species, which rejects peripatric speciation. For the majority of mantellid sister species and the whole mantellid radiation, range and body sizes were associated with each other and small body sizes were linked to higher mitochondrial nucleotide substitution rates and higher clade

  6. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, the non-inferiority test, equivalence test and superiority test) for qualitative data in designs with one two-level factor and a binary response variable. A non-inferiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. An equivalence test refers to a research design whose objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. A superiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. Using specific examples, this article introduces the sample size estimation formulas for the three special tests and their realization in SAS in detail.
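
    The article realizes its formulas in SAS; for flavor, here is a common textbook sample size formula for the non-inferiority case with two proportions, sketched in Python (rates, margin and error levels are illustrative, and the article's exact formulation may differ):

        from math import ceil
        from scipy.stats import norm

        def n_noninferiority(p_exp, p_ctrl, margin, alpha=0.025, power=0.80):
            """Per-group n to show p_exp is not worse than p_ctrl by more than margin
            (one-sided alpha), using the normal approximation for two proportions."""
            z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
            var = p_exp * (1 - p_exp) + p_ctrl * (1 - p_ctrl)
            return ceil((z_a + z_b) ** 2 * var / (p_exp - p_ctrl + margin) ** 2)

        # Example: both response rates 0.80, non-inferiority margin 0.10.
        print(n_noninferiority(0.80, 0.80, 0.10))  # ~252 per group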

  7. Assessment of helminth load in faecal samples of free range ...

    African Journals Online (AJOL)

    Helminth load in faecal samples of free-range indigenous chickens in Port Harcourt Metropolis was examined. Faecal samples were collected from 224 birds in 15 homesteads and 4 major markets - Mile 3, Mile 1, Borokiri and Eneka Village market - where poultry birds are gathered for sale. 0.2-0.5 g of faecal sample was ...

  8. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
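
    The curve-fitting step can be sketched as follows: compute richness estimates at a series of library sizes, fit a saturating curve, and read off its asymptote as the sample size-unbiased value. The Michaelis-Menten-type form and the data below are illustrative assumptions, not the study's:

        import numpy as np
        from scipy.optimize import curve_fit

        def saturating(n, s_max, k):
            # richness estimate approaches s_max as the library size n grows
            return s_max * n / (k + n)

        library_sizes = np.array([1000, 2000, 4000, 8000, 13000], dtype=float)
        richness_est = np.array([6200, 9100, 11900, 13900, 15000], dtype=float)

        (s_max, k), _ = curve_fit(saturating, library_sizes, richness_est, p0=(20000, 5000))
        print(f"sample size-unbiased richness estimate ~ {s_max:.0f}")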

  9. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, from a biometrical point of view, an optimal sample size should be sought for these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid or only becomes available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  10. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  11. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences with aspect and elevation. The question arises how capable such stability interpretations are of supporting these conclusions. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations that partly provide different results have been proposed in the past. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences with aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset constitutes an adequate sample size. (ii) Smaller subsets were created with similar

  12. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
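
    A hedged sketch of this kind of comparison with scikit-learn, pitting an epsilon-insensitive SVR against an MLP on a small synthetic "cored well" training set (features, coefficients and sizes are illustrative, not the study's data):

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(1)
        coef = np.array([2.0, -1.0, 0.5, 1.5])
        X = rng.uniform(size=(30, 4))                       # 30 cored samples, 4 log features
        y = X @ coef + 0.1 * rng.normal(size=30)            # "porosity" response
        X_new = rng.uniform(size=(200, 4))                  # wells with logs only
        y_new = X_new @ coef + 0.1 * rng.normal(size=200)

        svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.05, C=10.0))
        mlp = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(16,),
                                                           max_iter=5000, random_state=0))
        for name, model in [("SVR", svr), ("MLP", mlp)]:
            model.fit(X, y)
            print(name, mean_squared_error(y_new, model.predict(X_new)))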

  13. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  14. Mechanism of long-range penetration of low-energy ions in botanic samples

    International Nuclear Information System (INIS)

    Liu Feng; Wang Yugang; Xue Jianming; Wang Sixue; Du Guanghua; Yan Sha; Zhao Weijiang

    2002-01-01

    The authors present experimental evidence to reveal the mechanism of long-range penetration of low-energy ions in botanic samples. In the 100 keV Ar+ ion transmission measurement, the result confirmed that low-energy ions could penetrate kidney bean slices at least 60 μm thick with a probability of about 1.0 x 10-5. The energy spectrum of 1 MeV He+ ions penetrating botanic samples showed a peak in the count of ions with little energy loss. The probability of the low-energy ions penetrating the botanic sample is almost the same as that of the high-energy ions penetrating the same samples with little energy loss. The results indicate that there are some micro-regions with mass thickness less than the projectile range of low-energy ions in the botanic samples, and these result in the long-range penetration of low-energy ions in botanic samples.

  15. Approach for measuring the chemistry of individual particles in the size range critical for cloud formation.

    Science.gov (United States)

    Zauscher, Melanie D; Moore, Meagan J K; Lewis, Gregory S; Hering, Susanne V; Prather, Kimberly A

    2011-03-15

    Aerosol particles, especially those ranging from 50 to 200 nm, strongly impact climate by serving as nuclei upon which water condenses and cloud droplets form. However, the small number of analytical methods capable of measuring the composition of particles in this size range, particularly at the individual particle level, has limited our knowledge of cloud condensation nuclei (CCN) composition and hence our understanding of aerosols' effect on climate. To obtain more insight into particles in this size range, we developed a method which couples a growth tube (GT) to an ultrafine aerosol time-of-flight mass spectrometer (UF-ATOFMS), a combination that allows in situ measurements of the composition of individual particles as small as 38 nm. The growth tube uses water to grow particles to larger sizes so they can be optically detected by the UF-ATOFMS, extending the size range to below 100 nm with no discernible changes in particle composition. To gain further insight into the temporal variability of aerosol chemistry and sources, the GT-UF-ATOFMS was used for online continuous measurements over a period of 3 days.

  16. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
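
    A minimal sketch of the kind of non-parametric distributional comparison described above, using a two-sample Kolmogorov-Smirnov test on synthetic aspect-ratio data (the study's actual statistic and measurements may differ):

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(42)
        # Hypothetical aspect ratios (length/width) extracted from TEM images of
        # two nanorod samples: same scale (mean) but different width (spread).
        sample_a = rng.normal(loc=3.0, scale=0.40, size=500)
        sample_b = rng.normal(loc=3.0, scale=0.55, size=500)

        stat, p = ks_2samp(sample_a, sample_b)
        print(f"KS statistic = {stat:.3f}, p = {p:.2e}")  # small p: distributions differ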

  17. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.
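
    For flavor, the assurance concept (the probability of trial success averaged over a prior) can be sketched by simulation in a much simpler, uncensored one-sample setting; the prior, test and parameters below are illustrative and unrelated to the authors' censored cost-effectiveness models:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n, alpha, sims = 100, 0.05, 2000
        successes = 0
        for _ in range(sims):
            delta = rng.normal(0.3, 0.1)       # draw the true effect from its prior
            x = rng.normal(delta, 1.0, n)      # simulate one trial (sigma = 1)
            t, p = stats.ttest_1samp(x, 0.0)
            successes += (p < alpha) and (t > 0)
        print(f"assurance ~ {successes / sims:.2f}")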

  18. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample size allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore, game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for 1. sample approximate-allocation with the correctly applied standard binomial approximation, 2. sample approximate-allocation with the improved binomial approximation, and 3. sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
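
    The distinction the abstract draws can be illustrated with SciPy (a sketch, not the program described above): for sampling without replacement, the binomial approximation understates the probability of detecting at least one defective item, which in turn inflates the computed sample size.

        from scipy.stats import binom, hypergeom

        N, D, n = 100, 5, 20   # population, defective items, sample size (illustrative)

        p_hyper = 1 - hypergeom.pmf(0, N, D, n)   # P(>= 1 defect), without replacement
        p_binom = 1 - binom.pmf(0, n, D / N)      # binomial (with-replacement) approximation
        print(f"hypergeometric: {p_hyper:.4f}  binomial approx.: {p_binom:.4f}")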

  19. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H_0: ES = 0 versus alternative hypotheses H_1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^s. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.

  20. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform and, using the data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  1. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  2. Predictable variation of range-sizes across an extreme environmental gradient in a lizard adaptive radiation: evolutionary and ecological inferences.

    Directory of Open Access Journals (Sweden)

    Daniel Pincheira-Donoso

    Full Text Available Large-scale patterns of current species geographic range-size variation reflect historical dynamics of dispersal and provide insights into future consequences under changing environments. Evidence suggests that climate warming exerts major damage on high-latitude and high-elevation organisms, where changes are more severe and available space to disperse tracking historical niches is more limited. Species with longer generations (slower adaptive responses, such as vertebrates, and with restricted distributions (lower genetic diversity, higher inbreeding in these environments are expected to be particularly threatened by warming crises. However, a well-known macroecological generalization (Rapoport's rule predicts that species range-sizes increase with increasing latitude-elevation, thus counterbalancing the impact of climate change. Here, I investigate geographic range-size variation across an extreme environmental gradient, and as a function of body size, in the prominent Liolaemus lizard adaptive radiation. Conventional and phylogenetic analyses revealed that latitudinal (but not elevational) ranges significantly decrease with increasing latitude-elevation, while body size was unrelated to range-size. Evolutionarily, these results are insightful as they suggest a link between spatial environmental gradients and range-size evolution. However, ecologically, these results suggest that Liolaemus might be increasingly threatened if, as predicted by theory, ranges retract and contract continuously under persisting climate warming, potentially increasing extinction risks at high latitudes and elevations.

  3. Effects of corridors on home range sizes and interpatch movements of three small mammal species.

    Energy Technology Data Exchange (ETDEWEB)

    Mabry, Karen, E.; Barrett, Gary, W.

    2002-04-30

    Mabry, K.E., and G.W. Barrett. 2002. Effects of corridors on home range sizes and interpatch movements of three small mammal species. Landscape Ecol. 17:629-636. Corridors are predicted to benefit populations in patchy habitats by promoting movement, which should increase population densities, gene flow, and recolonization of extinct patch populations. However, few investigators have considered use of the total landscape, particularly the possibility of interpatch movement through matrix habitat, by small mammals. This study compares home range sizes of 3 species of small mammals - the cotton mouse, the oldfield mouse, and the cotton rat - between patches with and without corridors. Corridor presence did not have a statistically significant influence on average home range size. Habitat specialization and sex influenced the probability of an individual moving between 2 patches without corridors. The results of this study suggest that small mammals may be more capable of interpatch movement without corridors than is frequently assumed.

  4. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures, major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentration of the main elements in sample 75081 does not change with grain-size. Exceptions are Fe and Ti, which decrease slightly, and Al, which increases slightly, with decreasing grain-size. These changes in the composition of the main elements suggest a decrease in ilmenite and an increase in anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes by less than a factor of 2. Samples 72501 and 72461 have not yet been analyzed for the main elements. (Auth.)

  5. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
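
    For context, one widely used closed-form approximation in this setting (a Hsieh-type formula; the article's modified method differs in detail) sizes the study from the event rate at the covariate mean and the effect per standard deviation of the covariate. A sketch with illustrative inputs:

        from math import ceil, log
        from scipy.stats import norm

        def n_logistic(p1, odds_ratio_per_sd, alpha=0.05, power=0.80):
            """p1: event probability at the covariate mean; effect per 1 SD of X."""
            beta_star = log(odds_ratio_per_sd)          # log odds ratio per SD
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(z ** 2 / (p1 * (1 - p1) * beta_star ** 2))

        print(n_logistic(p1=0.2, odds_ratio_per_sd=1.5))  # ~299 subjects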

  6. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.

  8. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  9. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor keff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
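
    The propagation loop itself is generic; a sketch (with an arbitrary placeholder standing in for the MCNPX transport run, and an illustrative input uncertainty) looks like this:

        import numpy as np

        rng = np.random.default_rng(7)

        def model(radius):
            # placeholder for the transport calculation: a response to the input
            # plus the code's own statistical uncertainty (28 pcm, as in the abstract)
            return 1.0 + 0.02 * (radius - 0.5) + rng.normal(0.0, 28e-5)

        n_samples = 93                              # sample size quoted in the abstract
        radii = rng.normal(0.5, 0.05, n_samples)    # 1-sigma input uncertainty (illustrative)
        keff = np.array([model(r) for r in radii])
        print(f"propagated sigma(keff) ~ {keff.std(ddof=1) * 1e5:.0f} pcm")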

  10. Major- and trace elements in grain size fractions of the Apollo-17 core of the drilled sample 74001

    International Nuclear Information System (INIS)

    Kraehenbuehl, U.; Gunten, H.R. von; Jost, D.; Meyer, G.; Wegmueller, F.

    1980-01-01

    Two layers of a drill sample were examined, one from a depth of 38 cm and the other from a depth of 58 cm. Neutron activation analysis was used for one group of elements, and radiochemical analysis for another. Over a range of grain size from 36 to 450 μm, the trace elements U, Co, and La were found to be uniformly distributed, as was iron. The top layer consistently showed a 5-8% higher content. The volatile trace elements Ge and Cd were found to be enriched in the smaller grain sizes. This contradicts previous assumptions of an enrichment of the more volatile elements in top layers owing to more rapid cooling of volcanic eruptions. (R.S.)

  11. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
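
    The abstract mentions summarizing each sampling-efficiency curve with a three-parameter sigmoid. A sketch of such a fit (the functional form and the data points are illustrative assumptions, not the study's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(d, a, d50, b):
            # a: upper asymptote, d50: size at half-efficiency, b: steepness term
            return a / (1.0 + np.exp((d - d50) / b))

        diameters = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)  # um (MMAD)
        efficiency = np.array([0.97, 0.95, 0.88, 0.72, 0.50, 0.30, 0.15, 0.07, 0.03])

        (a, d50, b), _ = curve_fit(sigmoid, diameters, efficiency, p0=(1.0, 5.0, 1.0))
        print(f"asymptote = {a:.2f}, d50 = {d50:.2f} um, steepness = {b:.2f}")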

  12. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  13. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    Energy Technology Data Exchange (ETDEWEB)

    Plionis, Alexander A [Los Alamos National Laboratory; Peterson, Dominic S [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory; Lamont, Stephen P [Los Alamos National Laboratory

    2009-01-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  14. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  15. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch models for detecting differential item functioning (DIF), viewed from the sample size. These two DIF detection methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  16. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  17. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs that control the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for an anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, from the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision and a 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by simply multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard give results in dichotomous categories.
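
    The underlying calculation is standard (e.g., Buderer's formula); a sketch of what the nomogram encodes for sensitivity, with illustrative inputs (the number of diseased subjects is scaled up by the disease prevalence to a total sample size):

        from math import ceil
        from scipy.stats import norm

        def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
            z = norm.ppf(1 - (1 - conf) / 2)
            n_cases = z ** 2 * sens * (1 - sens) / precision ** 2   # diseased subjects
            return ceil(n_cases / prevalence)                       # total subjects

        print(n_for_sensitivity(sens=0.90, precision=0.05, prevalence=0.10))
        # n scales with z^2, which explains the quoted multipliers:
        # (1.645/1.960)**2 ~ 0.70 for 90% and (2.576/1.960)**2 ~ 1.73 for 99% confidence.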

  19. Evaluation of FASP, SP3, and iST Protocols for Proteomic Sample Preparation in the Low Microgram Range.

    Science.gov (United States)

    Sielaff, Malte; Kuharev, Jörg; Bohn, Toszka; Hahlbrock, Jennifer; Bopp, Tobias; Tenzer, Stefan; Distler, Ute

    2017-11-03

    Efficient and reproducible sample preparation is a prerequisite for any robust and sensitive quantitative bottom-up proteomics workflow. Here, we performed an independent comparison between single-pot solid-phase-enhanced sample preparation (SP3), filter-aided sample preparation (FASP), and a commercial kit based on the in-StageTip (iST) method. We assessed their performance for the processing of proteomic samples in the low μg range using varying amounts of HeLa cell lysate (1-20 μg of total protein). All three workflows showed similar performances for 20 μg of starting material. When handling sample sizes below 10 μg, the number of identified proteins and peptides as well as the quantitative reproducibility and precision drastically dropped in case of FASP. In contrast, SP3 and iST provided high proteome coverage even in the low μg range. Even when digesting 1 μg of starting material, both methods still enabled the identification of over 3000 proteins and between 25,000 and 30,000 peptides. On average, the quantitative reproducibility between experimental replicates was slightly higher in case of SP3 (R2 = 0.97 (SP3); R2 = 0.93 (iST)). Applying SP3 toward the characterization of the proteome of FACS-sorted tumor-associated macrophages in the B16 tumor model enabled the quantification of 2965 proteins and revealed a "mixed" M1/M2 phenotype.

  20. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  1. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  2. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) regime is carried out, and the precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size, and a new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization depends linearly on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples.

  3. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced into Brazil in 1988 and became the main pest of pine plantations. It has spread over about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control relies mainly on a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). Evaluating the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, and the system was found to be fully adequate.

  4. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    Following the decision of the 2001 Workshop on Utilization of Research Reactors (Neutron Activation Analysis (NAA) Section), collection of size-fractionated particulate matter for NAA was started in 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural", respectively. At each site, two size fractions, PM2-10 and PM2 (aerodynamic particle size between 2 and 10 micrometers, and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (the sum of the PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m³ in Tokyo and 0.022 mg/m³ in Sakata. (author)

  5. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1,051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for this method is important for health workforce planners to know if they want to apply it to target groups that are hard to reach or if fewer resources are available. In time sampling, however, a standard power analysis is not sufficient for calculating the required sample size, because it accounts only for sample fluctuation and not for the fluctuation of the measurements taken from each participant. We therefore investigated the impact of the number of participants and of the frequency of measurements per participant on the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data obtained from the GPs were performed, and 95% CIs were calculated, using equations and simulation techniques, for various numbers of GPs in the dataset and various measurement frequencies per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from 1 to 50; beyond that point, precision continued to improve, but the gain per additional GP became smaller. Likewise, the analyses showed how the number of participants required decreases when more measurements per participant are taken: for example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires only 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique thus depends on the design and aim of the study; in this paper, we showed how the precision of the estimates depends on both the number of participants and the measurement frequency.
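
    The trade-off the authors describe follows from a simple variance-components argument: the standard error of the mean combines between-GP variance and within-GP measurement variance. The sketch below illustrates the relationship; the variance values are illustrative assumptions, not estimates from the paper (56 measurements corresponds to one SMS per 3-h slot over a week).

    ```python
    import math

    def ci_halfwidth(n_gps, n_meas, var_between=100.0, var_within=400.0, z=1.645):
        """One-tailed 95% CI half-width for mean weekly hours when both
        sample fluctuation (between GPs) and measurement fluctuation
        (within each GP) contribute: SE^2 = s_b^2/n + s_w^2/(n*m).
        The variance components here are made-up illustrative values."""
        se = math.sqrt(var_between / n_gps + var_within / (n_gps * n_meas))
        return z * se

    for n in (1, 10, 50, 100, 300):
        print(n, round(ci_halfwidth(n, n_meas=56), 2))
    ```

    Increasing the measurement frequency shrinks only the second variance term, which is why more measurements per GP can substitute for recruiting more GPs, up to the floor set by the between-GP variance.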

  6. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments, and misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which treats the treatment effect as a random variable with some distribution, may offer a better, more flexible alternative. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, adjusting the sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
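
    To make the Bayesian logic concrete, here is a minimal sketch of the kind of posterior computation such a design builds on: a beta-binomial comparison of two arms at an interim look. It illustrates the general approach only, not the authors' two-stage reestimation procedure, and the interim counts are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def prob_treatment_better(x_t, n_t, x_c, n_c, n_draws=100_000):
        """Posterior probability that the treatment response rate exceeds
        the control rate, under independent Beta(1, 1) priors. In a
        two-stage design, a predictive quantity like this at the interim
        look drives the decision to shrink or keep the second-stage
        sample size."""
        p_t = rng.beta(1 + x_t, 1 + n_t - x_t, n_draws)
        p_c = rng.beta(1 + x_c, 1 + n_c - x_c, n_draws)
        return float((p_t > p_c).mean())

    # invented interim data: 14/20 responders on treatment, 9/20 on control
    print(prob_treatment_better(14, 20, 9, 20))
    ```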

  7. Threatened species richness along a Himalayan elevational gradient: quantifying the influences of human population density, range size, and geometric constraints.

    Science.gov (United States)

    Paudel, Prakash Kumar; Sipos, Jan; Brodie, Jedediah F

    2018-02-07

    A crucial step in conserving biodiversity is to identify the distributions of threatened species and the factors associated with species threat status. In the biodiversity hotspot of the Himalaya, very little is known about which locations harbour the highest diversity of threatened species and whether the diversity of such species is related to area, mid-domain effects (MDE), range size, or human density. In this study, we assessed the drivers of variation in the richness of threatened birds, mammals, reptiles, ray-finned fishes (Actinopterygii), and amphibians along an elevational gradient in the Nepal Himalaya. Although geometric constraints (MDE), species range size, and human population density were all significantly related to threatened species richness, the interaction between range size and human population density was of greater importance. Threatened species richness was positively associated with human population density and negatively associated with range size; in areas with high richness of threatened species, species ranges tend to be small. The preponderance of species at risk of extinction at low elevations in this subtropical biodiversity hotspot could be due to the double impact of smaller range sizes and higher human density.

  8. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging, through constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling, by replacing the built-in peristaltic pump with an external syringe pump that draws air from the receiving conical flask (i.e., acts as a vacuum pump) to generate negative pressure, creating a steady flow that transfers plankton from the sample container through the main flowcell of the imaging system and finally into the receiving flask; and (3) it aligns samples in advance of imaging and prevents clogging, with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties of applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, given that the FlowCAM can image only a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e., bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with a syringe pump and Field of View (FOV) flowcells that can image all particles passing through the flow field, these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis with conventional manual measurements and found that the size of an individual zooplankter can be estimated by the FlowCAM imaging system after ground truthing.

  9. Influence of primary prey on home-range size and habitat-use patterns of northern spotted owls (Strix occidentalis caurina)

    Science.gov (United States)

    Cynthia J. Zabel; Kevin S. McKelvey; James P. Ward

    1995-01-01

    Correlations between the home-range size of northern spotted owls (Strix occidentalis caurina) and proportion of their range in old-growth forest have been reported, but there are few data on the relationship between their home-range size and prey. The primary prey of spotted owls are wood rats and northern flying squirrels (Glaucomys sabrinus). Wood...

  10. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
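
    For the quantitative-outcome case, the standard machinery is the one-way ANOVA F-test power calculation; below is a minimal sketch using statsmodels as a stand-in for the paper's procedure. Cohen's f = 0.25 is an assumed medium effect size, not a value from the paper.

    ```python
    from statsmodels.stats.power import FTestAnovaPower

    # Total N for a one-factor design with k = 3 levels and a quantitative
    # outcome: solve the one-way ANOVA power equation for sample size.
    n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                            power=0.80, k_groups=3)
    print(round(n_total))   # total subjects across the 3 groups
    ```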

  11. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global topology.

  12. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique in an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size are compared between particles sampled from inside the flame by the local thermophoretic method and particles collected by the electrostatic method downstream of the dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements, and the TEM image analysis of the two sampling methods shows good agreement with the SMPS measurements. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism that counteracts coagulation; however, if the flame temperature is not high enough to sinter the aggregated particles, coagulation is the dominant growth mechanism. Under certain flame conditions, secondary particle formation is observed, which results in a bimodal particle size distribution.

  13. Intrapopulational body size variation and cranial capacity variation in Middle Pleistocene humans: the Sima de los Huesos sample (Sierra de Atapuerca, Spain).

    Science.gov (United States)

    Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I

    1998-05-01

    A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date at one single site, with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. The anatomical regions investigated are the scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shafts; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample, only the humeral midshaft perimeter shows unusually high variation (and only when expressed by the maximum ratio, not by the coefficient of variation). Despite this, the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range, and the maximum ratio lies in the central part of the distribution for modern human samples. Thus, the hypothesis of greater sexual dimorphism in Middle Pleistocene populations than in modern populations is supported by neither the cranial nor the postcranial evidence from Sima de los Huesos.

  14. Nanoparticle Analysis by Online Comprehensive Two-Dimensional Liquid Chromatography combining Hydrodynamic Chromatography and Size-Exclusion Chromatography with Intermediate Sample Transformation

    Science.gov (United States)

    2017-01-01

    Polymeric nanoparticles have become indispensable in modern society, with a wide array of applications ranging from waterborne coatings to drug-carrier-delivery systems. While a large range of techniques exists to determine a multitude of properties of these particles, relating the physicochemical properties of a particle to the chemical structure of its constituent polymers is still challenging. A novel, highly orthogonal separation system based on comprehensive two-dimensional liquid chromatography (LC × LC) has been developed. The system combines hydrodynamic chromatography (HDC) in the first dimension to separate the particles based on their size with ultrahigh-performance size-exclusion chromatography (SEC) in the second dimension to separate the constituent polymer molecules according to their hydrodynamic radius for each of 80 to 100 separated fractions. A chip-based mixer is incorporated to transform the sample by dissolving the separated nanoparticles from the first dimension online in tetrahydrofuran. The polymer bands are then focused using stationary-phase-assisted modulation to enhance sensitivity, and the water from the first-dimension eluent is largely eliminated to allow interaction-free SEC. Using the developed system, the combined two-dimensional distribution of particle size and molecular size of a mixture of various polystyrene (PS) and polyacrylate (PACR) nanoparticles was obtained within 60 min. PMID:28745485

  15. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Landuse/landcover maps produced by classification of remote sensing images incorporate uncertainty, which is measured by accuracy indices computed from reference samples. The size of the reference sample is often defined by approximation from a binomial function without the use of a pilot sample; in that case the accuracy is not estimated but fixed a priori, and if the a priori accuracy diverges from the estimated accuracy, the sampling error will deviate from the expected error. Sizing the reference sample from a pilot sample, the theoretically correct procedure, is justified when no accuracy estimate is available for the study area for the remote sensing product in question.

  16. Effective sampling range of a synthetic protein-based attractant for Ceratitis capitata (Diptera: Tephritidae).

    Science.gov (United States)

    Epsky, Nancy D; Espinoza, Hernán R; Kendra, Paul E; Abernathy, Robert; Midgarden, David; Heath, Robert R

    2010-10-01

    Studies were conducted in Honduras to determine the effective sampling range of a female-targeted protein-based synthetic attractant for the Mediterranean fruit fly, Ceratitis capitata (Wiedemann) (Diptera: Tephritidae). Multilure traps were baited with ammonium acetate, putrescine, and trimethylamine lures (three-component attractant) and sampled over eight consecutive weeks. The field design consisted of 38 traps (over 0.5 ha) placed in a combination of standard and high-density grids to facilitate geostatistical analysis, and tests were conducted in coffee (Coffea arabica L.), mango (Mangifera indica L.), and orthanique (Citrus sinensis × Citrus reticulata). The effective sampling range, as determined from the range parameter of experimental variograms fit with a spherical model, was approximately 30 m for flies captured in coffee or mango and approximately 40 m for flies captured in orthanique. For comparison, a release-recapture study was conducted in mango using wild (field-collected) mixed-sex C. capitata and an array of 20 baited traps spaced 10-50 m from the release point. Contour analysis was used to document the spatial distribution of fly recaptures and to estimate the effective sampling range, defined by the area that encompassed 90% of the recaptures. With this approach, the effective range of the three-component attractant was estimated to be approximately 28 m, similar to the result obtained from variogram analysis. Contour maps indicated that wind direction had a strong influence on sampling range, which was approximately 15 m greater upwind than downwind from the release point. Geostatistical analysis of field-captured insects in appropriately designed trapping grids may provide a supplement or alternative to release-recapture studies for estimating sampling ranges of semiochemical-based trapping systems.
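
    For readers unfamiliar with the geostatistics, the "range" parameter comes from fitting a variogram model to trap counts: beyond the range, counts are spatially uncorrelated, which the authors interpret as the effective sampling distance of a trap. A minimal sketch of the spherical model they fit follows; the parameter values are illustrative, not fitted values from the study.

    ```python
    import numpy as np

    def spherical_variogram(h, nugget, sill, rng_m):
        """Spherical variogram: rises from the nugget and levels off at
        the sill once lag distance h reaches the range `rng_m`, the
        distance beyond which trap counts are spatially uncorrelated."""
        h = np.asarray(h, dtype=float)
        gamma = nugget + (sill - nugget) * (1.5 * h / rng_m - 0.5 * (h / rng_m) ** 3)
        return np.where(h < rng_m, gamma, sill)

    # semivariance levels off near 30 m, the effective range found in coffee/mango
    print(spherical_variogram([5, 15, 30, 50], nugget=0.1, sill=1.0, rng_m=30.0))
    ```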

  17. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    Science.gov (United States)

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed-precision sampling plan was developed for off-host populations of adult Rocky Mountain wood ticks, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample comprised between 86 and 250 10-m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10-m² quadrat. Mean-variance relationships were fitted to the samples and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while those estimated using the Iwao model tended to overestimate them. A negative binomial model with common k provided the estimates of required sample size closest to the empirically calculated ones.
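
    All three candidate models give closed-form sample sizes once their parameters have been fitted. A sketch of the standard fixed-precision formulas follows, where D is the target ratio of standard error to mean; the parameter values in the example are illustrative, not those fitted in the study.

    ```python
    import math

    def n_taylor(m, a, b, D=0.25):
        """Taylor power law s^2 = a * m^b  ->  n = a * m**(b - 2) / D^2."""
        return math.ceil(a * m ** (b - 2) / D ** 2)

    def n_iwao(m, alpha, beta, D=0.25):
        """Iwao regression s^2 = (alpha + 1) * m + (beta - 1) * m^2
        ->  n = ((alpha + 1) / m + beta - 1) / D^2."""
        return math.ceil(((alpha + 1) / m + beta - 1) / D ** 2)

    def n_negbin(m, k, D=0.25):
        """Negative binomial with common k  ->  n = (1/m + 1/k) / D^2."""
        return math.ceil((1 / m + 1 / k) / D ** 2)

    # illustrative parameters for a mean of 0.5 ticks per 10-m^2 quadrat
    print(n_taylor(0.5, a=2.0, b=1.4), n_iwao(0.5, 1.5, 1.2), n_negbin(0.5, k=0.8))
    ```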

  18. What about N? A methodological study of sample-size reporting in focus group studies.

    Science.gov (United States)

    Carlsen, Benedicte; Glenton, Claire

    2011-03-11

    Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals and, secondly, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also made a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers, insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups: six referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, and 28 stated that they had reached a point of saturation. Among those claiming saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis form an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups, and too much data as a study weakness was not discussed in any of the reviewed papers. Based on these findings, we suggest that journals adopt more stringent requirements for the reporting of focus group methods.

  20. Hedgehogs on the move: Testing the effects of land use change on home range size and movement patterns of free-ranging Ethiopian hedgehogs.

    Science.gov (United States)

    Abu Baker, Mohammad A; Reeve, Nigel; Conkey, April A T; Macdonald, David W; Yamaguchi, Nobuyuki

    2017-01-01

    Degradation and alteration of natural environments by agriculture and other land uses have major consequences for vertebrate populations, particularly for spatial organization and movement patterns. We used GPS tracking to study the effect of land use and sex on the home range size and movement of a typical model species, the Ethiopian hedgehog. We tracked free-ranging hedgehogs from two areas with different land use practices: 24 from an area dominated by irrigated farms (12 ♂♂, 12 ♀♀) and 22 from a natural desert environment within a biosphere reserve (12 ♂♂, 10 ♀♀). Animals were significantly heavier in the resource-rich irrigated farms area (417.71 ± 12.77 SE g) than in the natural desert area (376.37 ± 12.71 SE g). Both habitat and sex significantly influenced home range size, with home ranges larger in the reserve than in the farms area. Total home ranges averaged 103 ha (± 17 SE) for males and 42 ha (± 11 SE) for females in the farms area, but were much larger in the reserve, averaging 230 ha (± 33 SE) for males and 150 ha (± 29 SE) for females. The home ranges of individuals of both sexes overlapped. Although females were heavier than males, body weight had no effect on home range size. The results suggest that resources provided in the farms (e.g. food, water, and shelters) influenced animal density and space use. Females aggregated around high-resource areas (either farms or rawdhats), whereas males roamed over greater distances, likely in search of mating opportunities to maximize reproductive success. Most individual home ranges overlapped with those of many other individuals of either sex, suggesting a non-territorial, promiscuous mating system. Patterns of space use and habitat utilization are key factors in shaping aspects of reproductive biology and mating systems. To minimize the impacts of agriculture on local wildlife, we recommend that biodiversity-friendly agro-environmental schemes be introduced in the Middle East, where agricultural land use is expanding.

  2. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of chemical content in an initial sample, using a whitebark pine population as an example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) in an initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determining the lower limit of representative sample size that guarantees satisfactory reliability of generalization proved very important for the cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, Nos. OI-173011, TR-37002 and III-43007]
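
    The calculation behind such a pilot-based sample size is the classic n = (t·s/E)² formula; below is a small sketch under assumed values, where the standard deviation and allowed error are illustrative rather than the study's terpene data.

    ```python
    import math
    from scipy import stats

    def representative_n(s, E, conf=0.95, n_pilot=10):
        """Classic pilot-sample formula n = (t * s / E)^2: trees needed so
        that the sample mean falls within E of the population mean with
        the stated confidence, given pilot standard deviation s."""
        t = stats.t.ppf(1 - (1 - conf) / 2, df=n_pilot - 1)
        return math.ceil((t * s / E) ** 2)

    # illustrative pilot of 10 trees: s = 4.2 (terpene content units), E = 2.0
    print(representative_n(s=4.2, E=2.0))
    ```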

  3. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscopy (SEM), using two commercial ZnO nanoparticles with different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form must first be broken down into nanosized particles through an appropriate process, generating a nanoparticle dispersion that is then deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering, and zeta potential measurements were used to optimize the sample preparation procedure and to check the quality of the results. Zeta potential measurements on flat surfaces also provide critical information, and save considerable time and effort, when selecting a suitable substrate that will attract and hold particles of different properties on its surface without further aggregation. This simple, low-cost methodology can be applied generally to size characterization of commercial ZnO nanoparticles for which only limited information is available from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  4. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small, we simulated correlated continuous data from two designs: (1) the two eyes of a subject in two different comparison groups; (2) the two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test based on generalized estimating equations (GEE), and the F-test based on a linear mixed effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches on two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (on the average of the two eyes, or on one randomly chosen eye) achieves better control of the type I error rate but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in different comparison groups, and the t-test on the average of the two eyes performs best when they are in the same group; the study design should therefore be considered when selecting the analysis approach.
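
    A quick way to see the design-1 result is to simulate bivariate-normal "eyes" and check the empirical power of the paired t-test. The sketch below does this under assumed parameter values (correlation 0.5, effect 0.4 SD, 10 subjects), which are illustrative rather than the paper's exact settings.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def paired_pvalue(n_subjects=10, rho=0.5, effect=0.4):
        """Design 1: each subject's two eyes get different treatments, so
        the inter-eye correlation rho is absorbed by pairing. Returns the
        paired t-test p-value for one simulated trial."""
        cov = [[1.0, rho], [rho, 1.0]]
        eyes = rng.multivariate_normal([0.0, effect], cov, size=n_subjects)
        return stats.ttest_rel(eyes[:, 0], eyes[:, 1]).pvalue

    pvals = np.array([paired_pvalue() for _ in range(2000)])
    print("empirical power:", (pvals < 0.05).mean())
    ```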

  5. Size-Related Differences in the Thermoregulatory Habits of Free-Ranging Komodo Dragons

    Directory of Open Access Journals (Sweden)

    Henry J. Harlow

    2010-01-01

    Thermoregulatory processes were compared among three size groups of free-ranging Komodo dragons (Varanus komodoensis): small (5-20 kg), medium (20-40 kg) and large (40-70 kg) lizards. While all size groups maintained a similar preferred body temperature of ≈35 °C, they achieved this end point differently. Small dragons appeared to engage in sun-shuttling behavior more vigorously than large dragons, as reflected in a greater frequency of daily ambient temperature and light intensity changes as well as greater activity and overall exposure to the sun. Large dragons were more sedentary and sun shuttled less; they appear to rely to a greater extent on microhabitat selection, and employed open-mouth (gaping) evaporative cooling, to maintain their preferred operational temperature and prevent overheating. A potential ecological consequence of size-specific thermoregulatory habits is the separation of foraging areas: in part, differences in thermoregulation could contribute to the shift from active foraging in small dragons to a more sedentary, sit-and-wait ambush strategy in adults.

  6. Evaluation of 1H NMR relaxometry for the assessment of pore size distribution in soil samples

    NARCIS (Netherlands)

    Jaeger, F.; Bowe, S.; As, van H.; Schaumann, G.E.

    2009-01-01

    1H NMR relaxometry is used in earth science as a non-destructive and time-saving method to determine pore size distributions (PSD) in porous media with pore sizes ranging from nm to mm, a broader range than is generally reported for X-ray computed tomography (X-ray CT) scanning.

  7. Among-Individual Variation in Desert Iguanas (Squamata: Dipsosaurus dorsalis): Endurance Capacity Is Positively Related to Home Range Size.

    Science.gov (United States)

    Singleton, Jennifer M; Garland, Theodore

    Among species of lizards, endurance capacity measured on a motorized treadmill is positively related to daily movement distance and time spent moving, but few studies have addressed such relationships at the level of individual variation within a sex and age category in a single population. Both endurance capacity and home range size show substantial individual variation in lizards, rendering them suitable for such studies. We predicted that these traits would be positively related, because endurance capacity is one of the factors with the potential to limit home range size. We measured the endurance capacity and home range size of adult male desert iguanas (Dipsosaurus dorsalis). Lizards were field-captured for measurements of endurance, and home range data were gathered through visual identification of previously marked individuals. Endurance was significantly repeatable between replicate trials conducted 1-17 d apart. The log of the higher of the two endurance trials was positively but not significantly related to log body mass, and the log of home range area was positively but not significantly related to log body mass, the number of sightings, or the time span from first to last sighting. As predicted, log endurance was positively correlated with log home range area (one-tailed tests), both for raw values and for body-mass residual endurance values. These results suggest that endurance capacity may have a permissive effect on home range size. Alternatively, individuals with larger home ranges may experience training effects (phenotypic plasticity) that increase their endurance.

  8. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) from a small data set comprising 55 samples (stations from which water samples were collected). Because data sets in ecology and the environmental sciences are invariably small, owing to the high cost of collecting and analyzing samples, we restricted our study to relatively small sample sizes. We focused on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
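
    The bootstrap procedure itself is compact; here is a sketch of the core loop with a toy random stand-in for the 55 × 22 water-quality matrix (purely for illustration), tracking how the spread of the leading eigenvalue changes with N.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def bootstrap_eigenvalues(X, sample_size, n_boot=100):
        """Draw bootstrap samples of the given size from data matrix X
        (rows = stations, columns = variables) and return the sorted PCA
        eigenvalues of each resample's correlation matrix."""
        eigs = []
        for _ in range(n_boot):
            idx = rng.integers(0, X.shape[0], size=sample_size)
            corr = np.corrcoef(X[idx], rowvar=False)
            eigs.append(np.sort(np.linalg.eigvalsh(corr))[::-1])
        return np.array(eigs)

    X = rng.normal(size=(55, 22))        # toy stand-in for the real matrix
    for n in (20, 30, 40, 50):
        lead = bootstrap_eigenvalues(X, n)[:, 0]
        print(n, round(lead.mean(), 2), round(lead.std(), 2))
    ```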

  9. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires … from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC-calculated reference intervals had a wider range compared to the variance component models…

  10. Size exclusion chromatography with online ICP-MS enables molecular weight fractionation of dissolved phosphorus species in water samples.

    Science.gov (United States)

    Venkatesan, Arjun K; Gan, Wenhui; Ashani, Harsh; Herckes, Pierre; Westerhoff, Paul

    2018-04-15

    Phosphorus (P) is an important and often limiting element in terrestrial and aquatic ecosystems. A lack of understanding of its distribution and structures in the environment limits the design of effective P mitigation and recovery approaches. Here we developed a robust method employing size exclusion chromatography (SEC) coupled to ICP-MS to determine the molecular weight (MW) distribution of P in environmental samples. The most abundant fraction of P varied widely among samples: (i) orthophosphate was the dominant fraction (93-100%) in one lake, two aerosol samples, and a DOC isolate; (ii) species in the 400-600 Da range were abundant (74-100%) in two surface waters; and (iii) species in the 150-350 Da range were abundant in wastewater effluents. SEC-DOC analysis of the aqueous samples using a similar SEC column showed overlapping peaks for the 400-600 Da species in the two surface waters, and for the >20 kDa species in the effluents, suggesting that these fractions are likely associated with organic matter. The MW resolution and performance of SEC-ICP-MS agreed well with time-integrated results obtained using the conventional ultrafiltration method. The results show that SEC in combination with ICP-MS and DOC detection has the potential to be a powerful and easy-to-use method for identifying unknown fractions of P in the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  12. Online purchases of an expanded range of condom sizes in comparison to current dimensional requirements allowable by US national standards.

    Science.gov (United States)

    Cecil, Michael; Warner, Lee; Siegler, Aaron J

    2013-11-01

    Across studies, 35-50% of men describe condoms as fitting poorly, and rates of condom use may be inhibited in part by the inaccessibility of appropriately sized condoms. As regulated medical devices, condoms conform to national standards such as those developed by the American Society for Testing and Materials (ASTM) or international standards such as those of the International Organization for Standardization (ISO). We describe the initial online sales experience of an expanded range of condom sizes and assess uptake relative to the currently required standard condom dimensions. Data on the initial 1,000 sales of an expanded range of condom sizes in the United Kingdom were collected from late 2011 through early 2012. Ninety-five condom sizes, comprising 14 lengths (83-238 mm) and 12 widths (41-69 mm), were available. Among the first 1,000 condom six-pack units sold, 83 of the 95 unique sizes were purchased, including all 14 lengths, all 12 widths, and both the smallest and the largest condoms. Initial purchases were made by 572 individuals from 26 countries. Only 13.4% of consumer sales fell within the ASTM's allowable range of sizes. These initial sales data suggest consumer interest in an expanded choice of condom sizes falling outside the range currently allowed by national and international standards organisations.

  13. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization, and we illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size of the secondary endpoint and with the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  14. Table of sample sizes needed to detect at least one defective with 100(1-α)% probability (α = 0.01, 0.05)

    International Nuclear Information System (INIS)

    Stewart, K.B.

    1972-01-01

    Tables are presented which give the random sample size needed in order to be 95% (99%) certain of detecting at least one defective item when there are r defective items in a population of n items. The application of the tables to certain safeguards problems is discussed. The range of the tables is as follows: r = 0(1)25, n = r(1)r + 999. (U.S.)
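
    The entries in such a table follow from the hypergeometric probability of drawing no defectives: the required sample size is the smallest s with C(n−r, s)/C(n, s) ≤ α. A short sketch that reproduces the computation (the population values in the example are arbitrary):

    ```python
    from math import comb

    def min_sample_size(n, r, alpha=0.05):
        """Smallest sample size s such that a simple random sample of s
        items from a population of n containing r defectives includes at
        least one defective with probability >= 1 - alpha, i.e.
        C(n - r, s) / C(n, s) <= alpha (math.comb returns 0 when s > n - r)."""
        for s in range(n + 1):
            if comb(n - r, s) / comb(n, s) <= alpha:
                return s
        return n

    print(min_sample_size(100, 5))        # 95% detection probability -> 45
    print(min_sample_size(100, 5, 0.01))  # 99% detection probability
    ```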

  15. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since such studies are time-consuming and the chemical analysis of a large number of samples is costly. Energy-dispersive X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique capable of determining several chemical elements. The aim of this study was therefore to provide information on the minimum analytical portion for the quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to a particle size of 0.5 mm. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated with the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for the subsequent determination of chemical elements in leaves. (author)

  17. Atomic size effects on local coordination and medium range order in molten trivalent metal chlorides

    International Nuclear Information System (INIS)

    Tatlipinar, H.; Akdeniz, Z.; Pastore, G.

    1992-08-01

    Structural correlations in molten trivalent metal chlorides are evaluated as functions of the metal ion size R_M across the range from LaCl3 (R_M ≈ 1.4 Å) to AlCl3 (R_M ≈ 0.8 Å), using a charged soft-sphere model and the hypernetted chain approximation. Main attention is given to trends in the local liquid structure (partial radial distribution functions, coordination numbers and bond lengths) and in the intermediate range order (first sharp diffraction peak in the number-number and partial structure factors). The trend towards fourfold local coordination of the metal ions, the stabilization of their first-neighbour chlorine cage and the growth of medium range order are found to proceed in parallel as the size of the metal ion is allowed to decrease at constant number density and temperature. A tendency to molecular-type local structure and liquid-vapour phase separation is found within the hypernetted chain scheme at small metal ion sizes corresponding to AlCl3 and is emphasized by decreasing the number density of the fluid. The predicted molecular units are rather strongly distorted Al2Cl6 dimers, in agreement with observation. The calculated structural trends for other trichlorides are compared with diffraction and transport data. (author). 17 refs, 8 figs, 1 tab

  18. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly, and as a result RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae for calculating the power of the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method, which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that, for RNA-seq data with sample sizes calculated by our method, the actual power of several popularly applied tests for differential expression is close to the desired power. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network (http://cran.r-project.org).
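
    The authors' ssizeRNA package implements the voom-based calculation in R. As a language-agnostic illustration of the idea of "average power under FDR control", here is a Monte Carlo sketch using plain per-gene t-tests and Benjamini-Hochberg correction; everything in it (gene counts, effect sizes, normality) is a simplifying assumption, not the paper's voom-weighted model.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def average_power(n_per_group, n_genes=2000, prop_de=0.1, lfc=1.0,
                      fdr=0.05, n_sim=20):
        """Simulate log-scale expression, apply Benjamini-Hochberg at the
        target FDR, and report average power over the truly DE genes."""
        n_de = int(n_genes * prop_de)
        powers = []
        for _ in range(n_sim):
            mu = np.zeros(n_genes)
            mu[:n_de] = lfc
            a = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
            b = rng.normal(mu[:, None], 1.0, size=(n_genes, n_per_group))
            p = stats.ttest_ind(a, b, axis=1).pvalue
            order = np.argsort(p)          # BH step-up procedure
            passed = p[order] <= fdr * (np.arange(n_genes) + 1) / n_genes
            k = passed.nonzero()[0].max() + 1 if passed.any() else 0
            rejected = np.zeros(n_genes, dtype=bool)
            rejected[order[:k]] = True
            powers.append(rejected[:n_de].mean())
        return float(np.mean(powers))

    for n in (3, 5, 8):
        print(n, round(average_power(n), 2))
    ```

    Scanning n until the reported average power reaches the target (e.g., 0.8) gives the required sample size under these assumptions.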

  19. Short versus long range interactions and the size of two-body weakly bound objects

    International Nuclear Information System (INIS)

    Lombard, R.J.; Volpe, C.

    2003-01-01

    Very weakly bound systems may manifest intriguing "universal" properties, independent of the specific interaction which keeps the system bound. An interesting example is given by relations between the size of the system and the separation energy, or scaling laws. So far, scaling laws have been investigated for short-range and long-range (repulsive) potentials. We report here on scaling laws for weakly bound two-body systems valid for a larger class of potentials, i.e. short-range potentials having a repulsive core and long-range attractive potentials. We emphasize analogies and differences between the short- and the long-range case. In particular, we show that the emergence of halos is a threshold phenomenon which can arise when the system is bound not only by short-range interactions but also by long-range ones, and this for any value of the orbital angular momentum l. These results enlarge the image of halo systems we are accustomed to. (orig.)

  20. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival; detecting more dramatic declines of 33% or 50% in survival over four years would require sample sizes of 25,000 or 10,000 per year, respectively. Sensitivity analyses revealed that increasing the recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution before initiating such a mark-recapture effort, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  1. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous; they may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Misusing a Poisson model for negative binomial data may therefore easily lose up to 20% power, depending on the value of the dispersion parameter.
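
    The dependence of the required sample size on the dispersion parameter can be illustrated with a rough Wald-type approximation for the test-versus-reference comparison. This is not the authors' power function: the common mean, the equivalence margin, and the NB2 variance parameterization (variance = mu + phi*mu^2, so phi = 0 recovers the Poisson case) are assumptions made only for the sketch.

      import numpy as np
      from scipy.stats import norm

      def n_per_arm(mu, phi, margin, alpha=0.05, power=0.8):
          """Rough per-arm sample size for equivalence of two NB2 count rates
          with common mean mu and dispersion phi, tested via a Wald statistic
          on the log rate ratio; the true ratio is 1, so z_{1-beta/2} applies.
          Uses Var(log mean estimate) ~ (1/mu + phi) / n per arm."""
          z_a = norm.ppf(1 - alpha)
          z_b = norm.ppf((1 + power) / 2)
          var_unit = 2 * (1 / mu + phi)  # n * Var(log rate-ratio estimate)
          return var_unit * ((z_a + z_b) / margin) ** 2

      for phi in (0.0, 0.25, 0.5, 1.0):
          print(phi, int(np.ceil(n_per_arm(mu=2.0, phi=phi, margin=np.log(1.25)))))

    Even this crude approximation shows the required sample size per arm roughly tripling as phi grows from 0 to 1, consistent with the authors' warning about misusing a Poisson model for negative binomial data.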

  2. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study, however, of the effects of sample size and marker set on the identified block structures and representative marker sets. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes, with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  3. Arecibo Radar Observation of Near-Earth Asteroids: Expanded Sample Size, Determination of Radar Albedos, and Measurements of Polarization Ratios

    Science.gov (United States)

    Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.

    2017-10-01

    The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, and elongated as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured the apparent rotation rate, radar reflectivity, apparent diameter, and radar albedo for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation; otherwise we derive a geometric cross-section from a measured diameter. When radar imaging was available, the diameter was measured from the apparent range depth. When radar imaging was not available, we used continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides the apparent rotation rate which, given an independent rotation measurement such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections. We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters of 50-150 m.

  4. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
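
    The phenomenon at issue can be demonstrated with a few lines of simulation: the sampling distribution of the mean is centered on the population mean, and its standard error shrinks as sigma/sqrt(n) even when the population is skewed. The exponential population and the sample sizes below are arbitrary choices for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      pop = rng.exponential(scale=2.0, size=100_000)  # a skewed population
      for n in (5, 30, 200):
          # 10,000 samples of size n; their means form the sampling distribution
          means = rng.choice(pop, size=(10_000, n)).mean(axis=1)
          print(n, means.mean().round(3), means.std().round(3),
                (pop.std() / np.sqrt(n)).round(3))

    The last two columns, the simulated and theoretical standard errors, agree closely, and the shape of the distribution of means grows more nearly normal as n increases, which is the central limit theorem at work.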

  5. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of

  6. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful for studying aerosol effects on both human health and the environment. Sampling with multi-stage cascade impactors (e.g., the Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE-PIGE analysis of aerosol samples. PIGE is routinely used alongside PIXE to correct the underestimation by PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to derive proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and the consequent implications for the retrieved aerosol modal structure have been demonstrated.

  7. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence excitation-emission matrix (EEM) spectroscopy … but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical … opportunities for elucidating the origins and biogeochemical properties of FDOM …

  8. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool for identifying different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSCs (95%) as well as the sample throughput compared to our previous study. By processing series of different sample sizes, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 released from soils or sediments, a respiration chamber and a depth sampler, which are connected to the MSC. We obtained very promising, low process blanks for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during passive sampling with the depth samplers.

  9. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprising a finite number of noisy stimulus samples. The model predicts the invariance of the sum of squared sensitivities across items for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
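
    The set-size prediction of the sample-size model can be written compactly; the following is a standard sketch with notation assumed here, not quoted from the paper. If a fixed pool of S noisy samples is shared equally among k display items, each item's sensitivity scales as the square root of its share, so the sum of squared sensitivities is independent of set size:

      \[
        d'_i \propto \sqrt{S/k}
        \quad\Longrightarrow\quad
        \sum_{i=1}^{k} (d'_i)^2 = k \cdot c\,\frac{S}{k} = cS ,
      \]

    for some constant c. The attention-weighted version replaces the equal shares S/k with unequal shares favoring the attention-capturing item, which breaks this invariance in the direction the phase-discrimination data required.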

  10. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG)-induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
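
    Since closed-form power is generally unavailable here, a planned design can also be assessed by Monte Carlo directly from pilot multinomial estimates. The sketch below takes that generic simulation route rather than the authors' formulas, and the pilot probabilities are invented for illustration.

      import numpy as np
      from scipy.stats import kruskal

      rng = np.random.default_rng(1)

      def kw_power(pilot_probs, n_per_group, alpha=0.05, n_sim=2000):
          """Monte Carlo power of the Kruskal-Wallis test for ordered
          categorical data; each row of pilot_probs is one group's estimated
          multinomial distribution over the ordinal categories."""
          k = len(pilot_probs[0])
          hits = 0
          for _ in range(n_sim):
              groups = [rng.choice(k, size=n_per_group, p=p) for p in pilot_probs]
              hits += kruskal(*groups).pvalue < alpha
          return hits / n_sim

      pilot = [[0.40, 0.30, 0.20, 0.10],
               [0.25, 0.25, 0.25, 0.25],
               [0.10, 0.20, 0.30, 0.40]]
      print(kw_power(pilot, n_per_group=20))

    Increasing n_per_group until the estimate reaches the desired power gives a simulation-based sample size; the value of the pilot-based formulas in the article is that they avoid this brute-force search.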

  11. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
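
    The second design question, how many grids are needed so that every category receives a predefined minimal number of attributes, is answered by exactly the kind of numerical simulation gridsampler automates. Below is a minimal sketch of that idea, not gridsampler's own algorithm; the category probabilities and the number of attributes elicited per grid are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      def required_grids(cat_probs, per_grid, min_per_cat, target=0.95, n_sim=1000):
          """Smallest number of grids such that, with probability >= target,
          every category collects at least min_per_cat attributes, assuming
          attributes fall into categories i.i.d. with probabilities cat_probs."""
          n = 1
          while True:
              ok = 0
              for _ in range(n_sim):
                  counts = rng.multinomial(n * per_grid, cat_probs)
                  ok += (counts >= min_per_cat).all()
              if ok / n_sim >= target:
                  return n
              n += 1

      probs = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])
      print(required_grids(probs, per_grid=10, min_per_cat=5))

    As in gridsampler, the answer is driven almost entirely by the rarest category (here 3% of attributes), which is why exploring the parameter set matters.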

  12. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes in performance, sales, markets, risks, social relations, or public opinion constitutes an important adaptive function. In a sequential paradigm devised to investigate the detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected, and nonchanges are erroneously perceived as increases, when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  13. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.

  14. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Using automatic image analysis, a weight is assigned to every field of view; the desired number of fields is then sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation being positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noise in possibly realistic ranges. In all cases examined, the proportionator …

  15. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
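
    The order of magnitude of the "sample size of >30" finding can be checked with a crude binomial approximation; this ignores the finite-population and diploid-sampling structure that the paper treats rigorously, so the probabilities below are illustrative rather than the paper's values.

      import math
      from scipy.stats import binom

      def accuracy_prob(n_individuals, p, tol=0.05):
          """P(|p_hat - p| <= tol) when genotyping n_individuals diploids
          (2n allele copies), under a simple binomial sampling approximation."""
          m = 2 * n_individuals
          hi = math.floor(m * (p + tol))
          lo = math.ceil(m * (p - tol))
          return binom.cdf(hi, m, p) - binom.cdf(lo - 1, m, p)

      for n in (10, 30, 100, 200):
          print(n, round(accuracy_prob(n, p=0.3), 3))

    Even this optimistic approximation needs well over 30 individuals before the 0.05-accuracy probability approaches 95%, supporting the sampling-design message of the study.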

  17. Mass size distribution of particle-bound water

    Science.gov (United States)

    Canepari, S.; Simonetti, G.; Perrino, C.

    2017-09-01

    The thermal-ramp Karl Fischer method (tr-KF) for the determination of PM-bound water has been applied to size-segregated PM samples collected in areas subject to different environmental conditions (protracted atmospheric stability, desert dust intrusion, urban atmosphere). This method, based on the use of a thermal ramp for the desorption of water from PM samples and subsequent analysis by the coulometric KF technique, had previously been shown to differentiate water contributions retained with different strengths and associated with different chemical components in the atmospheric aerosol. The application of the method to size-segregated samples revealed that water showed a typical mass size distribution in each of the three environmental situations considered. A very similar size distribution was shown by the chemical PM components that prevailed during each event: ammonium nitrate in the case of atmospheric stability, crustal species in the case of desert dust, and road-dust components in the case of urban sites. The shape of the tr-KF curve varied according to the size of the collected particles. Considering the size ranges that best characterize each event (fine fraction for atmospheric stability, coarse fraction for dust intrusion, bi-modal distribution for urban dust), this shape is consistent with the typical tr-KF shape shown by water bound to the chemical species that predominate in the same PM size range (ammonium nitrate; crustal species; secondary/combustion species and road-dust components).

  18. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for naturally occurring dust, jet-mill-ground dust, and ball-mill-ground dust.

  19. Sensitivity Range Analysis of Infrared (IR) Transmitter and Receiver Sensor to Detect Sample Position in Automatic Sample Changer

    International Nuclear Information System (INIS)

    Syirrazie Che Soh; Nolida Yussup; Nur Aira Abdul Rahman; Maslina Ibrahim

    2016-01-01

    The sensitivity range of an IR transmitter and receiver sensor influences the sensor's effectiveness in detecting the position of a sample. The purpose of this analysis is therefore to determine a suitable design and specification for the sensor's electronic driver so as to achieve an appropriate sensitivity range for the required operation. The activities related to this analysis cover the electronic design concept and specification, calibration of the design specification, and evaluation of the design specification for the required application. (author)

  20. The mechanical behavior of metal alloys with grain size distribution in a wide range of strain rates

    Science.gov (United States)

    Skripnyak, V. A.; Skripnyak, V. V.; Skripnyak, E. G.

    2017-12-01

    The paper discusses a multiscale simulation approach to constructing grain structures of metals and alloys that provide high tensile strength together with ductility. This work compares the mechanical behavior of light alloys and the influence of the grain size distribution over a wide range of strain rates. The influence of the grain size distribution on the inelastic deformation and fracture of aluminium and magnesium alloys is investigated by computer simulation over a wide range of strain rates. It is shown that, for light alloys with a bimodal grain distribution and a coarse-grained structure, the yield stress depends on the logarithm of the normalized strain rate.

  1. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits, and from 10 and 100 g sizes using a bead-beating method (SARDI), was used as template for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities, retrieving optimal diversity while still capturing rarer taxa and decreasing replicate variation.

  2. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of DNA barcodes in widely distributed species. The results of random sampling showed that when the sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, the average intraspecific distance tended to stabilize. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  3. Influences of landscape heterogeneity on home-range sizes of brown bears

    Science.gov (United States)

    Mangipane, Lindsey S.; Belant, Jerrold L.; Hiller, Tim L.; Colvin, Michael E.; Gustine, David; Mangipane, Buck A.; Hilderbrand, Grant V.

    2018-01-01

    Animal space use is influenced by many factors and can affect individual survival and fitness. Under optimal foraging theory, individuals use landscapes to optimize acquisition of high-quality resources while minimizing the energy used to acquire them. The spatial resource variability hypothesis states that as patchiness of resources increases, individuals use larger areas to obtain the resources necessary to meet energetic requirements. Additionally, under the temporal resource variability hypothesis, seasonal variation in available resources can reduce distances moved while providing a variety of food sources. Our objective was to determine whether seasonal home ranges of brown bears (Ursus arctos) were influenced by temporal availability and spatial distribution of resources and whether individual reproductive status, sex, or size (i.e., body mass) mediated space use. To test our hypotheses, we radio-collared brown bears (n = 32 [9 male, 23 female]) in 2014-2016 and used 18 a priori selected linear models to evaluate seasonal utilization distributions (UD) in relation to our hypotheses. Our top-ranked model by AICc supported the spatial resource variability hypothesis and included percentage of like adjacency (PLADJ) of all cover types, sex and reproductive class (P ≥ 0.17 for males, solitary females, and females with dependent young), and body mass (kg; P = 0.66). Based on this model, for every percentage increase in PLADJ, UD area was predicted to increase 1.16 times for all sex and reproductive classes. Our results suggest that landscape heterogeneity influences brown bear space use; however, we found that bears used larger areas when landscape homogeneity increased, presumably to gain a diversity of food resources. Our results did not support the temporal resource variability hypothesis, suggesting that the spatial distribution of food was more important than seasonal availability in relation to brown bear home-range size.

  4. Liquid praseodymium heat content by levitation calorimetry. [Sample size 0.5-1.5 g; 1460 to 2289 K]

    Energy Technology Data Exchange (ETDEWEB)

    Stretz, L.A.; Bautista, R.G.

    1976-01-01

    The high-temperature heat content of liquid praseodymium was measured experimentally by the levitation calorimetry technique. The samples, ranging in size from 0.5 to 1.5 g, were simultaneously levitated and heated by a radiofrequency generator in an argon-helium mixture prior to being dropped into a conventional copper-block drop calorimeter. Corrections were made for the convection and radiation losses during the fall of the sample from the levitation chamber into the calorimeter. The praseodymium data, from 1460 to 2289 K, were fitted by the following equation, where the indicated errors represent the average deviation of the experimental values from the values predicted by the equation: H_T − H_298.15 = (41.57 ± 0.29)(T − 1208) + (41733 ± 197) J/mol. (auth)

  5. Elemental mass size distribution of the Debrecen urban aerosol

    International Nuclear Information System (INIS)

    Kertesz, Zs.; Szoboszlai, Z.; Dobos, E.; Borbely-Kiss, I.

    2007-01-01

    Size distribution is one of the basic properties of atmospheric aerosol. It is closely related to the origin, chemical composition and age of aerosol particles, and it influences the optical properties, environmental effects and health impact of aerosol. As part of the ongoing aerosol research in the Group of Ion Beam Applications of Atomki, elemental mass size distributions of urban aerosol were determined using the particle induced X-ray emission (PIXE) analytical technique. Aerosol sampling campaigns were carried out with 9-stage PIXE International cascade impactors, which separate the aerosol into 10 size fractions in the 0.05-30 μm range. Five 48-hour samplings were done in the garden of Atomki, in April and in October 2007. Both campaigns included weekend and working-day samplings. Basically two different kinds of particles could be identified according to the size distribution. In the size distributions of Al, Si, Ca, Fe, Ba, Ti, Mn and Co, one dominant peak can be found around the 3 μm aerodynamic diameter size range. These are the elements of predominantly natural origin. Elements like S, Cl, K, Zn, Pb and Br appear with high frequency in the 0.25-0.5 μm size range. These elements originate mainly from anthropogenic sources. However, sometimes a second, smaller peak appears in the size distributions of these elements at the 2-4 μm size range, indicating different sources. Differences were found between the size distributions of the spring and autumn samples. In the case of elements of soil origin, the size distribution was shifted towards smaller diameters during October, and a second peak appeared around 0.5 μm. A possible explanation for this phenomenon is the different meteorological conditions. No differences were found between weekend and working days in the size distribution; however, the concentration values were smaller during the weekend.

  6. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco, spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were lifted only within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (based on grain size classification). This relatively high share represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of the two sampled dust devils was comparable with respect to vertical grain size distribution and relative particle load, although the dust devils differed in their dimensions and intensities. A general trend of decreasing grain size with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  7. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests; however, their specificity was poor at sample size n = 30. Applying nonparametric methods (or a Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
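
    The core finding is easy to reproduce in outline, though not in the paper's exact design: simulate samples from a lognormal parent and count how often the Shapiro-Wilk test correctly rejects normality. The correct-rejection rate drops noticeably at n = 30. The lognormal shape parameter and the number of simulations are arbitrary choices for illustration.

      import numpy as np
      from scipy.stats import shapiro, lognorm

      rng = np.random.default_rng(5)
      for n in (30, 60):
          rejects = sum(
              shapiro(lognorm.rvs(0.5, size=n, random_state=rng)).pvalue < 0.05
              for _ in range(500)
          )
          print(n, rejects / 500)  # correct-rejection rate vs a lognormal parent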

  8. Blubber cortisol: a potential tool for assessing stress response in free-ranging dolphins without effects due to sampling.

    Directory of Open Access Journals (Sweden)

    Nicholas M Kellar

    When paired with dart biopsying, quantifying cortisol in blubber tissue may provide an index of relative stress levels (i.e., activation of the hypothalamus-pituitary-adrenal axis) in free-ranging cetacean populations while minimizing the effects of the act of sampling. To validate this approach, cortisol was extracted from blubber samples collected from beach-stranded and bycaught short-beaked common dolphins using a modified blubber steroid isolation technique and measured via commercially available enzyme immunoassays. The measurements exhibited appropriate quality characteristics when analyzed via a bootstrapped stepwise parallelism analysis (observed/expected = 1.03, 95% CI: 0.996-1.08) and showed no evidence of matrix interference with increasing sample size across typical biopsy tissue masses (75-150 mg; r² = 0.012, p = 0.78, slope = 0.022 ng cortisol deviation per μl of tissue extract added). The relationships between blubber cortisol and eight potential cofactors, namely (1) fatality type (e.g., stranded or bycaught), (2) specimen condition (state of decomposition), (3) total body length, (4) sex, (5) sexual maturity state, (6) pregnancy status, (7) lactation state, and (8) adrenal mass, were assessed using a Bayesian generalized linear model averaging technique. Fatality type was the only factor correlated with blubber cortisol, and the magnitude of the effect size was substantial: beach-stranded individuals had on average 6.1-fold higher cortisol levels than bycaught individuals. Because of the difference in conditions surrounding these two fatality types, we interpret this relationship as evidence that blubber cortisol is indicative of stress response. We found no evidence of seasonal variation or of a relationship between cortisol and the remaining cofactors.

  9. Characteristics of dimethylaminium and trimethylaminium in atmospheric particles ranging from supermicron to nanometer sizes over eutrophic marginal seas of China and oligotrophic open oceans.

    Science.gov (United States)

    Yu, Peiran; Hu, Qingjing; Li, Kai; Zhu, Yujiao; Liu, Xiaohuan; Gao, Huiwang; Yao, Xiaohong

    2016-12-01

    In this study, we characterized dimethylaminium (DMA⁺) and trimethylaminium (TMA⁺) in size-segregated atmospheric particles during three cruise campaigns in the marginal seas of China and one cruise campaign mainly in the northwest Pacific Ocean (NWPO). A 14-stage nano-MOUDI sampler was utilized for sampling atmospheric particles ranging from 18 μm down to 0.010 μm. Among the four cruise campaigns, the highest concentrations of DMA⁺ and TMA⁺ in PM₁₀ were observed over the South Yellow Sea (SYS) in August 2015, i.e., 0.76 ± 0.12 nmol m⁻³ for DMA⁺ (average value ± standard deviation) and 0.93 ± 0.13 nmol m⁻³ for TMA⁺. The lowest values were observed over the NWPO in April 2015, i.e., 0.28 ± 0.16 nmol m⁻³ for DMA⁺ and 0.22 ± 0.12 nmol m⁻³ for TMA⁺. In general, the size distributions of the two ions exhibited a bi-modal pattern, with one mode at 0.01-0.1 μm and the other at 0.1-1.8 μm. The two ions' mode at 0.01-0.1 μm was observed here for the first time. This mode was largely enhanced in samples collected over the SYS in August 2015, leading to high mole ratios of (DMA⁺ + TMA⁺)/NH₄⁺ in PM₀.₁ (0.4 ± 0.8, median value ± standard deviation), with the ions' concentrations in PM₀.₁ accounting for ~10% and ~40% of their corresponding concentrations in PM₁₀. This implied that (DMA⁺ + TMA⁺) likely played an important role in neutralizing acidic species in the smaller particles. Using SO₄²⁻, NO₃⁻ and NH₄⁺ as references, we confirm that the elevated concentrations of DMA⁺ and TMA⁺ in the 0.01-0.1 μm size range were probably real signals rather than sampling artifacts. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of using … and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar …

  11. Radon exhalation rates from slate stone samples in Aravali Range in Haryana

    International Nuclear Information System (INIS)

    Upadhyay, S.B.; Kant, K.; Chakarvarti, S.K.

    2012-01-01

    The slate stone tiles are very popular for covering the walls of rooms. Radon is released into ambient air from slate stones due to the ubiquitous uranium and radium in them, thus increasing the airborne radon concentration. The radioactivity in slate stones is related to the radioactivity in the rocks from which the slate stone tiles are formed. In the present investigation, the radon emanating from slate stone samples collected from different slate mines in the Aravali range of hills in the Haryana state of northern India has been estimated. For the measurement of the radon concentration emanating from these samples, alpha-sensitive LR-115 type II plastic track detectors have been used. The alpha particles emitted by the radon form tracks in these detectors. After chemical etching, the density of registered tracks is used to calculate the radon concentration and exhalation rates of radon using the required formulae. The measurements indicate normal to somewhat elevated levels of radon concentration emanating from the slate stone samples collected from the Aravali range of hills in north India. The results will be discussed in the full paper. (author)

  12. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements: the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods, but in biological SAXS of monodisperse systems, sample preparation is routinely performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation, as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate, using example proteins, that SEC purification does not always provide ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small-molecule additive, and I outline a simple additive screening method for sample preparation.

  13. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach using the specular reflection approximation. The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. The obtained values are in good agreement with the experimental results reported by Heremans et al.

  14. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between protocols and publications. … The method of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials) …

  15. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors, for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying the sample size per group, the number of sacrifices, the number of sacrificed animals at each interval, if any, and the scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.

  16. Body size, growth and life span: implications for the polewards range shift of Octopus tetricus in south-eastern Australia.

    Science.gov (United States)

    Ramos, Jorge E; Pecl, Gretta T; Moltschaniwskyj, Natalie A; Strugnell, Jan M; León, Rafael I; Semmens, Jayson M

    2014-01-01

    Understanding the response of any species to climate change can be challenging. However, in short-lived species the faster turnover of generations may facilitate the examination of responses associated with longer-term environmental change. Octopus tetricus, a commercially important species, has undergone a recent polewards range shift in the coastal waters of south-eastern Australia, thought to be associated with the southerly extension of the warm East Australian Current. At the cooler temperatures of a polewards distribution limit, growth of a species could be slower, potentially leading to a bigger body size and resulting in a slower population turnover, affecting population viability at the extreme of the distribution. Growth rates, body size, and life span of O. tetricus were examined at the leading edge of a polewards range shift in Tasmanian waters (40°S and 147°E) throughout 2011. Octopus tetricus had a relatively small body size and short lifespan of approximately 11 months that, despite cooler temperatures, would allow a high rate of population turnover and may facilitate the population increase necessary for successful establishment in the new extended area of the range. Temperature, food availability and gender appear to influence growth rate. Individuals that hatched during cooler and more productive conditions, but grew during warming conditions, exhibited faster growth rates and reached smaller body sizes than individuals that hatched into warmer waters but grew during cooling conditions. This study suggests that fast growth, small body size and associated rapid population turnover may facilitate the range shift of O. tetricus into Tasmanian waters.

  17. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above mentioned procedures

  18. Age- and size-related reference ranges: a case study of spirometry through childhood and adulthood.

    Science.gov (United States)

    Cole, T J; Stanojevic, S; Stocks, J; Coates, A L; Hankinson, J L; Wade, A M

    2009-02-28

    Age-related reference ranges are useful for assessing growth in children. The LMS method is a popular technique for constructing growth charts that model the age-changing distribution of the measurement in terms of the median, coefficient of variation and skewness. Here the methodology is extended to references that depend on body size as well as age, by exploiting the flexibility of the generalised additive models for location, scale and shape (GAMLSS) technique. GAMLSS offers general linear predictors for each moment parameter and a choice of error distributions, which can handle kurtosis as well as skewness. A key question with such references is the nature of the age-size adjustment, additive or multiplicative, which is explored by comparing the identity link and log link for the median predictor. There are several measurements whose reference ranges depend on both body size and age. As an example, models are developed here for the first four moments of the lung function variables forced expiratory volume in 1 s (FEV(1)), forced vital capacity (FVC) and FEV(1)/FVC in terms of height and age, in a data set of 3598 children and adults aged 4 to 80 years. The results show a strong multiplicative association between spirometry, height and age, with a large and nonlinear age effect across the age range. Variability also depends nonlinearly on age and to a lesser extent on height. FEV(1) and FVC are close to normally distributed, while FEV(1)/FVC is appreciably skew to the left. GAMLSS is a powerful technique for the construction of such references, which should be useful in clinical medicine. Copyright (c) 2008 John Wiley & Sons, Ltd.
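
    For reference, the LMS construction that the paper extends summarizes the age-conditional distribution through three smooth curves, the median M(t), coefficient of variation S(t), and Box-Cox power L(t); the standard z-score transformation (a known result, not quoted from this paper) is

      \[
        z =
        \begin{cases}
          \dfrac{\left(y/M(t)\right)^{L(t)} - 1}{L(t)\,S(t)}, & L(t) \neq 0,\\[1.5ex]
          \dfrac{\ln\left(y/M(t)\right)}{S(t)}, & L(t) = 0.
        \end{cases}
      \]

    The GAMLSS extension described above adds height to the predictors of each curve and allows a fourth parameter for kurtosis, with the log link on the median corresponding to the multiplicative height-age association the authors report.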

  19. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
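
    The "random chance" scenario is a coupon-collector process, so the sample size needed for saturation can be simulated in a few lines. The code probabilities below are invented for illustration, and each source is simplified to reveal a single code; comparing a uniform population with a skewed one of the same size shows that saturation depends more on the mean probability of observing codes than on how many there are.

      import numpy as np

      rng = np.random.default_rng(11)

      def n_for_saturation(code_probs, n_sim=1000):
          """Average number of sampled information sources until every code
          has been observed at least once, when each source reveals one code
          drawn with the given probabilities (the "random chance" scenario)."""
          k = len(code_probs)
          total = 0
          for _ in range(n_sim):
              seen, steps = set(), 0
              while len(seen) < k:
                  seen.add(int(rng.choice(k, p=code_probs)))
                  steps += 1
              total += steps
          return total / n_sim

      uniform = np.full(20, 1 / 20)
      skewed = np.array([0.30] + [0.70 / 19] * 19)
      print(n_for_saturation(uniform), n_for_saturation(skewed))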

  20. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the tendency of the coefficient of variation was to decrease as the sample size increased, while in the homogeneous gelatin-gel it remained always constant around 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area/image area and maximum air-cell height/image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
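
    The convergence criterion is straightforward to express in code: estimate the coefficient of variation of the apparent Young's modulus from 25 replicates at each sample size and find where it levels off. The spread model below (a floor plus a term decaying with size) is a toy assumption, not the bread measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

def modulus_cv(size_mm, n_reps=25):
    """Coefficient of variation of the apparent Young's modulus over
    n_reps replicate tests at one cube size. The spread model (a floor
    plus a term decaying with size) is a toy assumption."""
    spread = 0.02 + 0.5 / size_mm
    e = rng.normal(1.0, spread, n_reps)
    return e.std(ddof=1) / e.mean()

for size in (5, 10, 20, 45, 60):
    print(size, round(modulus_cv(size), 3))
# The RVE is read off as the smallest size at which the CV has converged.
```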

  1. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼ 5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in solution containing carbon substantially enhanced precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report

  2. Surface and finite size effect on fluctuations dynamics in nanoparticles with long-range order

    Science.gov (United States)

    Morozovska, A. N.; Eliseev, E. A.

    2010-02-01

    The influence of surface and finite size on the dynamics of the order parameter fluctuations and critical phenomena in three-dimensional (3D)-confined systems with long-range order had not previously been considered theoretically. In this paper, we study the influence of surface and finite size on the dynamics of the order parameter fluctuations in particles of arbitrary shape. We consider concrete examples of spherical and cylindrical ferroic nanoparticles within the Landau-Ginzburg-Devonshire phenomenological approach. Allowing for the strong surface energy contribution in micro- and nanoparticles, the analytical expressions derived for the Ornstein-Zernike correlator of the long-range order parameter spatial-temporal fluctuations, the dynamic generalized susceptibility, and the discrete spectra of relaxation times and correlation radii are different from those known for bulk systems. The analytical expressions obtained for the correlation function of the order parameter spatial-temporal fluctuations in micro- and nanosized systems can be useful for the quantitative analysis of the dynamical structural factors determined from magnetic resonance diffraction and scattering spectra. Besides the practical importance of the correlation function for the analysis of experimental data, the derived expressions for the fluctuation strength determine the fundamental limits of applicability of phenomenological theories to 3D-confined systems.

  3. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...

  4. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts

  5. Home-range size and overlap within an introduced population of the Cuban Knight Anole, Anolis equestris (Squamata: Iguanidae

    Directory of Open Access Journals (Sweden)

    Paul M. Richards

    2011-07-01

    Full Text Available Many studies have investigated the spatial relationships of terrestrial lizards, but arboreal species remain poorly studied because they are difficult to observe. The conventional view of home-range size and overlap among territorial, polygynous species of lizards is that: (1) male home ranges are larger than those of females; (2) male home ranges usually encompass, or substantially overlap, those of several females; and (3) male home-range overlap varies but often is minimal, but female home ranges frequently overlap extensively. However, the paucity of pertinent studies makes it difficult to generalize these patterns to arboreal lizards. We investigated home-range size and overlap in the arboreal Knight Anole, Anolis equestris, and compared our findings to published home-range data for 15 other species of Anolis. Using radiotelemetry and mark-recapture/resight techniques, we analyzed the home ranges of individuals from an introduced population of Knight Anoles in Miami, Florida. The home ranges of both sexes substantially overlapped those of same- and different-sex individuals. In addition, male and female home ranges did not differ significantly, an unusual observation among lizard species. If one compares both male and female home ranges to those of other Anolis species, Knight Anoles have significantly larger home ranges, except for two species for which statistical comparisons were not possible. Our results suggest that home ranges and sex-specific spatial arrangements of canopy lizards may differ from those of more terrestrial species.

  6. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.

  7. Cooling rate and size effects on the medium-range structure of multicomponent oxide glasses simulated by molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tilocca, Antonio [Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ (United Kingdom)

    2013-09-21

    A set of molecular dynamics simulations were performed to investigate the effect of cooling rate and system size on the medium-range structure of melt-derived multicomponent silicate glasses, represented by the quaternary 45S5 Bioglass composition. Given the significant impact of the glass degradation on applications of these materials in biomedicine and nuclear waste disposal, bulk structural features which directly affect the glass dissolution process are of particular interest. Connectivity of the silicate matrix, ion clustering and nanosegregation, distribution of ring and chain structural patterns represent critical features in this context, which can be directly extracted from the models. A key issue is represented by the effect of the computational approach on the corresponding glass models, especially in light of recent indications questioning the suitability of conventional MD approaches (that is, involving melt-and-quench of systems containing ∼10³ atoms at cooling rates of 5-10 K/ps) when applied to model these glasses. The analysis presented here compares MD models obtained with conventional and nonconventional cooling rates and system sizes, highlighting the trend and range of convergence of specific structural features in the medium range. The present results show that time-consuming computational approaches involving much lower cooling rates and/or significantly larger system sizes are in most cases not necessary in order to obtain a reliable description of the medium-range structure of multicomponent glasses. We identify the convergence range for specific properties and use them to discuss models of several glass compositions for which a possible influence of cooling-rate or size effects had been previously hypothesized. The trends highlighted here represent an important reference to obtain reliable models of multicomponent glasses and extract converged medium-range structural features which affect the glass degradation and thus their application

  8. Cooling rate and size effects on the medium-range structure of multicomponent oxide glasses simulated by molecular dynamics

    International Nuclear Information System (INIS)

    Tilocca, Antonio

    2013-01-01

    A set of molecular dynamics simulations were performed to investigate the effect of cooling rate and system size on the medium-range structure of melt-derived multicomponent silicate glasses, represented by the quaternary 45S5 Bioglass composition. Given the significant impact of the glass degradation on applications of these materials in biomedicine and nuclear waste disposal, bulk structural features which directly affect the glass dissolution process are of particular interest. Connectivity of the silicate matrix, ion clustering and nanosegregation, distribution of ring and chain structural patterns represent critical features in this context, which can be directly extracted from the models. A key issue is represented by the effect of the computational approach on the corresponding glass models, especially in light of recent indications questioning the suitability of conventional MD approaches (that is, involving melt-and-quench of systems containing ∼10³ atoms at cooling rates of 5-10 K/ps) when applied to model these glasses. The analysis presented here compares MD models obtained with conventional and nonconventional cooling rates and system sizes, highlighting the trend and range of convergence of specific structural features in the medium range. The present results show that time-consuming computational approaches involving much lower cooling rates and/or significantly larger system sizes are in most cases not necessary in order to obtain a reliable description of the medium-range structure of multicomponent glasses. We identify the convergence range for specific properties and use them to discuss models of several glass compositions for which a possible influence of cooling-rate or size effects had been previously hypothesized. The trends highlighted here represent an important reference to obtain reliable models of multicomponent glasses and extract converged medium-range structural features which affect the glass degradation and thus their application

  9. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article is about how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
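
    The adjustment itself reduces to one line, sketched below as a hypothetical helper; note that this additive rule differs from the common multiplicative inflation n/(1 - d).

```python
import math

def adjust_for_dropouts(n_dropout_free, dropout_rate):
    """Additive rule of thumb described above: add the number of subjects
    expected to drop out of a sample of the original size."""
    return n_dropout_free + math.ceil(n_dropout_free * dropout_rate)

print(adjust_for_dropouts(64, 0.20))  # 64 + 13 = 77 subjects per group
```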

  10. How much motion is too much motion? Determining motion thresholds by sample size for reproducibility in developmental resting-state MRI

    Directory of Open Access Journals (Sweden)

    Julia Leonard

    2017-03-01

    Full Text Available A constant problem developmental neuroimagers face is in-scanner head motion. Children move more than adults and this has led to concerns that developmental changes in resting-state connectivity measures may be artefactual. Furthermore, children are challenging to recruit into studies and therefore researchers have tended to take a permissive stance when setting exclusion criteria on head motion. The literature is not clear regarding our central question: How much motion is too much? Here, we systematically examine the effects of multiple motion exclusion criteria at different sample sizes and age ranges in a large openly available developmental cohort (ABIDE; http://preprocessed-connectomes-project.org/abide). We checked (1) the reliability of resting-state functional magnetic resonance imaging (rs-fMRI) pairwise connectivity measures across the brain and (2) the accuracy with which we can separate participants with autism spectrum disorder from typically developing controls based on their rs-fMRI scans using machine learning. We find that reliability on average is primarily sensitive to the number of participants considered, but that increasingly permissive motion thresholds lower case-control prediction accuracy for all sample sizes.

  11. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  12. Effective sampling range of food-based attractants for female Anastrepha suspensa (Diptera: Tephritidae).

    Science.gov (United States)

    Kendra, Paul E; Epsky, Nancy D; Heath, Robert R

    2010-04-01

    Release-recapture studies were conducted with both feral and sterile females of the Caribbean fruit fly, Anastrepha suspensa (Loew) (Diptera: Tephritidae), to determine sampling range for a liquid protein bait (torula yeast/borax) and for a two-component synthetic lure (ammonium acetate and putrescine). Tests were done in a guava, Psidium guajava L., grove and involved releasing flies at a central point and recording the numbers captured after 7 h and 1, 2, 3, and 6 d in an array of 25 Multilure traps located 9-46 m from the release point. In all tests, highest rate of recapture occurred within the first day of release, so estimations of sampling range were based on a 24-h period. Trap distances were grouped into four categories (the farthest beyond 30 m from the release point) and relative trapping efficiency (percentage of capture) was determined for each distance group. Effective sampling range was defined as the maximum distance at which relative trapping efficiency was ≥25%. This corresponded to the area in which 90% of the recaptures occurred. Contour analysis was also performed to document spatial distribution of fly dispersal. In tests with sterile flies, immature females dispersed farther and were recovered in higher numbers than mature females, regardless of attractant, and recapture of both cohorts was higher with torula yeast. For mature feral flies, range of the synthetic lure was determined to be 30 m. With sterile females, effective range of both attractants was 20 m. Contour maps indicated that wind direction had a strong influence on the active space of attractants, as reflected by distribution of captured flies.

  13. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human-induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is coarse relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30 m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  14. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One such development is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    4/2010, No. 3 (2010), pp. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others: GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords: rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

  16. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction makes significant practical sense for stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window, for the next-step-ahead forecasting, rolls on by adding the most recent derived prediction result while deleting the first value of the formerly used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
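
    A bare-bones sketch of the rolling mechanism follows, with an ordinary least-squares AR fit standing in for the newly developed AR equation of the paper; the window length and model order are arbitrary choices.

```python
import numpy as np

def ar_fit_predict(window, order=2):
    """Fit AR(order) with intercept by least squares on the window and
    return the one-step-ahead prediction."""
    n = len(window)
    y = window[order:]
    X = np.column_stack([window[i:n - order + i] for i in range(order)])
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.concatenate([[1.0], window[-order:]]) @ coef

def rolling_forecast(series, horizon=5, window_len=12, order=2):
    """Rolling mechanism: forecast one step ahead, then append the
    prediction and drop the oldest value before refitting."""
    window = list(series[-window_len:])
    preds = []
    for _ in range(horizon):
        p = ar_fit_predict(np.asarray(window), order)
        preds.append(p)
        window = window[1:] + [p]
    return preds

data = np.cumsum(np.random.default_rng(2).normal(0.5, 1.0, 30))
print(rolling_forecast(data))
```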

  17. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
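
    For concreteness, the statistic being powered can be written in a few lines of NumPy. This is a minimal textbook pseudo-F with a permutation p-value, not the authors' R package; repeating the p-value computation over many simulated distance matrices and counting rejections yields a power estimate.

```python
import numpy as np

def permanova_f(D, labels):
    """Pseudo-F from a distance matrix D: SS_total is the sum of squared
    pairwise distances divided by N; SS_within sums the analogous
    quantity per group; SS_between is the difference."""
    labels = np.asarray(labels)
    n = len(labels)
    iu = np.triu_indices(n, 1)
    ss_total = (D[iu] ** 2).sum() / n
    ss_within = 0.0
    groups = np.unique(labels)
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = D[np.ix_(idx, idx)]
        su = np.triu_indices(len(idx), 1)
        ss_within += (sub[su] ** 2).sum() / len(idx)
    a = len(groups)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_pvalue(D, labels, n_perm=999, seed=3):
    """Permutation p-value for the observed pseudo-F."""
    rng = np.random.default_rng(seed)
    f_obs = permanova_f(D, labels)
    exceed = sum(permanova_f(D, rng.permutation(labels)) >= f_obs
                 for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)
```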

  18. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for testing both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, as a function of approximately (1-2θ)⁴.

  19. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  20. Movement Patterns, Home Range Size and Habitat Selection of an Endangered Resource Tracking Species, the Black-Throated Finch (Poephila cincta cincta).

    Science.gov (United States)

    Rechetelo, Juliana; Grice, Anthony; Reside, April Elizabeth; Hardesty, Britta Denise; Moloney, James

    2016-01-01

    Understanding movement patterns and home range of species is paramount in ecology; it is particularly important for threatened taxa as it can provide valuable information for conservation management. To address this knowledge gap for a range-restricted endangered bird, we estimated home range size, daily movement patterns and habitat use of a granivorous subspecies in northeast Australia, the black-throated finch (Poephila cincta cincta; BTF), using radio-tracking and re-sighting of colour-banded birds. Little is known about basic aspects of its ecology, including movement patterns and home range sizes. From 2011-2014 we colour-banded 102 BTF and radio-tracked 15 birds. We generated home ranges for the 15 tracked BTF, calculated using kernel and Minimum Convex Polygon techniques. More than 50% of the re-sightings occurred within 200 m of the banding site (n = 51 out of 93 events) and within 100 days of capture. Mean home-range estimates with kernel (50%, 95% probability) and Minimum Convex Polygons were 10.59 ha, 50.79 ha and 46.27 ha, respectively. Home range size differed between two capture sites but no seasonal differences were observed. BTF home ranges overlapped four habitat types among eight available. Habitat selection was different from random at Site 1 (χ² = 373.41, df = 42, p < 0.001). Movements may be related to resource bottleneck periods. Daily movement patterns differed between sites, which is likely linked to the fact that the sites differ in the spatial distribution of resources. The work provides information about home range sizes and local movement of BTF that will be valuable for targeting effective management and conservation strategies for this endangered granivore.

  1. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.

  2. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    Science.gov (United States)

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358

  3. Use of methods for specifying the target difference in randomised controlled trial sample size calculations: Two surveys of trialists' practice.

    Science.gov (United States)

    Cook, Jonathan A; Hislop, Jennifer M; Altman, Doug G; Briggs, Andrew H; Fayers, Peter M; Norrie, John D; Ramsay, Craig R; Harvey, Ian M; Vale, Luke D

    2014-06-01

    For the most recent trial, the target difference was usually one viewed as important by a stakeholder group, mostly also viewed as a realistic difference given the interventions under evaluation, and sometimes one that led to an achievable sample size. The response rates achieved were relatively low despite the surveys being short, well presented, and having utilised reminders. Substantial variations in practice exist, with awareness, use, and willingness to recommend methods varying substantially. The findings support the view that sample size calculation is a more complex process than would appear to be the case from trial reports and protocols. Guidance on approaches for sample size estimation may increase both awareness and use of appropriate formal methods. © The Author(s), 2014.

  4. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China)

    2015-07-07

    Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.

  5. ULTRASONOGRAPHIC ASSESSMENT OF NECK MUSCULAR SIZE AND RANGE OF MOTION IN RUGBY PLAYERS.

    Science.gov (United States)

    Hemelryck, Walter; Calistri, Josselin; Papadopoulou, Virginie; Theunissen, Sigrid; Dugardeyn, Christian; Balestra, Costantino

    2018-02-01

    World Rugby Union laws are constantly evolving towards stringent injury-prevention, particularly for contested scrums, since front row players are most at risk of cervical spine injuries. Recently, some countries have also introduced tailored training programs and minimum performance requirements for playing in the front row. Nevertheless, these approaches lack an objective assessment of each cervical muscle that would provide protective support. Since front row players are the most at risk for cervical spine injuries due to the specific type of contact during scrums, the purpose of this study was to ascertain whether significant differences exist in neck muscle size and range of motion between front row players and players of other positions, across playing categories. Cross-sectional controlled laboratory study. 129 sub-elite male subjects from various first-team squads of Belgian Rugby clubs were recruited. Subjects were grouped according to age: Junior (J), Senior (S) and Veteran (V; > 35 years old), as well as playing position: front row players (J = 10, S = 12, V = 11 subjects), (rest of the) pack (J = 12, S = 12, V = 10), backs (J = 10, S = 11, V = 11). An age-matched control group of non-rugby players was also recruited (J = 10, S = 10, V = 10). For each subject, the total neck circumference (NC) and the cervical range of motion (CROM) were measured. In addition, the thickness of the trapezius (T), splenius capitis (SCa), semispinalis capitis (SCb), semispinalis cervicis (SPC), sternocleidomastoid muscles (SCOM), and the total thickness of all four structures (TT), were measured using ultrasonography. In each age category, compared to controls, rugby players were found to have decreased CROM, an increase in neck circumference (NC), and increased total thickness (TT), trapezius (T), semispinalis capitis (SCb) and sternocleidomastoid muscle (SCOM) sizes. For junior players, the thickness of the semispinalis cervicis (SPC) was also increased compared to controls. The CROM was decreased

  6. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Vol. 49, No. 4 (2015), pp. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords: filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.953, year: 2015

  7. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  8. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except the Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
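
    A sketch of the pooled-resampling idea for the unpaired two-sample case: both bootstrap samples are drawn from the pooled data, so the null hypothesis of equal means holds by construction. This follows the description above; details of the published procedure may differ.

```python
import numpy as np
from scipy import stats

def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=4):
    """Draw both bootstrap samples from the pooled data (imposing the
    null of equal means) and compare the observed Welch t statistic
    against the resulting null distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs = stats.ttest_ind(x, y, equal_var=False).statistic
    exceed = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        if abs(stats.ttest_ind(bx, by, equal_var=False).statistic) >= abs(t_obs):
            exceed += 1
    return (exceed + 1) / (n_boot + 1)

rng = np.random.default_rng(5)
x, y = rng.lognormal(0.0, 1.0, 8), rng.lognormal(0.5, 1.0, 8)
print(pooled_bootstrap_t_test(x, y))
```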

  9. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  10. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    Science.gov (United States)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (< 0.17 μm) fractions were examined as suspensions in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum, as measured during 24 h.

  11. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  12. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  13. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  14. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%), while other trials were overpowered (i.e., real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
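
    For the continuous-outcome case, the retro-fitting step can be sketched with the usual normal-approximation formulas; the effect size and standard deviations below are hypothetical.

```python
import numpy as np
from scipy import stats

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2*((z_{1-a/2} + z_power)*sd/delta)**2."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil(2 * ((za + zb) * sd / delta) ** 2))

def real_power(delta, sd_true, n, alpha=0.05):
    """Retro-fit the formula: the power actually achieved with n per
    group when the true SD differs from the design assumption."""
    za = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(np.sqrt(n / 2) * delta / sd_true - za)

n = n_per_group(delta=5.0, sd=10.0)            # designed for 80% power
print(n, real_power(5.0, sd_true=12.0, n=n))   # SD underestimated: ~65% power
```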

  16. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)

  17. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    Reer, B.

    2004-01-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)

  18. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  19. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    Science.gov (United States)

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
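
    The censoring operation is a one-line clamp applied before the regression. Below is a bare-bones pointwise-linear-regression sketch on simulated fields; the 52-location layout, visit schedule, and noise level are assumptions, not the study data.

```python
import numpy as np

def pointwise_slopes(thresholds, years, floor_db=20.0):
    """Censor threshold estimates below floor_db to floor_db, then fit a
    least-squares slope (dB/year) at every field location: a bare-bones
    pointwise linear regression (PLR)."""
    t = np.maximum(np.asarray(thresholds, dtype=float), floor_db)
    X = np.column_stack([np.ones(len(years)), years])
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return coef[1]                             # one slope per location

years = np.arange(9) * 0.5                     # semi-annual visits, 4 years
rng = np.random.default_rng(7)
fields = 30 - 1.0 * years[:, None] + rng.normal(0, 2, (9, 52))
print(pointwise_slopes(fields, years)[:5])     # slopes with censoring applied
```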

  20. A comparison of geochemical exploration techniques and sample media within accretionary continental margins: an example from the Pacific Border Ranges, Southern Alaska, U.S.A.

    Science.gov (United States)

    Sutley, S.J.; Goldfarb, R.J.; O'Leary, R. M.; Tripp, R.B.

    1990-01-01

    The Pacific Border Ranges of the southern Alaskan Cordillera are composed of a number of allochthonous tectonostratigraphic terranes. Within these terranes are widespread volcanogenic, massive sulfide deposits in and adjacent to portions of accreted ophiolite complexes, bands and disseminations of chromite in accreted island-arc ultramafic rocks, and epigenetic, gold-bearing quartz veins in metamorphosed turbidite sequences. A geochemical pilot study was undertaken to determine the most efficient exploration strategy for locating these types of mineral deposits within the Pacific Border Ranges and other typical convergent continental margin environments. High-density sediment sampling was carried out in first- and second-order stream channels surrounding typical gold, chromite and massive sulfide occurrences. At each site, a stream-sediment and a panned-concentrate sample were collected. In the laboratory, the stream sediments were sieved into coarse-sand, fine- to medium-sand, and silt- to clay-size fractions prior to analysis. One split of the panned concentrates was retained for analysis; a second split was further concentrated by gravity separation in heavy liquids and then divided into magnetic, weakly magnetic and nonmagnetic fractions for analysis. A number of different techniques including atomic absorption spectrometry, inductively coupled plasma atomic emission spectrometry and semi-quantitative emission spectrography were used to analyze the various sample media. Comparison of the various types of sample media shows that in this tectonic environment it is most efficient to include a silt- to clay-size sediment fraction and a panned-concentrate sample. Even with the relatively low detection limits for many elements by plasma spectrometry and atomic absorption spectrometry, anomalies reflecting the presence of gold veins could not be identified in any of the stream-sediment fractions. Unseparated panned-concentrate samples should be analyzed by emission

  1. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that, for most of the UK hardwood species examined, tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, and including explanatory covariates.

  2. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimates the distribution of breast density in Korean women within a 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
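
    A sketch of the kind of stratified-sampling simulation described (the three strata match the abstract, but the stratum sizes, density-category proportions, and proportional allocation used here are hypothetical):

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical stratum sizes (summing to the 1,340,362 women in the frame)
        # and hypothetical proportions of a given density category in each stratum.
        strata = {"metropolitan": (800_000, 0.55),
                  "urban":        (400_000, 0.50),
                  "rural":        (140_362, 0.45)}
        N = sum(n for n, _ in strata.values())
        n_total = 4000                                 # total sample size from the abstract

        def one_survey():
            # One simulated survey: proportional allocation, stratum-weighted estimate.
            est = 0.0
            for n_h, p_h in strata.values():
                n_h_sample = round(n_total * n_h / N)
                x = rng.binomial(n_h_sample, p_h)      # women in the density category
                est += (n_h / N) * x / n_h_sample
            return est

        estimates = [one_survey() for _ in range(1000)]  # repeat the simulation 1,000 times
        print(np.mean(estimates), np.std(estimates))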

  3. Size-selective separation of polydisperse gold nanoparticles in supercritical ethane.

    Science.gov (United States)

    Williams, Dylan P; Satherley, John

    2009-04-09

    The aim of this study was to use supercritical ethane to selectively disperse alkanethiol-stabilized gold nanoparticles of one size from a polydisperse sample in order to recover a monodisperse fraction of the nanoparticles. A disperse sample of metal nanoparticles with diameters in the range of 1-5 nm was prepared using established techniques and then further purified by Soxhlet extraction. The purified sample was subjected to supercritical ethane at a temperature of 318 K in the pressure range 50-276 bar. Particles were characterized by UV-vis absorption spectroscopy, TEM, and MALDI-TOF mass spectrometry. The results show that with increasing pressure the dispersibility of the nanoparticles increases; this effect is most pronounced for smaller nanoparticles. At the highest pressure investigated, a sample of the particles was effectively stripped of all the smaller particles, leaving a monodisperse sample. The relationship between dispersibility and supercritical fluid density for two different size samples of alkanethiol-stabilized gold nanoparticles was considered using the Chrastil chemical equilibrium model.
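
    The Chrastil model referred to above relates solubility (here, dispersibility) s to solvent density rho through ln s = k ln rho + a/T + b. A sketch of estimating the association number k at fixed temperature (all data values hypothetical):

        import numpy as np

        # Hypothetical dispersibility s (mg/L) vs supercritical ethane density rho (g/L)
        rho = np.array([180.0, 230.0, 280.0, 330.0, 380.0])
        s = np.array([0.8, 2.1, 4.9, 10.2, 19.5])

        # Chrastil: ln s = k ln rho + c, where c = a/T + b at the fixed T = 318 K
        k, c = np.polyfit(np.log(rho), np.log(s), 1)
        print(k)   # association number; comparing k between size fractions is the idea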

  4. Metamemory and memory for a wide range of font sizes: What is the contribution of perceptual fluency?

    Science.gov (United States)

    Undorf, Monika; Zimdahl, Malte F

    2018-04-26

    Words printed in a larger 48-point font are judged to be more memorable than words printed in a smaller 18-point font, although font size does not affect actual memory. To clarify the basis of this font size effect on metamemory and memory, 4 experiments investigated how presenting words in 48 (Experiment 1) or 4 (Experiments 2 to 4) font sizes between 6 point and 500 point affected judgments of learning (JOLs) and recall performance. Response times in lexical decision tasks were used to measure perceptual fluency. In all experiments, perceptual fluency was lower for words presented in very small and very large font sizes than for words presented in intermediate font sizes. In contrast, JOLs increased monotonically with font size, even beyond the point where a large font impaired perceptual fluency. Assessments of people's metacognitive beliefs about font size revealed that the monotonic increase in JOLs was not due to beliefs masking perceptual fluency effects (Experiment 3). Also, JOLs still increased across the whole range of font sizes when perceptual fluency was made salient at study (Experiment 4). In all experiments but Experiment 4, recall performance increased with increasing font size, although to a lesser extent than JOLs. Overall, the current study supports the idea that metacognitive beliefs underlie font size effects in metamemory. As important, it reveals that people's font size beliefs have some accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate the sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for the outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results The method attained the nominal power value in simulation studies and compared favourably with the Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
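
    A sketch of the calculation under the stated log-normal assumption (function names and numbers hypothetical; the paper's exact formulae may differ in detail). From a median m and an untransformed variance v, u = exp(s2) solves u^2 - u - v/m^2 = 0, the effect on the log scale is log(m2/m1), and a standard two-sample normal-approximation formula then applies:

        import math
        from scipy.stats import norm

        def log_scale_var(median, var):
            # Invert Var(X) = (exp(s2) - 1) * exp(2*mu + s2) with median = exp(mu):
            # u = exp(s2) solves u**2 - u - var/median**2 = 0.
            u = (1 + math.sqrt(1 + 4 * var / median ** 2)) / 2
            return math.log(u)

        def n_per_group(m1, m2, v, alpha=0.05, power=0.9):
            sigma2 = (log_scale_var(m1, v) + log_scale_var(m2, v)) / 2
            delta = math.log(m2 / m1)          # difference in medians on the log scale
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil(2 * sigma2 * (z / delta) ** 2)

        print(n_per_group(m1=10.0, m2=14.0, v=60.0))   # hypothetical medians and variance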

  6. Long-ranged interactions in thin TiN films at the superconductor-insulator transition?

    Energy Technology Data Exchange (ETDEWEB)

    Kronfeldner, Klaus; Strunk, Christoph [Institute for Experimental and Applied Physics, University of Regensburg (Germany); Baturina, Tatyana [A.V. Rzhanov Institute of Semiconductor Physics SB RAS, Novosibirsk (Russian Federation)

    2015-07-01

    We measured IV characteristics and magnetoresistance of square TiN films in the vicinity of the disorder-tuned superconductor-insulator transition (SIT) for different sizes (5 μm to 240 μm). While the films are superconducting at zero magnetic field, at finite fields a SIT occurs. The resistance shows thermally activated behaviour on both sides of the SIT. Deep in the superconducting regime the activation energy grows linearly with the sample size, as expected for a size-independent critical current density. Closer to the SIT the activation energy becomes clearly size-independent. On the insulating side the magnetoresistance maximum and the activation energy both grow logarithmically with sample size, which is consistent with a size-limited charge-BKT (Berezinskii-Kosterlitz-Thouless) scenario. In order to test for the presence of long-ranged interactions in our films, we investigate the influence of a top gate. It is expected to screen the possible long-ranged interactions, as the distance from the film to the gate is much shorter than the electrostatic screening length deduced from the size-dependent activation energy.

  7. Synthesis and magnetic properties of size-selected CoPt nanoparticles

    International Nuclear Information System (INIS)

    Tournus, F.; Blanc, N.; Tamion, A.; Hillenkamp, M.; Dupuis, V.

    2011-01-01

    CoPt nanoparticles are widely studied, in particular for their potentially very high magnetic anisotropy. However, their magnetic properties can differ from the bulk ones and are expected to vary with the particle size. In this paper, we report the synthesis and characterization of well-defined CoPt nanoparticle samples produced in ultrahigh vacuum conditions following a physical route: the mass-selected low energy cluster beam deposition technique. This approach relies on an electrostatic deviation of ionized clusters which allows us to easily adjust the particle size, independently of the deposited equivalent thickness (i.e. the surface or volume particle density in a sample). Diluted samples made of CoPt particles with different diameters, embedded in amorphous carbon, are studied by transmission electron microscopy and superconducting quantum interference device (SQUID) magnetometry, which gives access to the magnetic anisotropy energy distribution. We then compare the magnetic properties of two different particle sizes. The results are found to be consistent with an anisotropy constant (including its distribution) which does not evolve with the particle size in the range considered. Highlights: samples of mass-selected CoPt nanoparticles are synthesized by an original physical method; the magnetic properties of two different particle sizes are compared; the anisotropy constant (including its dispersion) does not evolve in the range considered; these results illustrate some invariance properties of ZFC curves.

  8. In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size

    Directory of Open Access Journals (Sweden)

    Stefano Schiavon

    2010-01-01

    In vitro degradability with DaisyII (D) equipment is commonly performed with 0.5 g of feed sample in each filter bag. The literature reports that a reduction in the ratio of sample size to bag surface could facilitate the release of soluble or fine particulate matter. A reduction of sample size to 0.25 g could improve the correlation between the measurements provided by D and the conventional batch culture (BC). This hypothesis was tested by analysing the results of 2 trials. In trial 1, 7 feeds were incubated for 48 h with rumen fluid (3 runs x 4 replications) both with D (0.5 g/bag) and BC; the regressions between the mean values provided for the various feeds in each run by the 2 methods, either for NDF (NDFd) or in vitro true DM (IVTDMD) degradability, had R2 of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs x 8 replications) with D (0.25 g/bag) and BC; the corresponding regressions for NDFd and IVTDMD showed R2 of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.

  9. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly predict the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in a binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method is higher than under Firth's approach for small sample sizes. For larger sample sizes, the probability decreases and is essentially identical between the two approaches. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
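
    A sketch of the first aim for the special case of a single continuous covariate, where complete separation can be checked directly (the probit data-generating model and all settings are hypothetical):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)

        def is_separated(x, y):
            # Complete separation with one continuous covariate: the covariate ranges
            # of the two response groups do not overlap.
            return (x[y == 1].min() > x[y == 0].max()
                    or x[y == 1].max() < x[y == 0].min())

        def separation_rate(n, beta=1.5, reps=2000):
            count = 0
            for _ in range(reps):
                x = rng.normal(size=n)
                y = rng.binomial(1, norm.cdf(beta * x))   # probit data-generating model
                if y.min() == y.max() or is_separated(x, y):
                    count += 1                            # degenerate or separated sample
            return count / reps

        for n in (15, 30, 60, 120):
            print(n, separation_rate(n))    # separation becomes rarer as n grows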

  10. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    …and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity, thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium…

  11. Structural changes and out-of-sample prediction of realized range-based variance in the stock market

    Science.gov (United States)

    Gong, Xu; Lin, Boqiang

    2018-03-01

    This paper examines the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of the S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before it. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-RRV model when the models are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed window, the alternative threshold value in the ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performance of most other existing HAR-RRV-type models beyond those used in this paper.
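
    A sketch of the plain HAR-RRV regression that the HAR-RRV-SC model extends (the structural-change dummies from the ICSS algorithm are omitted; data and settings hypothetical): the next-day realized range-based variance is regressed on its lagged daily value and its weekly and monthly averages.

        import numpy as np

        def har_design(rrv):
            # Daily, weekly (5-day) and monthly (22-day) averages of lagged RRV.
            n = len(rrv)
            d = rrv[21:n - 1]
            w = np.array([rrv[t - 4:t + 1].mean() for t in range(21, n - 1)])
            m = np.array([rrv[t - 21:t + 1].mean() for t in range(21, n - 1)])
            X = np.column_stack([np.ones_like(d), d, w, m])
            y = rrv[22:]
            return X, y

        rrv = np.abs(np.random.default_rng(3).normal(size=1000)) * 1e-4  # toy RRV series
        X, y = har_design(rrv)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates of the HAR terms
        fitted_last = X[-1] @ beta                     # fitted value for the final day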

  12. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article presents the concept of the size of an equivalent sample, which helps to identify subperiods with a similar structure within a data series. Moreover, this article examines the alternative of adjusting the variance of a series while keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. Two examples are presented: the first corresponds to seven simulated series with a first-order autoregressive structure, and the second to seven meteorological series of surface air temperature anomalies in two Colombian regions
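
    For a first-order autoregressive process, a common expression for the size of the equivalent sample is n' = n(1 - rho1)/(1 + rho1), where rho1 is the lag-1 autocorrelation; a sketch on a simulated AR(1) series (assuming this standard formula, which may differ from the article's exact definition):

        import numpy as np

        def equivalent_sample_size(x):
            x = np.asarray(x, dtype=float) - np.mean(x)
            rho1 = (x[:-1] @ x[1:]) / (x @ x)          # lag-1 autocorrelation
            return len(x) * (1 - rho1) / (1 + rho1)

        rng = np.random.default_rng(4)
        x = np.zeros(500)
        for t in range(1, 500):                        # AR(1) series with phi = 0.6
            x[t] = 0.6 * x[t - 1] + rng.normal()
        print(equivalent_sample_size(x))               # roughly 500 * 0.4 / 1.6 = 125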

  13. Dependence of size and size distribution on reactivity of aluminum nanoparticles in reactions with oxygen and MoO3

    International Nuclear Information System (INIS)

    Sun, Juan; Pantoya, Michelle L.; Simon, Sindee L.

    2006-01-01

    The oxidation reaction of aluminum nanoparticles with oxygen gas and the thermal behavior of a metastable intermolecular composite (MIC) composed of the aluminum nanoparticles and molybdenum trioxide are studied with differential scanning calorimetry (DSC) as a function of the size and size distribution of the aluminum particles. Both broad and narrow size distributions have been investigated with aluminum particle sizes ranging from 30 to 160 nm; comparisons are also made to the behavior of micrometer-size particles. Several parameters have been used to characterize the reactivity of aluminum nanoparticles, including the fraction of aluminum that reacts prior to aluminum melting, heat of reaction, onset and peak temperatures, and maximum reaction rates. The results indicate that the reactivity of aluminum nanoparticles is significantly higher than that of the micrometer-size samples, but depending on the measure of reactivity, it may also depend strongly on the size distribution. The isoconversional method was used to calculate the apparent activation energy, and the values obtained for both the Al/O2 and Al/MoO3 reactions are in the range of 200-300 kJ/mol

  14. Thermal conductivity of graphene mediated by strain and size

    International Nuclear Information System (INIS)

    Kuang, Youdi; Shi, Sanqiang; Wang, Xinjiang

    2016-01-01

    Based on first-principles calculations and a full iterative solution of the linearized Boltzmann–Peierls transport equation for phonons, we systematically investigate effects of strain, size and temperature on the thermal conductivity k of suspended graphene. The calculated size-dependent and temperature-dependent k for finite samples agree well with experimental data. The results show that, in contrast to the convergent room-temperature k = 5450 W/m-K of unstrained graphene at a sample size of ~8 cm, k of strained graphene diverges with increasing sample size even at high temperature. Out-of-plane acoustic phonons are responsible for the significant size effect in unstrained and strained graphene due to their ultralong mean free path, and acoustic phonons with wavelength smaller than 10 nm contribute 80% to the intrinsic room temperature k of unstrained graphene. Tensile strain hardens the flexural modes and increases their lifetimes, causing an interesting dependence of k on sample size and strain due to the competition between boundary scattering and intrinsic phonon–phonon scattering. k of graphene can be tuned over a large range by strain for sizes larger than 500 μm. These findings shed light on the nature of thermal transport in two-dimensional materials and may guide predicting and engineering k of graphene by varying strain and size

  15. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Science.gov (United States)

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately

  16. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to

  17. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary between treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in follow-up time, by setting the follow-up time for all individuals to be the mean follow-up time, may greatly underestimate the required sample size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
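
    A sketch in the spirit of the derivation (this is a generic large-sample approximation based on per-subject Fisher information for the log rate, not the paper's exact formulae; it averages over draws from a follow-up time distribution rather than fixing everyone at the mean follow-up, which is precisely the shortcut the paper warns against; all parameter values are hypothetical):

        import numpy as np
        from scipy.stats import norm

        def per_subject_info(rate, kappa, t_draws):
            # Expected Fisher information for log(rate) contributed by one subject,
            # averaged over Monte Carlo draws from the follow-up time distribution.
            t = np.asarray(t_draws)
            return np.mean(rate * t / (1 + kappa * rate * t))

        def n_per_group(r0, r1, k0, k1, t_draws, margin=1.25, alpha=0.05, power=0.9):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            v = 1 / per_subject_info(r0, k0, t_draws) + 1 / per_subject_info(r1, k1, t_draws)
            delta = np.log(r1 / r0) - np.log(margin)   # log rate ratio vs the NI margin
            return int(np.ceil(z ** 2 * v / delta ** 2))

        t_draws = np.random.default_rng(5).uniform(0.5, 2.0, 10_000)  # years of follow-up
        print(n_per_group(r0=0.8, r1=0.8, k0=0.7, k1=0.7, t_draws=t_draws))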

  18. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    Science.gov (United States)

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate a sampling RSD in the same region as the analytical RSD for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit

  19. Unambiguous range-Doppler LADAR processing using 2 giga-sample-per-second noise waveforms

    International Nuclear Information System (INIS)

    Cole, Z.; Roos, P.A.; Berg, T.; Kaylor, B.; Merkel, K.D.; Babbitt, W.R.; Reibel, R.R.

    2007-01-01

    We demonstrate sub-nanosecond range and unambiguous sub-50-Hz Doppler resolved laser radar (LADAR) measurements using spectral holographic processing in rare-earth ion doped crystals. The demonstration utilizes pseudo-random-noise 2 giga-sample-per-second baseband waveforms modulated onto an optical carrier

  20. Unambiguous range-Doppler LADAR processing using 2 giga-sample-per-second noise waveforms

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Z. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States)]. E-mail: cole@s2corporation.com; Roos, P.A. [Spectrum Lab, Montana State University, P.O. Box 173510, Bozeman, MT 59717 (United States); Berg, T. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States); Kaylor, B. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States); Merkel, K.D. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States); Babbitt, W.R. [Spectrum Lab, Montana State University, P.O. Box 173510, Bozeman, MT 59717 (United States); Reibel, R.R. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States)

    2007-11-15

    We demonstrate sub-nanosecond range and unambiguous sub-50-Hz Doppler resolved laser radar (LADAR) measurements using spectral holographic processing in rare-earth ion doped crystals. The demonstration utilizes pseudo-random-noise 2 giga-sample-per-second baseband waveforms modulated onto an optical carrier.

  1. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
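
    A sketch of the underlying scalar ESS computation (the paper's contribution is extending this to topologies, e.g. by mapping each sampled tree to a scalar such as its distance to a focal tree; that mapping and the paper's exact estimators are not reproduced here):

        import numpy as np

        def ess(trace):
            # Effective sample size: N / (1 + 2 * sum of positive-lag autocorrelations),
            # truncating the sum at the first negative autocorrelation.
            x = np.asarray(trace, dtype=float) - np.mean(trace)
            n = len(x)
            acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
            s = 0.0
            for k in range(1, n):
                if acf[k] < 0:
                    break
                s += acf[k]
            return n / (1 + 2 * s)

        # Autocorrelated toy "trace": the ESS is far below the 10,000 raw samples.
        rng = np.random.default_rng(6)
        trace = np.zeros(10_000)
        for t in range(1, len(trace)):
            trace[t] = 0.9 * trace[t - 1] + rng.normal()
        print(ess(trace))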

  2. Spacecraft Trajectory Estimation Using a Sampled-Data Extended Kalman Filter with Range-Only Measurements

    National Research Council Canada - National Science Library

    Erwin, R. S; Bernstein, Dennis S

    2005-01-01

    …In this paper we use a sampled-data extended Kalman filter to estimate the trajectory of a target satellite when only range measurements are available from a constellation of orbiting spacecraft…

  3. Measurement of inclusion size by laser ablation ICP mass spectrometry

    International Nuclear Information System (INIS)

    Karasev, Andrey V.; Suito, Hideaki

    2004-01-01

    By using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), the measurement of particle size has been made for one-component oxides (Al2O3 and MgO) and multicomponent oxides (12CaO·7Al2O3 and CaO-Al2O3-MgO) located on the surface of iron or glass samples. A method of particle size estimation by LA-ICP-MS has been developed, coupled with a new method of preparing samples with particles. Size calibration lines for Al2O3, MgO and CaO particles have been obtained. The results of particle size measurement by LA-ICP-MS are compared with those by SEM and single-particle optical sensing (SPOS) methods. It was confirmed that LA-ICP-MS shows promise for the quick measurement of inclusion composition and size in metal and other materials. The size frequency distributions of Al2O3 particles measured by LA-ICP-MS in iron samples agree reasonably well with those by SEM and SPOS in the range of particle diameter from 2 to 20 μm. The size of Al2O3, MgO and complex oxide (12CaO·7Al2O3 and CaO-Al2O3-MgO) particles measured by LA-ICP-MS is in good agreement with that by SEM in the range of particle diameter from 10 to 40 μm. (author)

  4. Study of uranium mineralization in rock samples from marwat range bannu basin by fission track analysis technique

    International Nuclear Information System (INIS)

    Qureshi, A.Z.; Ullah, K.; Ullah, N.; Akram, M.

    2004-07-01

    The Geophysics Division, Atomic Energy Minerals Centre (AEMC), Lahore has planned a uranium exploration program in the Marwat Range, Bannu Basin. In this connection, 30 thin sections of rock samples, collected from four areas of the Marwat Range, namely Darra Tang, Simukili, Karkanwal and Sheikhillah, and one from the Salt Range, were provided to the Nuclear Geology Group of the Physics Research Division, PINSTECH for the study of the nature and mechanism of uranium mineralization. These studies aim to support the design of a uranium exploration strategy by providing the loci of uranium sources in the Marwat and Salt Ranges. The samples have been studied using the fission track analysis technique. (author)

  5. Towards traceable size determination of extracellular vesicles

    Directory of Open Access Journals (Sweden)

    Zoltán Varga

    2014-02-01

    Background: Extracellular vesicles (EVs) have clinical importance due to their roles in a wide range of biological processes. The detection and characterization of EVs are challenging because of their small size, low refractive index, and heterogeneity. Methods: In this manuscript, the size distribution of an erythrocyte-derived EV sample is determined using state-of-the-art techniques such as nanoparticle tracking analysis, resistive pulse sensing, and electron microscopy, and novel techniques in the field, such as small-angle X-ray scattering (SAXS) and size exclusion chromatography coupled with dynamic light scattering detection. Results: The mode values of the size distributions of the studied erythrocyte EVs reported by the different methods show only small deviations around 130 nm, but there are differences in the widths of the size distributions. Conclusion: SAXS is a promising technique with respect to traceability, as this technique has already been applied for traceable size determination of solid nanoparticles in suspension. To reach traceable measurement of EVs, monodisperse and highly concentrated samples are required.

  6. Changes in home range sizes and population densities of carnivore species along the natural to urban habitat gradient

    Czech Academy of Sciences Publication Activity Database

    Šálek, Martin; Drahníková, L.; Tkadlec, Emil

    2015-01-01

    Roč. 45, č. 1 (2015), s. 1-14 ISSN 0305-1838 Institutional support: RVO:68081766 Keywords : Carnivores * home range size * natural–urban gradient * population density * review Subject RIV: EG - Zoology Impact factor: 4.116, year: 2015

  7. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
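
    A sketch of two of the measures compared: Cohen's d with a pooled standard deviation, and the nonparametric estimator A_w computed as the probability of superiority with ties counted half (toy data; the simulation design itself is not reproduced):

        import numpy as np

        def cohens_d(x, y):
            nx, ny = len(x), len(y)
            sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                         / (nx + ny - 2))              # pooled standard deviation
            return (np.mean(x) - np.mean(y)) / sp

        def a_w(x, y):
            # P(X > Y) + 0.5 * P(X = Y): probability-of-superiority effect size.
            diffs = np.subtract.outer(np.asarray(x, float), np.asarray(y, float))
            return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / diffs.size

        rng = np.random.default_rng(7)
        x = rng.lognormal(0.3, 1.0, 40)    # skewed, heteroscedastic toy samples
        y = rng.lognormal(0.0, 0.5, 25)
        print(cohens_d(x, y), a_w(x, y))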

  8. Home ranges of lions in the Kalahari, Botswana exhibit vast sizes and high temporal variability.

    Science.gov (United States)

    Zehnder, André; Henley, Stephen; Weibel, Robert

    2018-06-01

    The central Kalahari region in Botswana is one of the few remaining ecosystems with a stable lion population. Yet, relatively little is known about the ecology of the lions there. As an entry point, home range estimations provide information about the space utilization of the studied animals. The home ranges of eight lions in this region were determined to investigate their spatial overlaps and spatiotemporal variations. We found that, except for MCP, all home range estimators yielded comparable results regarding size and shape. The home ranges of all individuals were located predominantly inside the protected reserves. Their areas were among the largest known for lions, at 1131-4314 km2 (95%), with no significant differences between males and females. Numerous overlaps between lions of different sexes were detected, although these originate from different groups. A distance chart confirmed that most of these lions directly encountered each other once or several times. Strong temporal variations of the home ranges were observed that did not match a seasonal pattern. The exceptionally large home ranges are likely to be caused by the sparse and dynamic prey populations. Since the ungulates in the study area also move in an opportunistic way, strong spatiotemporal home range variations emerge. This can lead to misleading home range estimates. We therefore recommend clarifying the stability of the home ranges by applying several levels of temporal aggregation. The lack of strict territoriality is likely an adaptation to the variable prey base and the high energetic costs associated with defending a large area.

  9. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  10. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
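
    A sketch of the SPRT mechanics with Wald's boundaries and the three-way decision described above (the binomial transmission model and all numbers are hypothetical simplifications of the genetic setting):

        import math

        def sprt(observations, p0=0.5, p1=0.65, alpha=0.05, beta=0.05):
            # Wald boundaries on the cumulative log-likelihood ratio.
            upper = math.log((1 - beta) / alpha)     # crossing -> "associated"
            lower = math.log(beta / (1 - alpha))     # crossing -> "not associated"
            llr = 0.0
            for k, x in enumerate(observations, 1):  # x = 1 if the allele is transmitted
                llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
                if llr >= upper:
                    return "associated", k
                if llr <= lower:
                    return "not associated", k
            return "keep sampling", len(observations)  # third group: evidence insufficient

        print(sprt([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1]))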

  11. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
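
    A toy sketch of the hybrid idea for a single drug (all economic quantities, the prior on the treatment effect, and the power approximation are hypothetical): expected profit is the success probability, averaged over a prior on the effect, times the market value, minus the trial cost, maximized over the sample size.

        import numpy as np
        from scipy.stats import norm

        def power(n, deltas, sigma=1.0, alpha=0.025):
            # One-sided two-arm power with n patients per arm, vectorized over effects.
            se = sigma * np.sqrt(2 / n)
            return norm.sf(norm.ppf(1 - alpha) - deltas / se)

        def expected_profit(n, deltas, value=500e6, cost_per_patient=50e3):
            # Success probability averaged over the prior, times market value, minus cost.
            return np.mean(power(n, deltas)) * value - 2 * n * cost_per_patient

        deltas = np.random.default_rng(8).normal(0.25, 0.10, 10_000)  # prior on the effect
        best_n = max(range(50, 2001, 25), key=lambda n: expected_profit(n, deltas))
        print(best_n, expected_profit(best_n, deltas))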

  12. Phylogenetic heritability of geographic range size in haematophagous ectoparasites: time of divergence and variation among continents.

    Science.gov (United States)

    Krasnov, Boris R; Shenbrot, Georgy I; van der Mescht, Luther; Warburton, Elizabeth M; Khokhlova, Irina S

    2018-04-12

    To understand existence, patterns and mechanisms behind phylogenetic heritability in the geographic range size (GRS) of parasites, we measured phylogenetic signal (PS) in the sizes of both regional (within a region) and continental (within a continent) geographic ranges of fleas in five regions. We asked whether (a) GRS is phylogenetically heritable and (b) the manifestation of PS varies between regions. We also asked whether geographic variation in PS reflects the effects of the environment's spatiotemporal stability (e.g. glaciation disrupting geographic ranges) or is associated with time since divergence (accumulation differences among species over time). Support for the former hypothesis would be indicated by stronger PS in southern than in northern regions, whereas support for the latter hypothesis would be shown by stronger PS in regions with a large proportion of species belonging to the derived lineages than in regions with a large proportion of species belonging to the basal lineages. We detected significant PS in both regional and continental GRSs of fleas from Canada and in continental GRS of fleas from Mongolia. No PS was found in the GRS of fleas from Australia and Southern Africa. Venezuelan fleas demonstrated significant PS in regional GRS only. Local Indicators of Phylogenetic Association detected significant local positive autocorrelations of GRS in some clades even in regions in which PS has not been detected across the entire phylogeny. This was mainly characteristic of younger taxa.

  13. Small-sized reverberation chamber for the measurement of sound absorption

    International Nuclear Information System (INIS)

    Rey, R. del; Alba, J.; Bertó, L.; Gregori, A.

    2017-01-01

    This paper presents the design, construction, calibration and automation of a reverberation chamber for small samples. A balance was sought between reducing sample size, to reduce the manufacturing costs of materials, and finding an appropriate chamber volume, to obtain reliable values at high and mid frequencies. The small-sized reverberation chamber that was built has a volume of 1.12 m3 and allows for the testing of samples of 0.3 m2. By using diffusers to improve the degree of diffusion, and by automating measurements, we were able to improve the reliability of the results, thus reducing test errors. Several comparison studies of measurements in the small-sized reverberation chamber and a standardised reverberation chamber are shown, and good agreement can be seen between them within the range of valid frequencies. This paper presents a small laboratory for comparing samples and making decisions before manufacturing at larger sizes.
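
    For context, absorption measurement in such a chamber conventionally follows a Sabine-type relation, alpha = 0.161 * V * (1/T_sample - 1/T_empty) / S, with reverberation times in seconds; a sketch using the chamber dimensions stated above (the reverberation times are hypothetical):

        def absorption_coefficient(T_empty, T_sample, V=1.12, S=0.3):
            # Sabine-based absorption coefficient from reverberation times (in seconds)
            # measured with the chamber empty and with the sample installed.
            return 0.161 * V * (1 / T_sample - 1 / T_empty) / S

        print(absorption_coefficient(T_empty=1.9, T_sample=1.2))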

  14. Body size and geographic range do not explain long term variation in fish populations: a Bayesian phylogenetic approach to testing assembly processes in stream fish assemblages.

    Directory of Open Access Journals (Sweden)

    Stephen J Jacquemin

    We combine evolutionary biology and community ecology to test whether two species traits, body size and geographic range, explain long term variation in local scale freshwater stream fish assemblages. Body size and geographic range are expected to influence several aspects of fish ecology, via relationships with niche breadth, dispersal, and abundance. These traits are expected to scale inversely with niche breadth or current abundance, and to scale directly with dispersal potential. However, their utility to explain long term temporal patterns in local scale abundance is not known. Comparative methods employing an existing molecular phylogeny were used to incorporate evolutionary relatedness in a test for covariation of body size and geographic range with long term (1983-2010) local scale population variation of fishes in the West Fork White River (Indiana, USA). The Bayesian model incorporating phylogenetic uncertainty and correlated predictors indicated that neither body size nor geographic range explained significant variation in population fluctuations over the 28 year period. Phylogenetic signal data indicated that body size and geographic range were less similar among taxa than expected if trait evolution followed a purely random walk. We interpret this as evidence that local scale population variation may be influenced less by species-level traits such as body size or geographic range, and instead may be influenced more strongly by a taxon's local scale habitat and biotic assemblages.

  15. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

  16. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  17. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
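
    A sketch of the undersampling argument (scipy's built-in GPD fit is maximum likelihood rather than the L-moments fit used in the study, so this only illustrates the qualitative point; all distribution parameters are hypothetical): in small samples, empirical quantiles cannot reach beyond the return period set by the sample size, whereas the parametric estimate can.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(9)
        true = genpareto(c=0.1, scale=8.0)            # "true" rainfall-excess distribution
        q = 0.999                                     # beyond the reach of 60 observations

        emp, par = [], []
        for _ in range(500):
            x = true.rvs(60, random_state=rng)        # small sample, as at high temperatures
            emp.append(np.quantile(x, q))             # empirical (order-statistic) estimate
            c, loc, scale = genpareto.fit(x, floc=0)  # parametric fit (MLE here)
            par.append(genpareto(c, loc, scale).ppf(q))

        print(true.ppf(q), np.mean(emp), np.mean(par))  # the empirical estimate is biased low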

  18. Sauropod dinosaurs evolved moderately sized genomes unrelated to body size.

    Science.gov (United States)

    Organ, Chris L; Brusatte, Stephen L; Stein, Koen

    2009-12-22

    Sauropodomorph dinosaurs include the largest land animals to have ever lived, some reaching up to 10 times the mass of an African elephant. Despite their status defining the upper range for body size in land animals, it remains unknown whether sauropodomorphs evolved larger-sized genomes than non-avian theropods, their sister taxon, or whether a relationship exists between genome size and body size in dinosaurs, two questions critical for understanding broad patterns of genome evolution in dinosaurs. Here we report inferences of genome size for 10 sauropodomorph taxa. The estimates are derived from a Bayesian phylogenetic generalized least squares approach that generates posterior distributions of regression models relating genome size to osteocyte lacunae volume in extant tetrapods. We estimate that the average genome size of sauropodomorphs was 2.02 pg (range of species means: 1.77-2.21 pg), a value in the upper range of extant birds (mean = 1.42 pg, range: 0.97-2.16 pg) and near the average for extant non-avian reptiles (mean = 2.24 pg, range: 1.05-5.44 pg). The results suggest that the variation in size and architecture of genomes in extinct dinosaurs was lower than the variation found in mammals. A substantial difference in genome size separates the two major clades within dinosaurs, Ornithischia (large genomes) and Saurischia (moderate to small genomes). We find no relationship between body size and estimated genome size in extinct dinosaurs, which suggests that neutral forces did not dominate the evolution of genome size in this group.

  19. Distribution Of Natural Radioactivity On Soil Size Particles

    International Nuclear Information System (INIS)

    Tran Van Luyen; Trinh Hoai Vinh; Thai Khac Dinh

    2008-01-01

    This report presents the distribution of natural radioactivity across different soil particle sizes, taken from one soil profile. The results show that 52% to 66% of the natural radioisotopes 238U, 232Th, 226Ra and 40K are concentrated in soil particles below 40 micrometers in diameter. The remainder of the natural radioisotopes is distributed among particles of larger diameter. The findings are relevant for soil samples collected for natural radioactivity analysis by gamma and alpha spectrometry methods. (author)

  20. Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions

    International Nuclear Information System (INIS)

    John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.

    2000-01-01

    Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will most likely be exceeded at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection x-ray fluorescence analysis (TXRF) was applied alongside other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built where the particle size classes 10-2.5 μm and 2.5-1.0 μm were directly impacted on TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles, but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM 10/PM 2.5 cascade impactor was

  1. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
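
    The accuracy-in-parameter-estimation logic described above can be emulated by brute force. The sketch below is a simplification, not the authors' method: it uses Monte Carlo simulation of compound-symmetric item data to approximate the expected width of a percentile interval for coefficient alpha (a common reliability coefficient, standing in here for the paper's composite reliability coefficients) and searches for the smallest sample size meeting a target width. The item count, inter-item correlation, search grid, and width target are all illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(data):
    """Coefficient alpha for an (n subjects x k items) score matrix."""
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def expected_width(n, k=6, rho=0.4, reps=1000, level=0.95, seed=0):
    """Monte Carlo proxy for the expected interval width at sample size n:
    width of the central `level` portion of alpha's sampling distribution."""
    rng = np.random.default_rng(seed)
    cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)  # compound symmetry
    alphas = [cronbach_alpha(rng.multivariate_normal(np.zeros(k), cov, n))
              for _ in range(reps)]
    lo, hi = np.percentile(alphas, [50 * (1 - level), 50 * (1 + level)])
    return hi - lo

def plan_n(target_width=0.10, grid=range(50, 2001, 25), **kw):
    """Smallest n on the grid whose expected interval width meets the target."""
    for n in grid:
        if expected_width(n, **kw) <= target_width:
            return n
    return None

print(plan_n())  # smallest n on a 25-subject grid meeting the width target
```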

  2. Subpixel Snow-covered Area Including Differentiated Grain Size from AVIRIS Data Over the Sierra Nevada Mountain Range

    Science.gov (United States)

    Hill, R.; Calvin, W. M.; Harpold, A. A.

    2016-12-01

    Mountain snow storage is the dominant source of water for humans and ecosystems in western North America. Consequently, the spatial distribution of snow-covered area is fundamental to hydrological, ecological, and climate models. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were collected along the entire Sierra Nevada mountain range, extending from north of Lake Tahoe to south of Mt. Whitney, during the 2015 and 2016 snow-covered seasons. The AVIRIS dataset used in this experiment consists of 224 contiguous spectral channels with wavelengths ranging from 400 to 2500 nanometers at a 15-meter spatial pixel size. Data from the Sierras were acquired on four days: 2/24/15 during a very low snow year, 3/24/16 near maximum snow accumulation, and 5/12/16 and 5/18/16 during snow ablation and snow loss. Previous retrievals of subpixel snow-covered area in alpine regions used multiple snow endmembers due to the sensitivity of snow spectral reflectance to grain size. We will present a model that analyzes multiple endmembers of varying snow grain size, vegetation, rock, and soil in segmented regions along the Sierra Nevada to determine snow-cover spatial extent, snow sub-pixel fraction, and approximate grain size or melt state. The root mean squared error will provide a spectrum-wide assessment of the mixture model's goodness-of-fit. Analysis will compare snow-covered area and snow-cover depletion within 2016, and annual variation relative to 2015. Field data were also acquired on three days concurrent with the 2016 flights in the Sagehen Experimental Forest and will support ground validation of the airborne data set.

  3. Experimental determination of the steady-state charging probabilities and particle size conservation in non-radioactive and radioactive bipolar aerosol chargers in the size range of 5–40 nm

    Energy Technology Data Exchange (ETDEWEB)

    Kallinger, Peter, E-mail: peter.kallinger@univie.ac.at; Szymanski, Wladyslaw W. [University of Vienna, Faculty of Physics (Austria)

    2015-04-15

    Three bipolar aerosol chargers, an AC-corona (Electrical Ionizer 1090, MSP Corp.), a soft X-ray (Advanced Aerosol Neutralizer 3087, TSI Inc.), and an α-radiation-based {sup 241}Am charger (tapcon & analysesysteme), were investigated with respect to their charging performance for airborne nanoparticles. The charging probabilities for negatively and positively charged particles and the particle size conservation were measured in the diameter range of 5–40 nm using sucrose nanoparticles. Chargers were operated under various flow conditions in the range of 0.6–5.0 liters per minute. For particular experimental conditions, some deviations from the chosen theoretical model were found for all chargers. For very small particle sizes, the AC-corona charger showed particle losses at low flow rates and did not reach steady-state charge equilibrium at high flow rates. However, for all chargers, operating conditions were identified at which the bipolar charge equilibrium was achieved. In practice, excellent particle size conservation was found for all three chargers.

  4. Optimizing battery sizes of plug-in hybrid and extended range electric vehicles for different user types

    International Nuclear Information System (INIS)

    Redelbach, Martin; Özdemir, Enver Doruk; Friedrich, Horst E.

    2014-01-01

    There are ambitious greenhouse gas emission (GHG) targets for the manufacturers of light duty vehicles. To reduce the GHG emissions, plug-in hybrid electric vehicle (PHEV) and extended range electric vehicle (EREV) are promising powertrain technologies. However, the battery is still a very critical component due to the high production cost and heavy weight. This paper introduces a holistic approach for the optimization of the battery size of PHEVs and EREVs under German market conditions. The assessment focuses on the heterogeneity across drivers, by analyzing the impact of different driving profiles on the optimal battery setup from total cost of ownership (TCO) perspective. The results show that the battery size has a significant effect on the TCO. For an average German driver (15,000 km/a), battery capacities of 4 kWh (PHEV) and 6 kWh (EREV) would be cost optimal by 2020. However, these values vary strongly with the driving profile of the user. Moreover, the optimal battery size is also affected by external factors, e.g. electricity and fuel prices or battery production cost. Therefore, car manufacturers should develop a modular design for their batteries, which allows adapting the storage capacity to meet the individual customer requirements instead of “one size fits all”. - Highlights: • Optimization of the battery size of PHEVs and EREVs under German market conditions. • Focus on heterogeneity across drivers (e.g. mileage, trip distribution, speed). • Optimal battery size strongly depends on the driving profile and energy prices. • OEMs require a modular design for their batteries to meet individual requirements

  5. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  6. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that permutation testing with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  7. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile); Camilla, S. [Departamento de Física, Universidad Tecnológica Metropolitana (Chile)

    2016-07-07

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work, the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing {sup 137}Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 and 40 mm in height and 8 and 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg as of July 2006. The results were better for the sources with 29 mm radius, showing relative bias of less than 5%, and for the sources with 10 mm height, showing relative bias of less than 6%. In comparison with the results obtained in the work in which the method was first presented, the majority of these results show excellent agreement.

  8. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large while the cost of misclassifying an affected individual as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
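
    A minimal sketch of the non-centrality-parameter calculation described in this abstract, under a deliberately simplified error model in which a fraction gamma of labeled cases are actually unaffected (and vice versa); the paper's full parameterization via prevalence and per-class misclassification probabilities is richer than this. The genotype frequencies and error rates below are illustrative.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def power_case_control(p_case, p_ctrl, n_case, n_ctrl,
                       gamma_case=0.0, gamma_ctrl=0.0, alpha=0.05):
    """Asymptotic power of the Pearson chi-square test of independence for a
    2 x G phenotype-by-genotype table.  Misclassification is modeled by
    mixing: a fraction gamma_case of labeled cases are truly unaffected,
    and a fraction gamma_ctrl of labeled controls are truly affected."""
    p_case, p_ctrl = np.asarray(p_case, float), np.asarray(p_ctrl, float)
    q_case = (1 - gamma_case) * p_case + gamma_case * p_ctrl
    q_ctrl = (1 - gamma_ctrl) * p_ctrl + gamma_ctrl * p_case
    n = n_case + n_ctrl
    w_case, w_ctrl = n_case / n, n_ctrl / n
    p_alt = np.vstack([w_case * q_case, w_ctrl * q_ctrl])   # true cell probs
    marg = w_case * q_case + w_ctrl * q_ctrl                # genotype margins
    p_null = np.vstack([w_case * marg, w_ctrl * marg])      # independence
    ncp = n * np.sum((p_alt - p_null) ** 2 / p_null)        # non-centrality
    df = p_case.size - 1
    return 1 - ncx2.cdf(chi2.ppf(1 - alpha, df), df, ncp)

# Illustrative: a genotype-frequency shift with 10% of 'cases' misdiagnosed.
print(power_case_control([0.25, 0.50, 0.25], [0.16, 0.48, 0.36],
                         n_case=500, n_ctrl=500, gamma_case=0.10))
```

    Raising gamma_case pulls the observed case distribution toward the control distribution, shrinking the non-centrality parameter and hence the power, which is the effect the paper quantifies.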

  9. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    Science.gov (United States)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying sizes was investigated. Square samples with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples in relation to their sizes and impact energy were analyzed. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, taking into account the impact energy and sample sizes.
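
    The Fickian model invoked above has a standard series solution (Crank) for the relative moisture uptake of a plane sheet. The sketch below evaluates it for illustrative thickness and diffusivity values (the paper's material parameters are not reproduced here), without the impact-energy correction the authors introduce.

```python
import numpy as np

def fickian_uptake(t, D, h, n_terms=50):
    """Relative moisture uptake M(t)/M_inf for a plane sheet of full
    thickness h with both faces exposed (Crank's series solution of
    Fick's second law); D is the moisture diffusion coefficient."""
    t = np.atleast_1d(np.asarray(t, float))
    n = np.arange(n_terms)[:, None]                        # series index
    term = (8.0 / ((2 * n + 1) ** 2 * np.pi ** 2)
            * np.exp(-D * (2 * n + 1) ** 2 * np.pi ** 2 * t / h ** 2))
    return 1.0 - term.sum(axis=0)

# Illustrative values only: D in mm^2/day, h in mm, t in days.
print(fickian_uptake([1, 5, 20, 60, 180], D=5e-4, h=4.0))
```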

  10. Specified assurance level sampling procedure

    International Nuclear Information System (INIS)

    Willner, O.

    1980-11-01

    In the nuclear industry, design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus making available to the user a wide choice of plans, all designed to comply with a stated assurance level.
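
    The plan-selection logic described above can be reproduced directly: for each acceptance number c from 0 to 10, find the smallest sample size n such that a lot whose conforming fraction falls below the stated level passes inspection with probability at most 1 minus the assurance level. A sketch assuming a binomial (large-lot) model; the 90% conformance / 95% assurance figures are illustrative, not taken from the SAL tables.

```python
from scipy.stats import binom

def min_sample_size(c, p_limit, assurance):
    """Smallest n such that a lot with defective fraction p_limit is
    accepted (<= c defectives observed) with probability at most
    1 - assurance; binomial model, i.e. lot much larger than sample."""
    n = c + 1
    while binom.cdf(c, n, p_limit) > 1 - assurance:
        n += 1
    return n

# e.g. demonstrate with 95% assurance that at least 90% of product conforms
for c in range(11):                 # acceptance numbers 0 through 10
    print(c, min_sample_size(c, p_limit=0.10, assurance=0.95))
```

    For c = 0 this reproduces the familiar n = 29 (since 0.9**29 is just under 0.05); larger acceptance numbers trade larger samples for more forgiving acceptance criteria.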

  11. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Characterization of a stream sediment matrix material for sampling behavior in order to use it as a CRM

    International Nuclear Information System (INIS)

    Huang Donghui; Xiao Caijin; Ni Bangfa; Tian Weizhi; Zhang Yuanxun; Wang Pingsheng; Liu Cunxiong; Zhang Guiying

    2010-01-01

    The sampling behavior of multiple elements in a stream sediment matrix was studied with sample sizes spanning 9 orders of magnitude by a combination of INAA, PIXE and SR-XRF. For accurately weighable sample sizes (>1 mg), sampling uncertainties for 16 elements are better than 1% by INAA. For sample sizes that cannot be accurately weighed (<1 mg), PIXE and SR-XRF were used and the effective sample sizes were estimated. Sampling uncertainties for seven elements are better than 1% at sample sizes at the tenth-of-a-milligram level, and those for three elements are better than 10% at nanogram levels.

  13. Automatic particle-size analysis of HTGR recycle fuel

    International Nuclear Information System (INIS)

    Mack, J.E.; Pechin, W.H.

    1977-09-01

    An automatic particle-size analyzer was designed, fabricated, tested, and put into operation measuring and counting HTGR recycle fuel particles. The particle-size analyzer can be used for particles in all stages of fabrication, from the loaded, uncarbonized weak acid resin up to fully-coated Biso or Triso particles. The device handles microspheres in the range of 300 to 1000 μm at rates up to 2000 per minute, measuring the diameter of each particle to determine the size distribution of the sample, and simultaneously determining the total number of particles. 10 figures

  14. Avalanching Systems with Longer Range Connectivity: Occurrence of a Crossover Phenomenon and Multifractal Finite Size Scaling

    Directory of Open Access Journals (Sweden)

    Simone Benella

    2017-07-01

    Full Text Available Many out-of-equilibrium systems respond to external driving with nonlinear and self-similar dynamics. This near scale-invariant behavior of relaxation events has been modeled through sand pile cellular automata. However, a common feature of these models is the assumption of local connectivity, while in many real systems, we have evidence for longer range connectivity and a complex topology of the interacting structures. Here, we investigate the role that longer range connectivity might play in near scale-invariant systems, by analyzing the results of a sand pile cellular automaton model on a Newman–Watts network. The analysis clearly indicates the occurrence of a crossover phenomenon in the statistics of the relaxation events as a function of the percentage of longer range links and the breaking of simple Finite Size Scaling (FSS). The more complex nature of the dynamics in the presence of long-range connectivity is investigated in terms of multi-scaling features and analyzed by Rank-Ordered Multifractal Analysis (ROMA).
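
    The model lends itself to a compact re-implementation: a sandpile automaton whose toppling rule redistributes grains to network neighbours, run on a Newman–Watts graph so that the shortcut probability controls the fraction of longer range links. The sketch below uses networkx's generator and a small bulk dissipation probability so that avalanches terminate; all parameter values are illustrative rather than those of the paper.

```python
import random
import networkx as nx

def avalanche_sizes(n=500, k=4, p_shortcut=0.05, grains=20000,
                    dissipation=0.01, seed=1):
    """Sandpile on a Newman-Watts network: a ring lattice (each node tied
    to its k nearest neighbours) plus long-range shortcuts added with
    probability p_shortcut.  A node topples when its load reaches its
    degree, sending one grain to each neighbour; each transferred grain
    is lost with probability `dissipation` so relaxation terminates."""
    random.seed(seed)
    g = nx.newman_watts_strogatz_graph(n, k, p_shortcut, seed=seed)
    nodes = list(g)
    load = {v: 0 for v in nodes}
    sizes = []
    for _ in range(grains):
        start = random.choice(nodes)
        load[start] += 1                         # slow external driving
        unstable, size = [start], 0
        while unstable:
            v = unstable.pop()
            deg = g.degree(v)
            while load[v] >= deg:                # topple until stable
                load[v] -= deg
                size += 1
                for u in g.neighbors(v):
                    if random.random() > dissipation:
                        load[u] += 1
                        if load[u] >= g.degree(u):
                            unstable.append(u)
        if size:
            sizes.append(size)
    return sizes

sizes = avalanche_sizes()
print(len(sizes), max(sizes))   # avalanche statistics for further analysis
```

    Sweeping p_shortcut from 0 upward and fitting the avalanche-size distribution at each value is the kind of experiment in which the crossover reported above would be sought.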

  15. Direct on-strip analysis of size- and time-resolved aerosol impactor samples using laser induced fluorescence spectra excited at 263 and 351 nm

    International Nuclear Information System (INIS)

    Wang, Chuji; Pan, Yong-Le; James, Deryck; Wetmore, Alan E.; Redding, Brandon

    2014-01-01

    Highlights: • A dual wavelength UV-LIF spectra-rotating drum impactor (RDI) technique was developed. • The technique was demonstrated by direct on-strip analysis of size- and time-resolved LIF spectra of atmospheric aerosol particles. • More than 2000 LIF spectra of atmospheric aerosol particles collected over three weeks in Djibouti were obtained and assigned to various fluorescence clusters. • The LIF spectra showed size- and time-sensitivity behavior with a time resolution of 3.6 h. - Abstract: We report a novel atmospheric aerosol characterization technique, in which dual wavelength UV laser induced fluorescence (LIF) spectrometry marries an eight-stage rotating drum impactor (RDI), namely UV-LIF-RDI, to achieve size- and time-resolved analysis of aerosol particles on-strip. The UV-LIF-RDI technique measured LIF spectra via direct laser beam illumination onto the particles that were impacted on a RDI strip with a spatial resolution of 1.2 mm, equivalent to an averaged time resolution in the aerosol sampling of 3.6 h. Excited by a 263 nm or 351 nm laser, more than 2000 LIF spectra within a 3-week aerosol collection time period were obtained from the eight individual RDI strips that collected particles in eight different sizes ranging from 0.09 to 10 μm in Djibouti. Based on the known fluorescence database from atmospheric aerosols in the US, the LIF spectra obtained from the Djibouti aerosol samples were found to be dominated by fluorescence clusters 2, 5, and 8 (peaked at 330, 370, and 475 nm) when excited at 263 nm and by fluorescence clusters 1, 2, 5, and 6 (peaked at 390 and 460 nm) when excited at 351 nm. Size- and time-dependent variations of the fluorescence spectra revealed some size and time evolution behavior of organic and biological aerosols from the atmosphere in Djibouti. Moreover, this analytical technique could locate the possible sources and chemical compositions contributing to these fluorescence clusters. Advantages, limitations, and

  16. Direct on-strip analysis of size- and time-resolved aerosol impactor samples using laser induced fluorescence spectra excited at 263 and 351 nm

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chuji [U.S. Army Research Laboratory, Adelphi, MD 20783 (United States); Mississippi State University, Starkville, MS, 39759 (United States); Pan, Yong-Le, E-mail: yongle.pan.civ@mail.mil [U.S. Army Research Laboratory, Adelphi, MD 20783 (United States); James, Deryck; Wetmore, Alan E. [U.S. Army Research Laboratory, Adelphi, MD 20783 (United States); Redding, Brandon [Yale University, New Haven, CT 06510 (United States)

    2014-04-01

    Highlights: • A dual wavelength UV-LIF spectra-rotating drum impactor (RDI) technique was developed. • The technique was demonstrated by direct on-strip analysis of size- and time-resolved LIF spectra of atmospheric aerosol particles. • More than 2000 LIF spectra of atmospheric aerosol particles collected over three weeks in Djibouti were obtained and assigned to various fluorescence clusters. • The LIF spectra showed size- and time-sensitivity behavior with a time resolution of 3.6 h. - Abstract: We report a novel atmospheric aerosol characterization technique, in which dual wavelength UV laser induced fluorescence (LIF) spectrometry marries an eight-stage rotating drum impactor (RDI), namely UV-LIF-RDI, to achieve size- and time-resolved analysis of aerosol particles on-strip. The UV-LIF-RDI technique measured LIF spectra via direct laser beam illumination onto the particles that were impacted on a RDI strip with a spatial resolution of 1.2 mm, equivalent to an averaged time resolution in the aerosol sampling of 3.6 h. Excited by a 263 nm or 351 nm laser, more than 2000 LIF spectra within a 3-week aerosol collection time period were obtained from the eight individual RDI strips that collected particles in eight different sizes ranging from 0.09 to 10 μm in Djibouti. Based on the known fluorescence database from atmospheric aerosols in the US, the LIF spectra obtained from the Djibouti aerosol samples were found to be dominated by fluorescence clusters 2, 5, and 8 (peaked at 330, 370, and 475 nm) when excited at 263 nm and by fluorescence clusters 1, 2, 5, and 6 (peaked at 390 and 460 nm) when excited at 351 nm. Size- and time-dependent variations of the fluorescence spectra revealed some size and time evolution behavior of organic and biological aerosols from the atmosphere in Djibouti. Moreover, this analytical technique could locate the possible sources and chemical compositions contributing to these fluorescence clusters. Advantages, limitations, and

  17. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
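
    A sketch of the sample size arithmetic the tutorial walks through, using the design effect for the two-period, two-intervention, cross-sectional CRXO design, DE = 1 + (m - 1)·WPC - m·BPC, applied to a standard two-sample comparison of means. The formula follows from the variance of within-cluster period differences and is consistent with the abstract (BPC = 0 recovers the parallel cluster-trial penalty; BPC = WPC makes the crossover gain offset the clustering loss); the inputs are illustrative and should be checked against the tutorial itself.

```python
from math import ceil
from scipy.stats import norm

def n_individual(delta, sd, alpha=0.05, power=0.80):
    """Subjects per arm for an individually randomised comparison of means."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * ((za + zb) * sd / delta) ** 2

def crxo_design(delta, sd, m, wpc, bpc, alpha=0.05, power=0.80):
    """Two-period cross-sectional CRXO with m subjects per cluster-period:
    inflate the individually randomised sample size by the design effect
    DE = 1 + (m - 1)*WPC - m*BPC, then convert to whole clusters (each
    cluster contributes m subjects to each intervention arm)."""
    de = 1 + (m - 1) * wpc - m * bpc
    n_arm = n_individual(delta, sd, alpha, power) * de
    clusters = ceil(n_arm / m)
    clusters += clusters % 2           # even count, for two balanced sequences
    return clusters, clusters * 2 * m  # clusters, total measurements

print(crxo_design(delta=0.5, sd=2.0, m=50, wpc=0.05, bpc=0.02))
```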

  18. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED. Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  19. Outdoor stocking density in free-range laying hens: radio-frequency identification of impacts on range use.

    Science.gov (United States)

    Campbell, D L M; Hinch, G N; Dyall, T R; Warin, L; Little, B A; Lee, C

    2017-01-01

    The number and size of free-range laying hen (Gallus gallus domesticus) production systems are increasing within Australia in response to consumer demand for perceived improvement in hen welfare. However, variation in outdoor stocking density has generated consumer dissatisfaction, leading to the development of a national information standard on free-range egg labelling by the Australian Consumer Affairs Ministers. The current Australian Model Code of Practice for Domestic Poultry states a guideline of 1500 hens/ha, but no maximum density is set. Radio-frequency identification (RFID) tracking technology was used to measure daily range usage by individual ISA Brown hens housed in six small flocks (150 hens/flock - 50% of hens tagged), each with access to one of three outdoor stocking density treatments (two replicates per treatment: 2000, 10 000, 20 000 hens/ha), from 22 to 26, 27 to 31 and 32 to 36 weeks of age. There was some variation in range usage across the sampling periods, and by weeks 32 to 36 individual hens from the lowest stocking density on average used the range for longer each day. Overall, hens made substantial use of the range, with 2% of tagged hens in each treatment never venturing outdoors and a large proportion accessing the range daily (2000 hens/ha: 80.5%; 10 000 hens/ha: 66.5%; 20 000 hens/ha: 71.4%). On average, 38% to 48% of hens were seen on the range simultaneously, and hens used all available areas of all ranges. These results from experimental-sized flocks have implications for determining optimal outdoor stocking densities for commercial free-range laying hens, but further research would be needed to determine the effects of increased range usage on hen welfare.

  20. Fruit size and sampling sites affect dormancy, viability and germination of teak (Tectona grandis L.) seeds

    International Nuclear Information System (INIS)

    Akram, M.; Aftab, F.

    2016-01-01

    In the present study, fruits (drupes) were collected from Changa Manga Forest Plus Trees (CMF-PT), Changa Manga Forest Teak Stand (CMF-TS) and Punjab University Botanical Gardens (PUBG) and categorized into very large (≥17 mm dia.), large (12-16 mm dia.), medium (9-11 mm dia.) or small (6-8 mm dia.) fruit size grades. Fresh water as well as mechanical scarification and stratification were tested for breaking seed dormancy. Viability status of seeds was estimated by cutting test, X-rays and in vitro seed germination. Out of 2595 fruits from CMF-PT, 500 fruits were of the very large grade. This fruit category also had the highest individual fruit weight (0.58 g), with a higher proportion of 4-seeded fruits (5.29 percent) and fair germination potential (35.32 percent). Generally, most of the fruits were 1-seeded irrespective of size grades and sampling sites. Fresh water scarification had a strong effect on germination (44.30 percent) as compared to mechanical scarification and cold stratification after 40 days of sowing. Similarly, sampling sites and fruit size grades also had a significant influence on germination. Highest germination (82.33 percent) was obtained on MS (Murashige and Skoog) agar-solidified medium as compared to Woody Plant Medium (WPM) (69.22 percent). Seedlings from all the media were transferred to ex vitro conditions in the greenhouse, where seedlings previously raised on MS agar-solidified medium achieved the highest survival (28.6 percent) after 40 days. There was an association between the studied parameters of teak seeds and the sampling sites and fruit size. (author)

  1. Sample-size resonance, ferromagnetic resonance and magneto-permittivity resonance in multiferroic nano-BiFeO3/paraffin composites at room temperature

    International Nuclear Information System (INIS)

    Wang, Lei; Li, Zhenyu; Jiang, Jia; An, Taiyu; Qin, Hongwei; Hu, Jifan

    2017-01-01

    In the present work, we demonstrate that ferromagnetic resonance and magneto-permittivity resonance can be observed at appropriate microwave frequencies at room temperature for a multiferroic nano-BiFeO 3 /paraffin composite sample with an appropriate sample thickness (such as 2 mm). Ferromagnetic resonance originates from the room-temperature weak ferromagnetism of nano-BiFeO 3 . The observed magneto-permittivity resonance in multiferroic nano-BiFeO 3 is connected with the dynamic magnetoelectric coupling through the Dzyaloshinskii–Moriya (DM) magnetoelectric interaction or the combination of magnetostriction and piezoelectric effects. In addition, we experimentally observed the resonance of negative imaginary permeability for nano-BiFeO 3 /paraffin toroidal samples with larger sample thicknesses D=3.7 and 4.9 mm. Such resonance of negative imaginary permeability belongs to sample-size resonance. - Highlights: • Nano-BiFeO 3 /paraffin composite shows a ferromagnetic resonance. • Nano-BiFeO 3 /paraffin composite shows a magneto-permittivity resonance. • Resonance of negative imaginary permeability in BiFeO 3 is a sample-size resonance. • Nano-BiFeO 3 /paraffin composite with large thickness shows a sample-size resonance.

  2. The Effect of Sterilization on Size and Shape of Fat Globules in Model Processed Cheese Samples

    Directory of Open Access Journals (Sweden)

    B. Tremlová

    2006-01-01

    Full Text Available Model cheese samples from 4 independent productions were heat sterilized (117 °C, 20 minutes) after the melting process and packing, with the aim of prolonging their durability. The objective of the study was to assess changes in the size and shape of fat globules due to heat sterilization by using image analysis methods. The study included a selection of suitable methods for preparing mounts, taking microphotographs and making overlays for automatic processing of the photographs by an image analyser, ascertaining parameters to determine the size and shape of fat globules, and statistical analysis of the results obtained. The results of the experiment suggest that changes in the shape of fat globules due to heat sterilization are not unequivocal. We found that the size of fat globules was significantly increased (p < 0.01) due to heat sterilization (117 °C, 20 min), and the shares of small fat globules (up to 500 μm2, or 100 μm2) in the samples of heat-sterilized processed cheese were decreased. The results imply that the image analysis method is very useful when assessing the effect of a technological process on processed cheese quality.

  3. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  4. Dust generation in powders: Effect of particle size distribution

    Directory of Open Access Journals (Sweden)

    Chakravarty Somik

    2017-01-01

    Full Text Available This study explores the relationship between the bulk and grain-scale properties of powders and dust generation. A vortex shaker dustiness tester was used to evaluate 8 calcium carbonate test powders with median particle sizes ranging from 2 μm to 136 μm. Respirable aerosols released from the powder samples were characterised by their particle number and mass concentrations. All the powder samples were found to release respirable fractions of dust particles, which decrease with time. The variation of powder dustiness as a function of the particle size distribution was analysed for the powders, which were classified into three groups based on the fraction of particles within the respirable range. The trends we observe might be due to the interplay of several mechanisms, such as de-agglomeration and attrition, and their relative importance.

  5. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    Directory of Open Access Journals (Sweden)

    Manan Gupta

    Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates
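
    The core of such a simulation framework is easy to sketch: generate capture histories under independent versus group-correlated (fission-fusion-like) capture and compare a closed-population estimator against the truth. The sketch below uses the two-sample Chapman/Lincoln-Petersen estimator rather than POPAN or Robust Design, purely to show how socially correlated captures distort the estimate; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def chapman(c1, c2):
    """Chapman's bias-corrected Lincoln-Petersen estimate from two boolean
    capture vectors (one entry per individual in the true population)."""
    n1, n2, m = c1.sum(), c2.sum(), (c1 & c2).sum()
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def simulate(N=1000, p=0.3, group_size=1, reps=500):
    """group_size=1 gives independent captures; group_size>1 captures whole
    groups together, mimicking the correlated detections of a social,
    fission-fusion species."""
    n_groups = N // group_size
    estimates = []
    for _ in range(reps):
        occ = [np.repeat(rng.random(n_groups) < p, group_size)
               for _ in range(2)]
        estimates.append(chapman(*occ))
    return np.mean(estimates), np.std(estimates)

print("solitary:", simulate(group_size=1))    # mean close to N = 1000
print("social  :", simulate(group_size=20))   # biased mean, inflated spread
```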

  6. Influence of secular trends and sample size on reference equations for lung function tests.

    Science.gov (United States)

    Quanjer, P H; Stocks, J; Cole, T J; Hall, G L; Stanojevic, S

    2011-03-01

    The aim of our study was to determine the contribution of secular trends and sample size to lung function reference equations, and to establish the number of local subjects required to validate published reference values. 30 spirometry datasets collected between 1978 and 2009 provided data on healthy, white subjects: 19,291 males and 23,741 females aged 2.5-95 yrs. The best fits for forced expiratory volume in 1 s (FEV(1)), forced vital capacity (FVC) and FEV(1)/FVC as functions of age, height and sex were derived from the entire dataset using GAMLSS. Mean z-scores were calculated for individual datasets to determine inter-centre differences. This was repeated by subdividing one large dataset (3,683 males and 4,759 females) into 36 smaller subsets (comprising 18-227 individuals) to preclude differences due to population/technique. No secular trends were observed, and differences between datasets comprising >1,000 subjects were small (maximum difference in FEV(1) and FVC from the overall mean: 0.30 to -0.22 z-scores). Subdividing one large dataset into smaller subsets reproduced the above sample size-related differences and revealed that at least 150 males and 150 females would be necessary to validate reference values and avoid spurious differences due to sampling error. Use of local controls to validate reference equations will rarely be practical due to the numbers required. Reference equations derived from large or collated datasets are recommended.

  7. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    International Nuclear Information System (INIS)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-01-01

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumors, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
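
    The competing-risks bookkeeping that underlies both modeling approaches is the cumulative incidence function. The sketch below implements the nonparametric Aalen-Johansen estimator of the cumulative incidence for one event type (a naive 1 minus Kaplan-Meier per cause would overstate it); it is a generic illustration, not the paper's regression models, and the toy data are invented.

```python
import numpy as np

def cumulative_incidence(times, events, cause):
    """Aalen-Johansen cumulative incidence function for one event type.
    times: event or censoring times; events: 0 = censored, 1, 2, ... = cause.
    Returns the distinct event times and the CIF evaluated there."""
    times, events = np.asarray(times, float), np.asarray(events)
    surv, cif = 1.0, 0.0            # all-cause survival just before t; CIF
    out_t, out_c = [], []
    for t in np.unique(times[events > 0]):
        at_risk = np.sum(times >= t)
        d_cause = np.sum((times == t) & (events == cause))
        d_all = np.sum((times == t) & (events > 0))
        cif += surv * d_cause / at_risk      # cause-specific hazard increment
        surv *= 1 - d_all / at_risk          # update all-cause survival
        out_t.append(t)
        out_c.append(cif)
    return np.array(out_t), np.array(out_c)

# Invented toy data: cause 1 = radiotherapy after progression, cause 2 = a
# competing event (e.g. no radiotherapy after progression), 0 = censored.
t = [2, 3, 3, 5, 7, 8, 9, 12, 14, 15]
e = [1, 2, 1, 0, 1, 2, 0, 1, 2, 0]
print(cumulative_incidence(t, e, cause=1))
```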

  8. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) (pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
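
    The arithmetic behind these reductions is the variance of a mean of k correlated measurements: averaging k repeats with intra-subject correlation rho shrinks the effective outcome variance by (1 + (k - 1)·rho)/k, and the required sample size shrinks by the same factor. In the sketch below, rho = 0.7 is an illustrative value chosen because it roughly reproduces the 15-27% reductions reported, not an estimate from the trial.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, k=1, rho=0.7, alpha=0.05, power=0.80):
    """Subjects per arm for a two-group comparison of means when the
    outcome is the average of k repeated measures with intra-subject
    correlation rho: Var(mean of k) = sd**2 * (1 + (k - 1)*rho) / k."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var_factor = (1 + (k - 1) * rho) / k
    return ceil(2 * (za + zb) ** 2 * sd ** 2 * var_factor / delta ** 2)

base = n_per_group(delta=1.0, sd=2.5, k=1)
for k in range(1, 6):
    n = n_per_group(delta=1.0, sd=2.5, k=k)
    print(k, n, f"{1 - n / base:.0%} smaller")
```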

  9. Sampling considerations when analyzing micrometric-sized particles in a liquid jet using laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Faye, C.B.; Amodeo, T.; Fréjafon, E. [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France); Delepine-Gilon, N. [Institut des Sciences Analytiques, 5 rue de la Doua, 69100 Villeurbanne (France); Dutouquet, C., E-mail: christophe.dutouquet@ineris.fr [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France)

    2014-01-01

    Pollution of water is a matter of concern all over the earth. Particles are known to play an important role in the transportation of pollutants in this medium. In addition, the emergence of new materials such as NOAA (Nano-Objects, their Aggregates and their Agglomerates) emphasizes the need to develop adapted instruments for their detection. Surveillance of pollutants in particulate form in the waste waters of industries involved in nanoparticle manufacturing and processing is a telling example of possible applications of such instrumental development. The LIBS (laser-induced breakdown spectroscopy) technique coupled with a liquid jet as the sampling mode for suspensions was deemed a potential candidate for on-line and real-time monitoring. With the final aim of obtaining the best detection limits, the interaction of nanosecond laser pulses with the liquid jet was examined. The evolution of the volume sampled by laser pulses was estimated as a function of the laser energy, applying conditional analysis when analyzing a suspension of micrometric-sized particles of borosilicate glass. An estimation of the sampled depth was made. Along with the estimation of the sampled volume, the evolution of the SNR (signal to noise ratio) as a function of the laser energy was investigated as well. Eventually, the laser energy and the corresponding fluence optimizing both the sampling volume and the SNR were determined. The obtained results highlight intrinsic limitations of the liquid jet sampling mode when using 532 nm nanosecond laser pulses with suspensions. - Highlights: • Micrometric-sized particles in suspensions are analyzed using LIBS and a liquid jet. • The evolution of the sampling volume is estimated as a function of laser energy. • The sampling volume happens to saturate beyond a certain laser fluence. • Its value was found to be much lower than the beam diameter times the jet thickness. • Particles proved not to be entirely vaporized.

  10. What Makes Jessica Rabbit Sexy? Contrasting Roles of Waist and Hip Size

    Directory of Open Access Journals (Sweden)

    William D. Lassek

    2016-04-01

    Full Text Available While waist/hip ratio (WHR) and body mass index (BMI) have been the most studied putative determinants of female bodily attractiveness, BMI is not directly observable, and few studies have considered the independent roles of waist and hip size. The range of attractiveness in many studies is also quite limited, with none of the stimuli rated as highly attractive. To explore the relationships of these anthropometric parameters with attractiveness across a much broader spectrum of attractiveness, we employ three quite different samples: a large sample of college women, a larger sample of Playboy Playmates of the Month than has previously been examined, and a large pool of imaginary women (e.g., cartoon, video game, and graphic novel characters) chosen as the "most attractive" by university students. Within-sample and between-sample comparisons agree in indicating that waist size is the key determinant of female bodily attractiveness and accounts for the relationship of both BMI and WHR with attractiveness, with between-sample effect sizes of 2.4–3.2. In contrast, hip size is much more similar across attractiveness groups and is unrelated to attractiveness when BMI or waist size is controlled.

  11. Tamanho de amostra para caracterização morfológica de frutos de pimenteira Sample size for morphological characterization of pepper fruits

    Directory of Open Access Journals (Sweden)

    AR Silva

    2011-03-01

    Full Text Available The objective of this study was to determine, by means of a subsample simulation technique, the appropriate sample size for the characterization of morphological fruit traits of eight accessions (varieties) of four pepper species (Capsicum spp.) cultivated in an experimental area of the Universidade Federal da Paraíba (UFPB). Reduced sample sizes, ranging from 3 to 29 fruits, were analyzed, with 100 simulated samples for each size in a sampling process with data replacement. For each variable studied, analysis of variance was performed on the minimum number of fruits per sample that represented the reference sample (30 fruits), in a completely randomized design with two replicates, where each data point was the first number of fruits in the simulated sample that showed no value outside the confidence interval of the reference sample and remained so up to the last simulated subsample. The simulation technique made it possible, with the same precision as the 30-fruit reference sample, to reduce the sample size by around 50%, depending on the morphological variable, with no differences among the accessions.

  12. Mechanobiological induction of long-range contractility by diffusing biomolecules and size scaling in cell assemblies

    Science.gov (United States)

    Dasbiswas, K.; Alster, E.; Safran, S. A.

    2016-06-01

    Mechanobiological studies of cell assemblies have generally focused on cells that are, in principle, identical. Here we predict theoretically the effect on cells in culture of locally introduced biochemical signals that diffuse and locally induce cytoskeletal contractility which is initially small. In steady-state, both the concentration profile of the signaling molecule as well as the contractility profile of the cell assembly are inhomogeneous, with a characteristic length that can be of the order of the system size. The long-range nature of this state originates in the elastic interactions of contractile cells (similar to long-range “macroscopic modes” in non-living elastic inclusions) and the non-linear diffusion of the signaling molecules, here termed mechanogens. We suggest model experiments on cell assemblies on substrates that can test the theory as a prelude to its applicability in embryo development where spatial gradients of morphogens initiate cellular development.

  13. [Altitudinal patterns of species richness and species range size of vascular plants in Xiaolongshan Reserve of Qinling Mountain: a test of Rapoport's rule].

    Science.gov (United States)

    Zheng, Zhi; Gong, Da-Jie; Sun, Cheng-Xiang; Li, Xiao-Jun; Li, Wan-Jiang

    2014-09-01

    Altitudinal patterns of species richness and species range size, and their underlying mechanisms, have long been a key topic in biogeography and biodiversity research. Rapoport's rule states that species richness gradually declines with increasing altitude while species ranges become larger. Using an altitude-distribution database from Xiaolongshan Reserve, this study explored the altitudinal patterns of vascular plant species richness and species range in Qinling Xiaolongshan Reserve, examined the relationships between species richness and distributional mid-points in altitudinal bands for different floras, taxonomic units and growth forms, and tested Rapoport's rule by using Stevens' method, Pagel's method, the mid-point method and the cross-species method. The results showed that the species richness of vascular plants, except small-range species, followed a unimodal pattern along the altitude in Qinling Xiaolongshan Reserve, and the highest proportions of small-range species were found at the lower and the higher altitudinal bands. Owing to differences among assemblages and examination methods, the relationships between species range sizes and altitude differed. Higher taxonomic units were more likely to support Rapoport's rule, which was related to the niche differences among the taxonomic units. The mean species range size of angiosperms showed a unimodal pattern along the altitude, while those of the gymnosperms and pteridophytes showed no clear pattern. The mean species range size of climbers widened with increasing altitude, while that of shrubs, which can adapt to different environmental conditions, was not sensitive to changes in altitude. Pagel's method was more likely to support Rapoport's rule, followed by Stevens' method. On the contrary, owing to the mid-domain effect, the test using the mid-point method showed that the mean species range size varied in a unimodal pattern.
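
    Stevens' method, one of the four tests applied above, reduces to a simple computation: within each altitudinal band, average the elevational range sizes of all species occurring there, then inspect how that average changes with altitude. A sketch on invented species ranges; under Rapoport's rule the resulting trend should increase with altitude.

```python
import numpy as np

def stevens_mean_range(species_ranges, band_edges):
    """Stevens' method: mean elevational range size of the species present
    in each altitudinal band.  species_ranges is a list of (low, high)
    elevational limits per species; band_edges are increasing boundaries."""
    lows, highs = np.array(species_ranges, float).T
    sizes = highs - lows
    means = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        present = (lows < hi) & (highs > lo)   # species overlapping the band
        means.append(sizes[present].mean() if present.any() else np.nan)
    return np.array(means)

# Invented flora in which higher-elevation species have wider ranges.
rng = np.random.default_rng(0)
lows = rng.uniform(500, 2500, 200)
spans = rng.uniform(50, 800, 200) * (lows / 2500 + 0.5)
ranges = list(zip(lows, lows + spans))
print(stevens_mean_range(ranges, np.arange(500, 3001, 500)))
```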

  14. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    Directory of Open Access Journals (Sweden)

    Peter Feist

    2015-02-01

    Full Text Available Proteins regulate many cellular functions, and analyzing the presence and abundance of proteins in biological samples is a central focus in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed.

  15. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    Science.gov (United States)

    Feist, Peter; Hummon, Amanda B.

    2015-01-01

    Proteins regulate many cellular functions, and analyzing the presence and abundance of proteins in biological samples is a central focus in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed. PMID:25664860

  16. Proteomic challenges: sample preparation techniques for microgram-quantity protein analysis from biological samples.

    Science.gov (United States)

    Feist, Peter; Hummon, Amanda B

    2015-02-05

    Proteins regulate many cellular functions and analyzing the presence and abundance of proteins in biological samples are central focuses in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed.

  17. Sampling and instrumentation requirements for long-range D and D activities at INEL

    International Nuclear Information System (INIS)

    Ahlquist, A.J.

    1985-01-01

    Assistance was requested to help determine sampling and instrumentation requirements for the long-range decontamination and decommissioning activities at the Idaho National Engineering Laboratory. Through a combination of literature review, visits to other DOE contractors, and a determination of the needs for the INEL program, a draft report has been prepared that is now under review. The final report should be completed in FY 84

  18. Droplet Size-Aware and Error-Correcting Sample Preparation Using Micro-Electrode-Dot-Array Digital Microfluidic Biochips.

    Science.gov (United States)

    Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi

    2017-12-01

    Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
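
    The (1:1) mixing/splitting model mentioned above can be illustrated with the standard bit-serial dilution scheme: a target concentration is rounded to d binary digits and built up by repeatedly mixing the current droplet 1:1 with raw sample or buffer. A minimal Python sketch (a textbook illustration, not the authors' MEDA algorithm):

```python
def one_to_one_dilution(target, d=6):
    """Bit-serial (1:1) dilution: approximate `target` (between 0 and 1) as
    k / 2**d, then build it by mixing the current droplet with raw sample
    (concentration 1.0) or buffer (0.0) and splitting off half each step.
    The achieved concentration is within about 2**-d of the target."""
    k = round(target * (1 << d))
    conc, n_ops = None, 0
    for i in range(d):                        # scan the bits of k, LSB first
        ingredient = 1.0 if (k >> i) & 1 else 0.0
        if conc is None:
            conc = ingredient                 # first droplet, nothing to mix yet
        else:
            conc = (conc + ingredient) / 2.0  # one (1:1) mix-then-split step
            n_ops += 1
    return conc, n_ops

conc, n_ops = one_to_one_dilution(0.3)
print(f"achieved {conc:.4f} with {n_ops} (1:1) operations")  # 0.3125, 5 ops
```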

  19. Influence of crystallite size on the magnetic properties of Fe{sub 3}O{sub 4} nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyay, Sneha [Dept of Applied Science, Symbiosis Institute of Technology, SIU, Lavale, Mulshi, Pune 412 115 (India); Parekh, Kinnari [K C Patel R & D Center, Charotar University of Science & Technology, Changa 388421 (India); Pandey, Brajesh, E-mail: bpandey@gmail.com [Dept of Applied Science, Symbiosis Institute of Technology, SIU, Lavale, Mulshi, Pune 412 115 (India)

    2016-09-05

    Structural and magnetic properties of chemically synthesized magnetite nanoparticles have been studied using X-ray diffraction, transmission electron microscopy and a vibrating sample magnetometer. Magnetically, the synthesized nanoparticles range from the superparamagnetic to the multi-domain state. The average crystallite size of the synthesized magnetite nanoparticles was determined using X-ray line broadening and was found to be in the range of 9–53 nm. On the other hand, the TEM images show sizes ranging between 7.9 and 200 nm, with a transition from spherical superparamagnetic particles to faceted cubic multi-domain particles. Magnetic parameters of the samples show a strong dependence on average crystallite size. The ratio of the coercive field at 20 K to that at 300 K (H{sub c}(20 K)/H{sub c}(300 K)) increased sharply with decreasing crystallite size. A critical crystallite diameter of order 36 nm may be inferred as the boundary of the single-domain to multi-domain transition. Zero-field-cooled (ZFC) and field-cooled (FC) measurements at a 10 Oe field validate the same for the smallest and largest samples, confirming that the anisotropy energy is greater than the thermal energy up to 300 K. For the 9 nm sample, a broad ZFC curve overlapping the FC curve is observed just at 300 K, indicating the effect of a strong dipolar field in the superparamagnetic system. - Graphical abstract: We present our study on magnetite nanoparticles. We observed that the synthesized nanoparticles behave like single-domain particles in the range of 14–36 nm. They show superparamagnetic properties if the particles are smaller than 14 nm and multi-domain properties when the particles are bigger than 36 nm. - Highlights: • Magnetite nanoparticles have been synthesized using a chemical precipitation method. • Smaller magnetite particles, below 14 nm in size, are in a superparamagnetic state. • Bigger particles show multi-domain character. • Magnetite in the size range 14–36 nm is in a single-domain state.
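
    The crystallite sizes quoted above, obtained from X-ray line broadening, are conventionally computed with the Scherrer equation, D = Kλ/(β·cosθ). A minimal sketch of that calculation (illustrative only; the Cu Kα wavelength and a shape factor K = 0.9 are assumptions, and this is not the authors' code):

```python
import numpy as np

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size from X-ray line broadening (Scherrer equation).
    `fwhm_deg` is the peak width in degrees 2-theta; Cu K-alpha assumed."""
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle in radians
    beta = np.radians(fwhm_deg)               # line broadening in radians
    return K * wavelength_nm / (beta * np.cos(theta))

# e.g. a 0.9-degree-wide magnetite (311) peak near 2-theta = 35.5 degrees:
print(f"{scherrer_size_nm(35.5, 0.9):.1f} nm")   # ~9 nm
```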

  20. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety.

    Science.gov (United States)

    Kikuchi, Takashi; Gittins, John

    2009-08-15

    It is necessary, in calculating sample size, to achieve the best balance between the cost of a clinical trial and the possible benefits from a new treatment. Gittins and Pezeshk developed an innovative (behavioral Bayes) approach, which assumes that the number of users is an increasing function of the difference in performance between the new treatment and the standard treatment: the better a new treatment, the more patients will want to switch to it. The optimal sample size is calculated in this framework. The BeBay approach takes account of three decision-makers: a pharmaceutical company, the health authority and medical advisers. Kikuchi, Pezeshk and Gittins generalized this approach by introducing a logistic benefit function, by extending it to the more usual unpaired case, and by allowing unknown variance. The expected net benefit in this model is based on the efficacy of the new drug but does not take account of the incidence of adverse reactions. The present paper extends the model to include the costs of treating adverse reactions and focuses on societal cost-effectiveness as the criterion for determining sample size. The main application is likely to be to phase III clinical trials, for which the primary outcome is to compare the costs and benefits of a new drug with a standard drug in relation to national health-care. Copyright 2009 John Wiley & Sons, Ltd.
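
    The flavor of the approach can be conveyed with a toy calculation: choose the n that maximizes an expected net benefit in which adoption of the new drug grows logistically with its estimated advantage. Every number and functional form below is an invented assumption for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_net_benefit(n, prior_mean=0.2, prior_sd=0.15, sigma=1.0,
                         cost_per_patient=5e3, market=1e5,
                         value_per_user=50.0, sims=20000):
    """Toy behavioral-Bayes criterion for n patients per arm: expected
    adoption-weighted benefit minus trial cost, where adoption is a
    logistic function of the *estimated* treatment effect."""
    true_effect = rng.normal(prior_mean, prior_sd, sims)
    noise_sd = sigma * np.sqrt(2.0 / n)              # s.e. of the estimate
    est_effect = true_effect + rng.normal(0.0, noise_sd, sims)
    adoption = 1.0 / (1.0 + np.exp(-8.0 * (est_effect - 0.1)))
    gain = market * value_per_user * np.mean(adoption * np.maximum(true_effect, 0.0))
    return gain - cost_per_patient * 2 * n

best_n = max(range(25, 1001, 25), key=expected_net_benefit)
print("optimal n per arm under these toy assumptions:", best_n)
```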

  1. Experimental investigation about attachment processes of atoms and ions in the size range < 0.1 μm

    International Nuclear Information System (INIS)

    Porstendoerfer, J.; Mercer, T.T.

    1977-01-01

    Results of an investigation of the attachment process of atoms and ions in the size range between 0.009 and 4 μm on a particle or droplet surface are presented. It is again shown that the experimental values are adequately predicted by the diffusion attachment theory under gas-kinetic considerations, if the sticking probability of Rn and Tn decay products is S = 1. 12 references

  2. The Size Spectrum as Tool for Analyzing Marine Plastic Pollution

    KAUST Repository

    Martí, E.

    2016-12-02

    Marine plastic debris spans over six orders of magnitude in linear size, from microns to meters. The broad range of plastic sizes mainly arises from the continuous photodegradation and fragmentation affecting the plastic objects. Interestingly, this time-dependent process links, to some degree, the size to the age of the debris. The variety of plastic sizes gives marine biota the possibility to interact with, and possibly take up, microplastics through numerous pathways. Physical processes such as sinking and wind-induced transport, or the chemical adsorption of contaminants, are also closely related to the size and shape of the plastic items. Likewise, available sampling techniques should be considered as partial views of the marine plastic size range. This being so, and given that size is one of the most easily measurable plastic traits, the size spectrum appears as an ideal frame to arrange, integrate, and analyze plastic data of diverse nature. In this work, we examined tens of thousands of plastic items sampled from across the world with the aim of (1) developing and standardizing the size-spectrum tool to study marine plastics, and (2) assembling a global plastic size spectrum (GPSS) database, relating individual size measurements to abundance, color (129 tones), polymer type, and category (rigid fragments, films, threads, foam, pellets, and microbeads). Using the GPSS database, we show for instance the dependence of plastic composition on item size, with a high diversity of categories for items larger than 1 cm and a clear dominance (~90%) of hard fragments below that size, except for the size interval corresponding to microbeads (around 0.5 mm). The GPSS database depicts a comprehensive size-based framework for analyzing marine plastic pollution, enabling the comparison of size-related studies and the testing of hypotheses.
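
    A size spectrum of the kind described is typically assembled by binning item sizes on a logarithmic axis and normalizing counts by bin width; a minimal sketch (an illustrative construction, not the GPSS code):

```python
import numpy as np

def size_spectrum(sizes_mm, n_bins=20):
    """Log-binned size spectrum: counts per bin divided by bin width,
    giving an abundance density comparable across unevenly wide bins."""
    edges = np.logspace(np.log10(min(sizes_mm)), np.log10(max(sizes_mm)),
                        n_bins + 1)
    counts, _ = np.histogram(sizes_mm, bins=edges)
    return edges, counts / np.diff(edges)   # abundance density per mm

# usage with made-up item sizes spanning microbeads to meter-scale debris:
edges, density = size_spectrum([0.4, 0.5, 1.2, 4.0, 15.0, 120.0, 900.0])
```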

  3. Validity of Dynamic Light Scattering Method to Analyze a Range of Gold and Copper Nanoparticle Sizes Attained by Solids Laser Ablation in Liquid

    Directory of Open Access Journals (Sweden)

    Yu. V. Golubenko

    2014-01-01

    Full Text Available Metal nanoparticles possess a whole series of size-related features that give rise to unusual electromagnetic and optical properties untypical of bulk particulates. A widespread method of producing nanoparticles by means of laser radiation is pulsed laser ablation of solid targets in a liquid medium. By varying the parameters of the laser radiation, such as wavelength, energy density, etc., one can control the size and shape of the resulting particles. Nanoparticles of iron, copper, silver, silicon, magnesium, gold and zinc show the greatest promise for application in medicine. The subject matter of this work is nanoparticles of copper and gold produced by laser ablation of solid targets in a liquid medium. The aim of the study presented in the article is to assess the applicability of the dynamic light scattering (DLS) method for determining the range of nanoparticle sizes in colloidal solutions. The second harmonic of an Nd:YAG laser, with a wavelength of 532 nm, was chosen for studying the laser ablation process. Special attention is paid to the description of the experimental technique for producing the nanoparticles. Ethanol and distilled water were used as liquid media. The resulting colloidal systems were studied by the following methods: DLS, transmission electron microscopy (TEM) and scanning electron microscopy (SEM). Measurements by the DLS method showed that the colloidal solution of copper in ethanol is a stable system; the copper nanoparticle size reaches 200 nm and remains at this size for some time. The system of gold nanoparticles is polydisperse, unstable and spans a wide range of sizes, a fact confirmed by images obtained with the TEM FEI Tecnai G2F20 + GIF and the SEM Helios NanoLab 660. The gold nanoparticle sizes range from 5 to 60 nm. It has thus been shown that the DLS method is applicable for determining the range of nanoparticle sizes in colloidal solutions.
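
    DLS converts a measured translational diffusion coefficient into a hydrodynamic diameter through the Stokes-Einstein relation, d_H = k_B·T/(3πηD). A minimal sketch of that standard conversion (not tied to the particular instrument used in the study):

```python
import math

def hydrodynamic_diameter_nm(D_m2_per_s, temp_K=298.15, viscosity_Pa_s=8.9e-4):
    """Stokes-Einstein: hydrodynamic diameter from the diffusion
    coefficient measured by DLS (default viscosity: water at 25 C)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    d_m = k_B * temp_K / (3.0 * math.pi * viscosity_Pa_s * D_m2_per_s)
    return d_m * 1e9

# e.g. D = 2.2e-12 m^2/s corresponds to roughly 220 nm:
print(f"{hydrodynamic_diameter_nm(2.2e-12):.0f} nm")
```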

  4. Ion generation and CPC detection efficiency studies in sub 3-nm size range

    Energy Technology Data Exchange (ETDEWEB)

    Kangasluoma, J.; Junninen, H.; Sipilae, M.; Kulmala, M.; Petaejae, T. [Department of Physics, P.O. Box 64, 00014, University of Helsinki, Helsinki (Finland); Lehtipalo, K. [Department of Physics, P.O. Box 64, 00014, University of Helsinki, Helsinki (Finland); Airmodus Ltd., Finland, Gustaf Haellstroemin katu 2 A, 00560 Helsinki (Finland); Mikkilae, J.; Vanhanen, J. [Airmodus Ltd., Finland, Gustaf Haellstroemin katu 2 A, 00560 Helsinki (Finland); Attoui, M. [University Paris Est Creteil, University Paris-Diderot, LISA, UMR CNRS 7583 (France); Worsnop, D. [Department of Physics, P.O. Box 64, 00014, University of Helsinki, Helsinki (Finland) and Aerodyne Research Inc., Billerica, MA (United States)

    2013-05-24

    We studied the chemical composition of commonly used condensation particle counter calibration ions with a mass spectrometer and found that, in our calibration setup, negatively charged ammonium sulphate, sodium chloride and tungsten oxide are the least contaminated, whereas silver in both positive and negative mode, and the three compounds mentioned earlier in positive mode, are contaminated with organics. We report cut-off diameters for the Airmodus Particle Size Magnifier (PSM) of 1.1, 1.3, 1.4, 1.6 and 1.6-1.8 nm for negative sodium chloride, ammonium sulphate, tungsten oxide, silver and positive organics, respectively. To study the effect of sample relative humidity on the detection efficiency of the PSM, we used different humidities in the differential mobility analyzer sheath flow and found that the detection efficiency of the PSM increases with increasing relative humidity.

  5. Ion generation and CPC detection efficiency studies in sub 3-nm size range

    International Nuclear Information System (INIS)

    Kangasluoma, J.; Junninen, H.; Sipilä, M.; Kulmala, M.; Petäjä, T.; Lehtipalo, K.; Mikkilä, J.; Vanhanen, J.; Attoui, M.; Worsnop, D.

    2013-01-01

    We studied the chemical composition of commonly used condensation particle counter calibration ions with a mass spectrometer and found that, in our calibration setup, negatively charged ammonium sulphate, sodium chloride and tungsten oxide are the least contaminated, whereas silver in both positive and negative mode, and the three compounds mentioned earlier in positive mode, are contaminated with organics. We report cut-off diameters for the Airmodus Particle Size Magnifier (PSM) of 1.1, 1.3, 1.4, 1.6 and 1.6-1.8 nm for negative sodium chloride, ammonium sulphate, tungsten oxide, silver and positive organics, respectively. To study the effect of sample relative humidity on the detection efficiency of the PSM, we used different humidities in the differential mobility analyzer sheath flow and found that the detection efficiency of the PSM increases with increasing relative humidity.

  6. Sonographically guided fine-needle biopsy of thyroid nodules: the effects of nodule characteristics, sampling technique, and needle size on the adequacy of cytological material

    International Nuclear Information System (INIS)

    Degirmenci, B.; Haktanir, A.; Albayrak, R.; Acar, M.; Sahin, D.A.; Sahin, O.; Yucel, A.; Caliskan, G.

    2007-01-01

    Aim: To evaluate the effects of the sonographic characteristics of thyroid nodules, the diameter of the needle used for sampling, and the sampling technique on obtaining sufficient cytological material (SCM). Materials and methods: We performed sonography-guided fine-needle biopsy (FNB) in 232 solid thyroid nodules. Size, echogenicity, vascularity, and localization of all nodules were evaluated by Doppler sonography before the biopsy. Needles of size 20, 22, and 24 G were used for biopsy. The biopsy specimen was acquired using two different methods after localisation. In the first method, the needle tip was advanced into the nodule in various positions using a to-and-fro motion whilst in the nodule, along with concurrent aspiration. In the second method, the needle was advanced vigorously using a to-and-fro motion within the nodule whilst being rotated on its axis (capillary-action technique). Results: The mean nodule size was 2.1 ± 1.3 cm (range 0.4-7.2 cm). SCM was acquired from 154 (66.4%) nodules by sonography-guided FNB. In 78 (33.6%) nodules, SCM could not be collected. There was no significant difference in SCM between nodules of different echogenicity and vascularity. Regarding needle size, the lowest rate of SCM was obtained using 20 G needles (56.6%) and the highest rate of adequate material was obtained using 24 G needles (82.5%; p = 0.001). The SCM rate was 76.9% with the capillary-action technique versus 49.4% with the aspiration technique (p < 0.001). Conclusion: Selecting finer needles (24-25 G) for sonography-guided FNB of thyroid nodules and using the capillary-action technique decreased the rate of inadequate material in cytological examination.

  7. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

    Science.gov (United States)

    Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

    2017-10-01

    Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first one is based on powder metallurgy, using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable for generating a significant grain size contrast and for controlling this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

  8. Measurement of peak impact loads differ between accelerometers - Effects of system operating range and sampling rate.

    Science.gov (United States)

    Ziebart, Christina; Giangregorio, Lora M; Gibbs, Jenna C; Levine, Iris C; Tung, James; Laing, Andrew C

    2017-06-14

    A wide variety of accelerometer systems, with differing sensor characteristics, are used to detect impact loading during physical activities. The study examined the effects of system characteristics on measured peak impact loading during a variety of activities by comparing outputs from three separate accelerometer systems, and by assessing the influence of simulated reductions in operating range and sampling rate. Twelve healthy young adults performed seven tasks (vertical jump, box drop, heel drop, and bilateral single leg and lateral jumps) while simultaneously wearing three tri-axial accelerometers including a criterion standard laboratory-grade unit (Endevco 7267A) and two systems primarily used for activity-monitoring (ActiGraph GT3X+, GCDC X6-2mini). Peak acceleration (gmax) was compared across accelerometers, and errors resulting from down-sampling (from 640 to 100 Hz) and range-limiting (to ±6 g) the criterion standard output were characterized. The Actigraph activity-monitoring accelerometer underestimated gmax by an average of 30.2%; underestimation by the X6-2mini was not significant. Underestimation error was greater for tasks with greater impact magnitudes. gmax was underestimated when the criterion standard signal was down-sampled (by an average of 11%), range limited (by 11%), and by combined down-sampling and range-limiting (by 18%). These effects explained 89% of the variance in gmax error for the Actigraph system. This study illustrates that both the type and intensity of activity should be considered when selecting an accelerometer for characterizing impact events. In addition, caution may be warranted when comparing impact magnitudes from studies that use different accelerometers, and when comparing accelerometer outputs to osteogenic impact thresholds proposed in the literature. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
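
    Both effects are easy to reproduce numerically: decimating a short impact pulse can miss its peak, and clipping at the operating range truncates it. A short, illustrative simulation (the pulse shape and numbers are invented, not the study's data):

```python
import numpy as np

fs_hi, fs_lo, g_limit = 640, 100, 6.0           # Hz, Hz, operating range in g

t = np.arange(0.0, 0.5, 1.0 / fs_hi)
impact = 9.0 * np.exp(-(((t - 0.25) / 0.004) ** 2))   # ~9 g, ~4 ms pulse

peak_true = impact.max()
peak_decimated = impact[:: fs_hi // fs_lo].max()      # naive down-sampling
peak_clipped = np.clip(impact, -g_limit, g_limit).max()

print(f"true peak:         {peak_true:.2f} g")
print(f"sampled at 100 Hz: {peak_decimated:.2f} g")   # peak falls between samples
print(f"clipped at +/-6 g: {peak_clipped:.2f} g")
```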

  9. What big size you have! Using effect sizes to determine the impact of public health nursing interventions.

    Science.gov (United States)

    Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A

    2013-01-01

    The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
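
    For reference, a minimal implementation of the admission-to-discharge effect size (one common paired-data convention for Cohen's d, mean change divided by the standard deviation of the change; conventions vary, and this is not necessarily the authors' exact formula):

```python
import numpy as np

def cohens_d_paired(admission, discharge):
    """Cohen's d for paired admission/discharge scores:
    mean change divided by the standard deviation of the change."""
    diff = np.asarray(discharge, float) - np.asarray(admission, float)
    return diff.mean() / diff.std(ddof=1)

# usage with made-up 1-5 Omaha System ratings for five clients:
print(cohens_d_paired([2, 3, 2, 4, 3], [3, 4, 3, 4, 4]))
```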

  10. Expected proton signal sizes in the PRaVDA Range Telescope for proton Computed Tomography

    International Nuclear Information System (INIS)

    Price, T.; Parker, D.J.; Green, S.; Esposito, M.; Waltham, C.; Allinson, N.M.; Poludniowski, G.; Evans, P.; Taylor, J.; Manolopoulos, S.; Anaxagoras, T.; Nieto-Camero, J.

    2015-01-01

    Proton radiotherapy has demonstrated benefits in the treatment of certain cancers. Accurate measurements of the proton stopping powers in body tissues are required in order to fully optimise the delivery of such treatments. The PRaVDA Consortium is developing a novel, fully solid state device to measure these stopping powers. The PRaVDA Range Telescope (RT) uses a stack of 24 CMOS Active Pixel Sensors (APS) to measure the residual proton energy after the patient. We present here the ability of the CMOS sensors to detect changes in the signal sizes as the proton traverses the RT, compare the results with theory, and discuss the implications of these results on the reconstruction of proton tracks

  11. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

    Science.gov (United States)

    Fraley, R. Chris; Vazire, Simine

    2014-01-01

    The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
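
    The ~50% power figure can be reproduced approximately with a standard power calculation. The sketch below is my own illustration (it assumes a typical effect of r ≈ 0.21 converted to Cohen's d, not the authors' exact procedure):

```python
import math
from statsmodels.stats.power import TTestIndPower

r = 0.21                              # assumed typical effect size in the field
d = 2 * r / math.sqrt(1 - r**2)       # convert correlation r to Cohen's d
power = TTestIndPower().power(effect_size=d, nobs1=52, alpha=0.05, ratio=1.0)
print(f"power at N = 104 (52 per group): {power:.2f}")   # roughly 0.5-0.6
```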

  12. The Effects of Test Length and Sample Size on Item Parameters in Item Response Theory

    Science.gov (United States)

    Sahin, Alper; Anil, Duygu

    2017-01-01

    This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprised of 50 items was administered to 6,288 students. Data from this test was used to obtain data sets of…

  13. Influence of pH, Temperature and Sample Size on Natural and Enforced Syneresis of Precipitated Silica

    Directory of Open Access Journals (Sweden)

    Sebastian Wilhelm

    2015-12-01

    Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Here, two open key aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30%, in comparison to measured values of 20% for natural syneresis.

  14. Nanoporous anodic aluminum oxide with a long-range order and tunable cell sizes by phosphoric acid anodization on pre-patterned substrates

    Science.gov (United States)

    Surawathanawises, Krissada; Cheng, Xuanhong

    2014-01-01

    Nanoporous anodic aluminum oxide (AAO) has been explored for various applications due to its regular cell arrangement and relatively easy fabrication processes. However, conventional two-step anodization based on self-organization only allows the fabrication of a few discrete cell sizes and formation of small domains of hexagonally packed pores. Recent efforts to pre-pattern aluminum followed with anodization significantly improve the regularity and available pore geometries in AAO, while systematic study of the anodization condition, especially the impact of acid composition on pore formation guided by nanoindentation is still lacking. In this work, we pre-patterned aluminium thin films using ordered monolayers of silica beads and formed porous AAO in a single-step anodization in phosphoric acid. Controllable cell sizes ranging from 280 nm to 760 nm were obtained, matching the diameters of the silica nanobead molds used. This range of cell size is significantly greater than what has been reported for AAO formed in phosphoric acid in the literature. In addition, the relationships between the acid concentration, cell size, pore size, anodization voltage and film growth rate were studied quantitatively. The results are consistent with the theory of oxide formation through an electrochemical reaction. Not only does this study provide useful operational conditions of nanoindentation induced anodization in phosphoric acid, it also generates significant information for fundamental understanding of AAO formation. PMID:24535886

  15. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and the spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative estimate of the length distribution and of the proportion of juveniles in the set. For that matter, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust size and juvenile estimates. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, whether for on-board or inland biometric measurements.
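
    One way to study such an optimum is to bootstrap subsamples of decreasing size from densely measured fishing sets and track how the precision of the juvenile proportion degrades. A toy sketch (the 12 cm juvenile cutoff and all other choices are assumptions for illustration, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(42)

def juvenile_proportion_se(lengths_cm, n_sub, n_boot=2000, juvenile_cm=12.0):
    """Bootstrap standard error of the juvenile proportion estimated from
    a subsample of n_sub fish drawn from one fishing set."""
    props = [np.mean(rng.choice(lengths_cm, n_sub, replace=True) < juvenile_cm)
             for _ in range(n_boot)]
    return float(np.std(props))

lengths = rng.normal(13.0, 2.0, 500)      # made-up length measurements, cm
for n in (10, 30, 90, 270):
    print(n, round(juvenile_proportion_se(lengths, n), 3))
```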

  16. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    NARCIS (Netherlands)

    van Rijnsoever, Frank J.

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in

  17. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

    International Nuclear Information System (INIS)

    Lisboa-Filho, P N; Deimling, C V; Ortiz, W A

    2010-01-01

    In this contribution superconducting specimens of YBa₂Cu₃O₇₋δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.

  18. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

    Energy Technology Data Exchange (ETDEWEB)

    Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)

    2010-01-15

    In this contribution superconducting specimens of YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.

  19. Sample-size resonance, ferromagnetic resonance and magneto-permittivity resonance in multiferroic nano-BiFeO{sub 3}/paraffin composites at room temperature

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lei; Li, Zhenyu; Jiang, Jia; An, Taiyu; Qin, Hongwei; Hu, Jifan, E-mail: hujf@sdu.edu.cn

    2017-01-01

    In the present work, we demonstrate that ferromagnetic resonance and magneto-permittivity resonance can be observed at appropriate microwave frequencies at room temperature for a multiferroic nano-BiFeO{sub 3}/paraffin composite sample with an appropriate sample thickness (such as 2 mm). Ferromagnetic resonance originates from the room-temperature weak ferromagnetism of nano-BiFeO{sub 3}. The observed magneto-permittivity resonance in multiferroic nano-BiFeO{sub 3} is connected with dynamic magnetoelectric coupling through the Dzyaloshinskii–Moriya (DM) magnetoelectric interaction or the combination of magnetostriction and piezoelectric effects. In addition, we experimentally observed the resonance of negative imaginary permeability for nano-BiFeO{sub 3}/paraffin toroidal samples with larger sample thicknesses D = 3.7 and 4.9 mm. Such resonance of negative imaginary permeability is a sample-size resonance. - Highlights: • Nano-BiFeO{sub 3}/paraffin composite shows a ferromagnetic resonance. • Nano-BiFeO{sub 3}/paraffin composite shows a magneto-permittivity resonance. • Resonance of negative imaginary permeability in BiFeO{sub 3} is a sample-size resonance. • Nano-BiFeO{sub 3}/paraffin composite with large thickness shows a sample-size resonance.

  20. Reducing sample size by combining superiority and non-inferiority for two primary endpoints in the Social Fitness study.

    Science.gov (United States)

    Donkers, Hanneke; Graff, Maud; Vernooij-Dassen, Myrra; Nijhuis-van der Sanden, Maria; Teerenstra, Steven

    2017-01-01

    In randomized controlled trials, two endpoints may be necessary to capture the multidimensional concept of the intervention and the objectives of the study adequately. We show how to calculate sample size when defining success of a trial by combinations of superiority and/or non-inferiority aims for the endpoints. The randomized controlled trial design of the Social Fitness study uses two primary endpoints, which can be combined into five different scenarios for defining success of the trial. We show how to calculate power and sample size for each scenario and compare these for different settings of power of each endpoint and correlation between them. Compared to a single primary endpoint, using two primary endpoints often gives more power when success is defined as: improvement in one of the two endpoints and no deterioration in the other. This also gives better power than when success is defined as: improvement in one prespecified endpoint and no deterioration in the remaining endpoint. When two primary endpoints are equally important, but a positive effect in both simultaneously is not per se required, the objective of having one superior and the other (at least) non-inferior could make sense and reduce sample size. Copyright © 2016 Elsevier Inc. All rights reserved.
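
    The power gain from such combined success criteria can be checked with a simple simulation over a bivariate normal model of the two estimated effects. A toy sketch for one scenario, superiority on endpoint 1 plus non-inferiority on endpoint 2 (all numbers are invented assumptions, not the paper's calculation):

```python
import numpy as np

rng = np.random.default_rng(1)

def power_sup_and_noninf(n, delta=(0.3, 0.3), rho=0.5, margin=0.2,
                         n_sims=20000):
    """Simulated power for a two-arm trial with n per arm: endpoint 1
    superior AND endpoint 2 non-inferior (margin in standardized units),
    each tested one-sided at alpha = 0.025."""
    se = np.sqrt(2.0 / n)                       # s.e. of each effect estimate
    cov = se**2 * np.array([[1.0, rho], [rho, 1.0]])
    est = rng.multivariate_normal(delta, cov, size=n_sims)
    z = 1.959964
    superior_1 = est[:, 0] / se > z
    noninferior_2 = (est[:, 1] + margin) / se > z
    return float(np.mean(superior_1 & noninferior_2))

print(power_sup_and_noninf(n=150))
```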

  1. Cosmology from angular size counts of extragalactic radio sources

    International Nuclear Information System (INIS)

    Kapahi, V.K.

    1975-01-01

    The cosmological implications of the observed angular sizes of extragalactic radio sources are investigated using (i) the log N-log θ relation, where N is the number of sources with an angular size greater than a value θ, for the complete sample of 3CR sources, and (ii) the θ_median versus flux density (S) relation derived from the 3CR, the All-sky, and the Ooty occultation surveys, spanning a flux density range of about 300:1. The method of estimating the expected N(θ) and θ_m(S) relations for a uniform distribution of sources in space is outlined. Since values of θ > ~100 arcsec in the 3C sample arise from sources of small z, the slope of the N(θ) relation in this range is practically independent of the world model and the distribution of source sizes, but depends strongly on the radio luminosity function (RLF). From the observed slope the RLF is derived, in the luminosity range of about 10^23 < P_178 < 10^26 W Hz^-1 sr^-1, to be of the form ρ(P)dP ∝ P^(-2.1)dP. It is shown that the angular size data provide independent evidence of evolution in source properties with epoch. It is difficult to explain the data with the simple steady-state theory even if identified QSOs are excluded from the source samples and a local deficiency of strong sources is postulated. The simplest evolutionary scheme that fits the data in the Einstein-de Sitter cosmology indicates that (a) the local RLF steepens considerably at high luminosities, (b) the comoving density of high-luminosity sources increases with z in a manner similar to that implied by the log N-log S data and by the V/V_m test for QSOs, and (c) the mean physical sizes of radio sources evolve with z approximately as (1+z)^-1. Similar evolutionary effects appear to be present for QSOs as well as radio galaxies. (author)

  2. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    Science.gov (United States)

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
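
    The combination of an SVD of the predictors with regularization can be sketched in a few lines: ridge-type shrinkage applied along singular directions stabilizes the fit when LiDAR predictors are highly correlated and field plots are few. This is an assumed, simplified form for illustration, not the paper's exact Bayesian model:

```python
import numpy as np

def svd_ridge_fit(X, y, lam=1.0):
    """Ridge regression computed through the SVD of the predictor matrix:
    each singular direction i is shrunk by s_i / (s_i**2 + lam), which
    damps the small, noise-dominated directions typical of collinear
    LiDAR metrics estimated from few field plots."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

# usage: beta = svd_ridge_fit(lidar_metrics, field_volume, lam=10.0)
# (lidar_metrics: n_plots x n_predictors array, field_volume: n_plots vector)
```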

  3. A Nanometer Aerosol Size Analyzer (nASA) for Rapid Measurement of High-concentration Size Distributions

    International Nuclear Information System (INIS)

    Han, H.-S.; Chen, D.-R.; Pui, David Y.H.; Anderson, Bruce E.

    2000-01-01

    We have developed a fast-response nanometer aerosol size analyzer (nASA) that is capable of scanning 30 size channels between 3 and 100 nm in a total time of 3 s. The analyzer includes a bipolar charger (Po-210), an extended-length nanometer differential mobility analyzer (Nano-DMA), and an electrometer (TSI 3068). This combination of components provides particle size spectra at a scan rate of 0.1 s per channel free of uncertainties caused by response-time-induced smearing. The nASA thus offers a fast response for aerosol size distribution measurements in high-concentration conditions and also eliminates the need for applying a de-smearing algorithm to resulting data. In addition, because of its thermodynamically stable means of particle detection, the nASA is useful for applications requiring measurements over a broad range of sample pressures and temperatures. Indeed, experimental transfer functions determined for the extended-length Nano-DMA using the tandem differential mobility analyzer (TDMA) technique indicate the nASA provides good size resolution at pressures as low as 200 Torr. Also, as was demonstrated in tests to characterize the soot emissions from the J85-GE engine of a T-38 aircraft, the broad dynamic concentration range of the nASA makes it particularly suitable for studies of combustion or particle formation processes. Further details of the nASA performance as well as results from calibrations, laboratory tests and field applications are presented below

  4. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    Science.gov (United States)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has been traditionally determined by the hydrometer or the sieve-pipette methods, both of them time consuming and requiring a relatively large soil sample. This might be a limitation in situations, such as the analysis of suspended sediment, when the sample is small. A possible alternative to these methods are optical techniques such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to the traditional methods is still limited, because of the difficulty in replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry, within ranges set between 0.04 and 2000 μm, measured with a Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2, in five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine different size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module full of running water and disaggregated by ultrasonication at energy level 4, with 80 ml of sodium hexametaphosphate solution, for 580 seconds. Two replicates of each sample were performed. Each measurement was made with a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating their own optical model, fitting the optical parameters that mainly depend on the color and the shape of the analyzed particles. As a

  5. Effects of Sample Size and Dimensionality on the Performance of Four Algorithms for Inference of Association Networks in Metabonomics

    NARCIS (Netherlands)

    Suarez Diez, M.; Saccenti, E.

    2015-01-01

    We investigated the effect of sample size and dimensionality on the performance of four algorithms (ARACNE, CLR, CORR, and PCLRC) when they are used for the inference of metabolite association networks. We report that as many as 100-400 samples may be necessary to obtain stable network estimations,

  6. Home range and travels

    Science.gov (United States)

    Stickel, L.F.; King, John A.

    1968-01-01

    The concept of home range was expressed by Seton (1909) in the term 'home region,' which Burt (1940, 1943) clarified with a definition of home range and exemplified in a definitive study of Peromyscus in the field. Burt pointed out the ever-changing characteristics of home-range area and the consequent absence of boundaries in the usual sense--a finding verified by investigators thereafter. In the studies summarized in this paper, sizes of home ranges of Peromyscus varied within two magnitudes, approximately from 0.1 acre to ten acres, in 34 studies conducted in a variety of habitats from the seaside dunes of Florida to the Alaskan forests. Variation in sizes of home ranges was correlated with both environmental and physiological factors; with habitat it was conspicuous, both in the same and different regions. Food supply also was related to size of home range, both seasonally and in relation to habitat. Home ranges generally were smallest in winter and largest in spring, at the onset of the breeding season. Activity and size also were affected by changes in weather. Activity was least when temperatures were low and nights were bright. Effects of rainfall were variable. Sizes varied according to sex and age; young mice remained in the parents' range until they approached maturity, when they began to travel more widely. Adult males commonly had larger home ranges than females, although there were a number of exceptions. An inverse relationship between population density and size of home range was shown in several studies and probably is the usual relationship. A basic need for activity and exploration also appeared to influence size of home range. Behavior within the home range was discussed in terms of travel patterns, travels in relation to home sites and refuges, territory, and stability of size of home range. Travels within the home range consisted of repeated use of well-worn trails to sites of food, shelter, and refuge, plus more random exploratory travels

  7. Free-ranging male koalas use size-related variation in formant frequencies to assess rival males.

    Directory of Open Access Journals (Sweden)

    Benjamin D Charlton

    Full Text Available Although the use of formant frequencies in nonhuman animal vocal communication systems has received considerable recent interest, only a few studies have examined the importance of these acoustic cues to body size during intra-sexual competition between males. Here we used playback experiments to present free-ranging male koalas with re-synthesised bellow vocalisations in which the formants were shifted to simulate either a large or a small adult male. We found that male looking responses did not differ according to the size variant condition played back. In contrast, male koalas produced longer bellows and spent more time bellowing when they were presented with playbacks simulating larger rivals. In addition, males were significantly slower to respond to this class of playback stimuli than they were to bellows simulating small males. Our results indicate that male koalas invest more effort into their vocal responses when they are presented with bellows that have lower formants indicative of larger rivals, but also show that males are slower to engage in vocal exchanges with larger males that represent more dangerous rivals. By demonstrating that male koalas use formants to assess rivals during the breeding season we have provided evidence that male-male competition constitutes an important selection pressure for broadcasting and attending to size-related formant information in this species. Further empirical studies should investigate the extent to which the use of formants during intra-sexual competition is widespread throughout mammals.

  8. Dental arch dimensions, form and tooth size ratio among a Saudi sample

    Directory of Open Access Journals (Sweden)

    Haidi Omar

    2018-01-01

    Full Text Available Objectives: To determine the dental arch dimensions and arch forms in a sample of Saudi orthodontic patients, to investigate the prevalence of Bolton anterior and overall tooth size discrepancies, and to compare the effect of gender on the measured parameters. Methods: This study is a biometric analysis of dental casts of 149 young adults recruited from different orthodontic centers in Jeddah, Saudi Arabia. The dental arch dimensions were measured. The measured parameters were arch length, arch width, Bolton’s ratio, and arch form. The data were analyzed using IBM SPSS software version 22.0 (IBM Corporation, New York, USA); this cross-sectional study was conducted between April 2015 and May 2016. Results: Dental arch measurements, including inter-canine and inter-molar distance, were found to be significantly greater in males than females (p < 0.05). The most prevalent dental arch forms were narrow tapered (50.3%) and narrow ovoid (34.2%), respectively. The prevalence of tooth size discrepancy in all cases was 43.6% for the anterior ratio and 24.8% for the overall ratio. The mean Bolton’s anterior ratio in all malocclusion classes was 79.81%, whereas the mean Bolton’s overall ratio was 92.21%. There was no significant difference between males and females regarding Bolton’s ratio. Conclusion: The most prevalent arch form was narrow tapered, followed by narrow ovoid. Males generally had larger dental arch measurements than females, and the prevalence of tooth size discrepancy was greater in Bolton’s anterior teeth ratio than in the overall ratio.

  9. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F,G,H,.... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges on [0,1] has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
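
    The interval AND/OR/NOT operators referred to are, in their simplest form, the Fréchet bounds applied endpoint-wise, valid with no assumption about dependence between the events. A minimal sketch of the two-event, best-possible bounds (the paper's full calculus layers confidence levels on top of this):

```python
def and_bounds(a, b):
    """Frechet bounds for P(A and B), given P(A) in [a0, a1] and
    P(B) in [b0, b1], with unknown dependence between A and B."""
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def or_bounds(a, b):
    """Frechet bounds for P(A or B) under unknown dependence."""
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

def not_bounds(a):
    return (1.0 - a[1], 1.0 - a[0])

print(and_bounds((0.3, 0.5), (0.6, 0.8)))   # (0.0, 0.5)
print(or_bounds((0.3, 0.5), (0.6, 0.8)))    # (0.6, 1.0)
```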

  10. A Long-Term Comparison of Yellowstone Cutthroat Trout Abundance and Size Structure in Their Historical Range in Idaho.

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Kevin A.; Schill, Daniel J.; Elle, F. Steven

    2002-05-23

    We compared estimates of population abundance and size structure for Yellowstone cutthroat trout Oncorhynchus clarki bouvieri obtained by electrofishing 77 stream segments across southeastern Idaho in the 1980s and again in 1999-2000 to test whether populations of Yellowstone cutthroat trout had changed. Sites sampled in the 1980s were relocated in 1999-2000 by using maps and photographs or by finding original site-boundary stakes, so that the same reach of stream was sampled during both periods. Abundance of Yellowstone cutthroat trout longer than 10 cm did not change, averaging 41 fish/100 m of stream during both the 1980s and 1999-2000. The proportion of the total catch of trout composed of Yellowstone cutthroat trout also did not change, averaging 82% in the 1980s and 78% in 1999-2000. At the 48 sites where size structure could be estimated for both periods, the proportion of Yellowstone cutthroat trout that were 10-20 cm long declined slightly (74% versus 66%), but the change was due entirely to the shift in size structure at the Teton River sites. The number of sites that contained rainbow trout O. mykiss or cutthroat trout × rainbow trout hybrids rose from 23 to 37, but the average proportion of the catch composed of rainbow trout and hybrids did not increase (7% in both the 1980s and 1999-2000). Although the distribution and abundance of Yellowstone cutthroat trout have been substantially reduced in Idaho over the last century, our results indicate that Yellowstone cutthroat trout abundance and size structure in Idaho have remained relatively stable at a large number of locations for the last 10-20 years. The expanding distribution of rainbow trout and hybrids in portions of the upper Snake River basin, however, calls for additional monitoring and active management actions.

  11. Automatic particle-size analysis of HTGR nuclear fuel microspheres

    International Nuclear Information System (INIS)

    Mack, J.E.

    1977-01-01

    An automatic particle-size analyzer (PSA) has been developed at ORNL for measuring and counting samples of nuclear fuel microspheres in the diameter range of 300 to 1000 μm at rates in excess of 2000 particles per minute, requiring no sample preparation. A light blockage technique is used in conjunction with a particle singularizer. Each particle in the sample is sized, and the information is accumulated by a multi-channel pulse height analyzer. The data are then transferred automatically to a computer for calculation of mean diameter, standard deviation, kurtosis, and skewness of the distribution. Entering the sample weight and pre-coating data permits calculation of particle density and the mean coating thickness and density. Following this nondestructive analysis, the sample is collected and returned to the process line or used for further analysis. The device has potential as an on-line quality control device in processes dealing with spherical or near-spherical particles where rapid analysis is required for process control

  12. Water quality monitoring: A comparative case study of municipal and Curtin Sarawak's lake samples

    Science.gov (United States)

    Anand Kumar, A.; Jaison, J.; Prabakaran, K.; Nagarajan, R.; Chan, Y. S.

    2016-03-01

    In this study, the particle size distribution and zeta potential of the suspended particles in municipal water and in the surface water of Curtin Sarawak's lake were compared, and the samples were analysed using the dynamic light scattering method. A high concentration of suspended particles affects water quality and suppresses aquatic photosynthetic systems. A new approach was carried out in the current work to determine the particle size distribution and zeta potential of the suspended particles present in the water samples. The results for the lake samples showed that the particle size ranges from 180 nm to 1345 nm and the zeta potential values range from -8.58 mV to -26.1 mV. Higher zeta potential values were observed in the surface water samples of Curtin Sarawak's lake than in the municipal water. The zeta potential values indicate that the suspended particles are stable and that the chance of agglomeration is lower in the lake water samples. Moreover, the effects of physico-chemical parameters on the zeta potential of the water samples are also discussed.

  13. The effects of parameter estimation on minimizing the in-control average sample size for the double sampling X bar chart

    Directory of Open Access Journals (Sweden)

    Michael B.C. Khoo

    2013-11-01

    Full Text Available The double sampling (DS) X bar chart, one of the most widely-used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.
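
    The MRL-versus-ARL distinction is easy to see by simulating run lengths, whose distribution is strongly right-skewed for control charts. A toy sketch using a basic Shewhart X bar chart as a simplified stand-in for the DS chart:

```python
import numpy as np

rng = np.random.default_rng(7)

def run_length(n=5, L=3.0, shift=0.0):
    """One simulated run length of a Shewhart X-bar chart with subgroup
    size n and L-sigma limits; `shift` is the process mean shift."""
    t = 0
    while True:
        t += 1
        xbar = rng.normal(shift, 1.0 / np.sqrt(n))
        if abs(xbar) > L / np.sqrt(n):
            return t

rl = np.array([run_length() for _ in range(5000)])
print("ARL:", rl.mean(), "MRL:", np.median(rl))   # mean well above median
```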

  14. How do low dispersal species establish large range sizes? The case of the water beetle Graphoderus bilineatus

    DEFF Research Database (Denmark)

    Iversen, Lars Lønsmann; Rannap, Riinu; Thomsen, Philip Francis

    2013-01-01

    important than species phylogeny or local spatial attributes. In this study we used the water beetle Graphoderus bilineatus, a philopatric species of conservation concern in Europe, as a model to explain large range size and to support effective conservation measures for such species that also have limited...... systems and wetlands, which used to be highly connected throughout the central plains of Europe. Our data suggest that a broad habitat niche can prevent landscape elements from becoming barriers for species like G. bilineatus. Therefore, we question the usefulness of site protection as conservation...... measures for G. bilineatus and similar philopatric species. Instead, conservation actions should be focused at the landscape level to ensure the long-term viability of such species across their range....

  15. Scaling range sizes to threats for robust predictions of risks to biodiversity.

    Science.gov (United States)

    Keith, David A; Akçakaya, H Resit; Murray, Nicholas J

    2018-04-01

    Assessments of risk to biodiversity often rely on spatial distributions of species and ecosystems. Range-size metrics used extensively in these assessments, such as area of occupancy (AOO), are sensitive to measurement scale, prompting proposals to measure them at finer scales or at different scales based on the shape of the distribution or ecological characteristics of the biota. Despite its dominant role in red-list assessments for decades, appropriate spatial scales of AOO for predicting risks of species' extinction or ecosystem collapse remain untested and contentious. There are no quantitative evaluations of the scale-sensitivity of AOO as a predictor of risks, the relationship between optimal AOO scale and threat scale, or the effect of grid uncertainty. We used stochastic simulation models to explore risks to ecosystems and species with clustered, dispersed, and linear distribution patterns subject to regimes of threat events with different frequency and spatial extent. Area of occupancy was an accurate predictor of risk (0.81<|r|<0.98) and performed optimally when measured with grid cells 0.1-1.0 times the largest plausible area threatened by an event. Contrary to previous assertions, estimates of AOO at these relatively coarse scales were better predictors of risk than finer-scale estimates of AOO (e.g., when measurement cells are <1% of the area of the largest threat). The optimal scale depended on the spatial scales of threats more than the shape or size of biotic distributions. Although we found appreciable potential for grid-measurement errors, current IUCN guidelines for estimating AOO neutralize geometric uncertainty and incorporate effective scaling procedures for assessing risks posed by landscape-scale threats to species and ecosystems. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
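
    The scale-sensitivity of AOO is easy to reproduce: count occupied grid cells at several cell sizes over the same occurrence points. A minimal sketch, with synthetic clustered occurrences rather than the paper's simulated threat regimes:

        import numpy as np

        def aoo(points, cell_size):
            # AOO = number of occupied grid cells times the cell area
            cells = np.floor(np.asarray(points) / cell_size).astype(int)
            occupied = {tuple(c) for c in cells}
            return len(occupied) * cell_size ** 2

        rng = np.random.default_rng(0)
        pts = rng.normal(0.0, 5.0, size=(200, 2))   # a clustered distribution
        for cs in (0.5, 2.0, 10.0):                 # finer to coarser grids
            print(cs, aoo(pts, cs))                 # measured AOO grows with cell size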

  16. Robust weak measurements on finite samples

    International Nuclear Information System (INIS)

    Tollaksen, Jeff

    2007-01-01

    A new weak measurement procedure is introduced for finite samples which yields accurate weak values that are outside the range of eigenvalues and which do not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing sample size. This procedure can also extend the strength of the coupling between the system and measuring device to a new regime

  17. Characterization and source estimation of size-segregated aerosols during 2008-2012 in an urban environment in Beijing

    International Nuclear Information System (INIS)

    Yu, Lingda; Wang, Guangfu; Zhang, Renjiang

    2013-01-01

    Full text: During 2008-2012, size-segregated aerosol samples were collected using an eight-stage cascade impactor at the Beijing Normal University (BNU) site, China. These samples were analyzed using particle induced X-ray emission (PIXE) analysis for concentrations of 21 elements: Mg, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Ba and Pb. The size-resolved data sets were then analyzed using the positive matrix factorization (PMF) technique in order to identify possible sources and estimate their contribution to particulate matter mass. Nine sources were resolved in eight size ranges (0.25-16 μm): secondary sulphur, motor vehicles, coal combustion, oil combustion, road dust, biomass burning, soil dust, diesel vehicles and metal processing. PMF analysis of size-resolved source contributions showed that natural sources, represented by soil dust and road dust, contributed about 57% of the predicted primary particulate matter (PM) mass in the coarse size range (>2 μm). On the other hand, anthropogenic sources such as secondary sulphur, coal and oil combustion, biomass burning and motor vehicles contributed about 73% in the fine size range (<2 μm). The diesel vehicle and secondary sulphur sources contributed the most in the ultra-fine size range (<0.25 μm) and were responsible for about 52% of the primary PM mass. (author)

  18. Characterization and source estimation of size-segregated aerosols during 2008-2012 in an urban environment in Beijing

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Lingda [Key Laboratory of Beam Technology and Materials Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing (China); Wang, Guangfu, E-mail: guangfuw@bnu.edu.cn [Beijing Radiation Center, Beijing (China); Zhang, Renjiang [Key Laboratory of Regional Climate-Environment Research for Temperate East Asia (RCE-TEA), Institute of Atmospheric Physics, Chinese Academy of Science, Beijing (China)]

    2013-07-01

    Full text: During 2008-2012, size-segregated aerosol samples were collected using an eight-stage cascade impactor at the Beijing Normal University (BNU) site, China. These samples were analyzed using particle induced X-ray emission (PIXE) analysis for concentrations of 21 elements: Mg, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Ba and Pb. The size-resolved data sets were then analyzed using the positive matrix factorization (PMF) technique in order to identify possible sources and estimate their contribution to particulate matter mass. Nine sources were resolved in eight size ranges (0.25-16 μm): secondary sulphur, motor vehicles, coal combustion, oil combustion, road dust, biomass burning, soil dust, diesel vehicles and metal processing. PMF analysis of size-resolved source contributions showed that natural sources, represented by soil dust and road dust, contributed about 57% of the predicted primary particulate matter (PM) mass in the coarse size range (>2 μm). On the other hand, anthropogenic sources such as secondary sulphur, coal and oil combustion, biomass burning and motor vehicles contributed about 73% in the fine size range (<2 μm). The diesel vehicle and secondary sulphur sources contributed the most in the ultra-fine size range (<0.25 μm) and were responsible for about 52% of the primary PM mass. (author)
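
    PMF factorizes the (samples x elements) concentration matrix into non-negative source contributions and source profiles. The sketch below uses scikit-learn's generic non-negative matrix factorization as a simplified stand-in for PMF (true PMF additionally weights each entry by its measurement uncertainty); all data here are synthetic.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        true_profiles = rng.random((3, 21))            # 3 sources, 21 elements
        contributions = rng.random((40, 3))            # 40 size-segregated samples
        X = contributions @ true_profiles + 0.01 * rng.random((40, 21))

        model = NMF(n_components=3, init="nndsvda", max_iter=2000)
        G = model.fit_transform(X)   # source contributions per sample
        F = model.components_        # source profiles (element signatures)
        # X is approximated by G @ F; the share of source k in a sample's PM
        # mass follows from the rows of G.
        print(G.shape, F.shape)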

  19. A quantitative study of particle size effects in the magnetorelaxometry of magnetic nanoparticles using atomic magnetometry

    Energy Technology Data Exchange (ETDEWEB)

    Dolgovskiy, V. [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Lebedev, V., E-mail: victor.lebedev@unifr.ch [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Colombo, S.; Weis, A. [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Michen, B.; Ackermann-Hirschi, L. [Adolphe Merkle Institute, University of Fribourg, CH-1700 Fribourg (Switzerland); Petri-Fink, A. [Adolphe Merkle Institute, University of Fribourg, CH-1700 Fribourg (Switzerland); Chemistry Department, University of Fribourg, CH-1700 Fribourg (Switzerland)

    2015-04-01

    The discrimination of immobilised superparamagnetic iron oxide nanoparticles (SPIONs) against SPIONs in fluid environments via their magnetic relaxation behaviour is a powerful tool for bio-medical imaging. Here we demonstrate that a gradiometer of laser-pumped atomic magnetometers can be used to record accurate time series of the relaxing magnetic field produced by pre-polarised SPIONs. We have investigated dry in vitro maghemite nanoparticle samples with different size distributions (average radii ranging from 14 to 21 nm) and analysed their relaxation using the Néel–Brown formalism. Fitting our model function to the magnetorelaxation (MRX) data allows us to extract the anisotropy constant K and the saturation magnetisation M_S of each sample. While the latter was found not to depend on the particle size, we observe that K is inversely proportional to the (time- and size-) averaged volume of the magnetised particle fraction. We have identified the range of SPION sizes that are best suited for MRX detection considering our specific experimental conditions and sample preparation technique. - Highlights: • We studied magnetorelaxation of magnetic nanoparticles using atomic magnetometers. • We show that atomic magnetometers yield high precision MRX data. • The observed magnetorelaxation is well described by the moment superposition model. • Model fits allow extraction of nanoparticle material parameters of six samples. • All samples exhibit an unexpected size-dependent anisotropy constant.
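
    The steep size dependence that the Néel–Brown picture predicts can be illustrated with the Néel relaxation time tau = tau0 * exp(K V / (kB T)). The parameter values below (anisotropy constant and attempt time) are illustrative assumptions, not the fitted values from this study.

        import numpy as np

        KB = 1.380649e-23  # Boltzmann constant, J/K

        def neel_relaxation_time(K, r_nm, T=300.0, tau0=1e-9):
            # K: anisotropy constant (J/m^3); r_nm: particle radius (nm)
            V = 4.0 / 3.0 * np.pi * (r_nm * 1e-9) ** 3
            return tau0 * np.exp(K * V / (KB * T))

        for r in (10, 14, 21):  # radii spanning the measured range, in nm
            print(f"r = {r} nm: tau = {neel_relaxation_time(5e3, r):.3g} s")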

  20. 'Mitominis': multiplex PCR analysis of reduced size amplicons for compound sequence analysis of the entire mtDNA control region in highly degraded samples.

    Science.gov (United States)

    Eichmann, Cordula; Parson, Walther

    2008-09-01

    The traditional protocol for forensic mitochondrial DNA (mtDNA) analyses involves the amplification and sequencing of the two hypervariable segments HVS-I and HVS-II of the mtDNA control region. The primers usually span fragment sizes of 300-400 bp for each region, which may result in weak or failed amplification in highly degraded samples. Here we introduce an improved and more stable approach using shortened amplicons in the fragment range between 144 and 237 bp. Ten such amplicons were required to produce overlapping fragments that cover the entire human mtDNA control region. These were co-amplified in two multiplex polymerase chain reactions and sequenced with the individual amplification primers. The primers were carefully selected to minimize binding on homoplasic and haplogroup-specific sites that would otherwise result in loss of amplification due to mis-priming. The multiplexes have successfully been applied to ancient and forensic samples, such as bones and teeth, that showed a high degree of degradation.
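
    The design requirement here is that the ten mini-amplicons tile the control region without gaps. A simple interval-sweep check captures the idea; the amplicon coordinates below are hypothetical, not the published primer positions.

        def covers_region(amplicons, region):
            # amplicons: list of (start, end) positions; region: (start, end)
            covered_to = region[0]
            for start, end in sorted(amplicons):
                if start > covered_to:
                    return False        # a gap before this amplicon
                covered_to = max(covered_to, end)
            return covered_to >= region[1]

        # Hypothetical overlapping layout over a ~1.1 kb control-region window
        amps = [(0, 200), (150, 380), (340, 560), (520, 740), (700, 910), (880, 1100)]
        print(covers_region(amps, (0, 1100)))  # True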

  1. Dependence of fracture mechanical and fluid flow properties on fracture roughness and sample size

    International Nuclear Information System (INIS)

    Tsang, Y.W.; Witherspoon, P.A.

    1983-01-01

    A parameter study has been carried out to investigate the interdependence of mechanical and fluid flow properties of fractures with fracture roughness and sample size. A rough fracture can be defined mathematically in terms of its aperture density distribution. Correlations were found between the shapes of the aperture density distribution function and specific features of the stress-strain behavior and fluid flow characteristics. Well-matched fractures had peaked aperture distributions that resulted in very nonlinear stress-strain behavior. With an increasing degree of mismatch between the top and bottom of a fracture, the aperture density distribution broadened and the nonlinearity of the stress-strain behavior became less accentuated. The different aperture density distributions also gave rise to qualitatively different fluid flow behavior. Findings from this investigation make it possible to estimate the stress-strain and fluid flow behavior when the roughness characteristics of the fracture are known and, conversely, to estimate the fracture roughness from an examination of the hydraulic and mechanical data. Results from this study showed that both the mechanical and hydraulic properties of the fracture are controlled by the large-scale roughness of the joint surface. This suggests that when the stress-flow behavior of a fracture is being investigated, the size of the rock sample should be larger than the typical wavelength of the roughness undulations

  2. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    Science.gov (United States)

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
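
    The data-generating model of such a simulation study can be sketched compactly: a two-segment (broken-stick) mean trajectory, a latent change/no-change class, observation noise and visit-level drop-out. All parameter values below are illustrative assumptions, not those of the paper.

        import numpy as np

        def broken_stick(t, t_change, slope1, slope2, intercept):
            # two joined linear segments with a change-point at t_change
            return intercept + slope1 * t + slope2 * np.maximum(t - t_change, 0.0)

        rng = np.random.default_rng(42)
        t = np.tile(np.arange(10.0), (500, 1))                # 500 individuals, 10 visits
        change_class = rng.random(500) < 0.5                  # unlabelled sub-groups
        y = np.where(change_class[:, None],
                     broken_stick(t, 6.0, -0.2, -1.5, 28.0),  # accelerated decline
                     28.0 - 0.2 * t)                          # stable trajectory
        y += rng.normal(0.0, 1.0, y.shape)                    # observation error
        observed = rng.random(y.shape) < np.clip(1.0 - 0.05 * t, 0.0, 1.0)
        print(y[observed].size, "observations retained of", y.size)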

  3. Crystallite size variation of TiO2 samples as a function of heat treatment time; Variacao do tamanho de cristalito de amostras de TiO2 em funcao do tempo de tratamento termico

    Energy Technology Data Exchange (ETDEWEB)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A., E-mail: amandagmgalante@gmail.com [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil). Departamento de Fisica e Quimica; Spada, E.R. [Universidade de Sao Paulo (USP), Ilha Solteira, SP (Brazil). Instituto de Fisica

    2016-07-01

    Titanium dioxide (TiO2) is an oxide semiconductor that may be found in mixed phase or in the distinct phases brookite, anatase and rutile. In this work, the influence of the residence time at a given temperature on the physical properties of TiO2 powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C with a heating ramp of 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
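
    Crystallite sizes from X-ray diffraction are typically obtained from peak broadening via the Scherrer equation D = K·λ / (β·cos θ). A minimal sketch, assuming Cu Kα radiation and the anatase (101) reflection; the FWHM values are made up to show the trend (narrower peaks imply larger crystallites, consistent with longer residence times):

        import numpy as np

        def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
            beta = np.deg2rad(fwhm_deg)            # peak FWHM in radians
            theta = np.deg2rad(two_theta_deg / 2)
            return K * wavelength_nm / (beta * np.cos(theta))

        for fwhm in (0.60, 0.35, 0.20):  # degrees 2-theta
            print(fwhm, round(scherrer_size_nm(fwhm, 25.3), 1), "nm")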

  4. The influence of landscape characteristics and home-range size on the quantification of landscape-genetics relationships

    Science.gov (United States)

    Tabitha A. Graves; Tzeidle N. Wasserman; Milton Cezar Ribeiro; Erin L. Landguth; Stephen F. Spear; Niko Balkenhol; Colleen B. Higgins; Marie-Josee Fortin; Samuel A. Cushman; Lisette P. Waits

    2012-01-01

    A common approach used to estimate landscape resistance involves comparing correlations of ecological and genetic distances calculated among individuals of a species. However, the location of sampled individuals may contain some degree of spatial uncertainty due to the natural variation of animals moving through their home range or measurement error in plant or animal...

  5. Pollination and reproduction of an invasive plant inside and outside its ancestral range

    Science.gov (United States)

    Petanidou, Theodora; Price, Mary V.; Bronstein, Judith L.; Kantsa, Aphrodite; Tscheulin, Thomas; Kariyat, Rupesh; Krigas, Nikos; Mescher, Mark C.; De Moraes, Consuelo M.; Waser, Nickolas M.

    2018-05-01

    Comparing traits of invasive species within and beyond their ancestral range may improve our understanding of processes that promote aggressive spread. Solanum elaeagnifolium (silverleaf nightshade) is a noxious weed in its ancestral range in North America and is invasive on other continents. We compared investment in flowers and ovules, pollination success, and fruit and seed set in populations from Arizona, USA ("AZ") and Greece ("GR"). In both countries, the populations we sampled varied in size and types of present-day disturbance. Stature of plants increased with population size in AZ samples whereas GR plants were uniformly tall. Taller plants produced more flowers, and GR plants produced more flowers for a given stature and allocated more ovules per flower. Similar functional groups of native bees pollinated in AZ and GR populations, but visits to flowers decreased with population size and we observed no visits in the largest GR populations. As a result, plants in large GR populations were pollen-limited, and estimates of fecundity were lower on average in GR populations despite the larger allocation to flowers and ovules. These differences between plants in our AZ and GR populations suggest promising directions for further study. It would be useful to sample S. elaeagnifolium in Mediterranean climates within the ancestral range (e.g., in California, USA), to study asexual spread via rhizomes, and to use common gardens and genetic studies to explore the basis of variation in allocation patterns and of relationships between visitation and fruit set.

  6. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    International Nuclear Information System (INIS)

    Reiser, I; Lu, Z

    2014-01-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N = 20, 40, 60, 80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For N = 100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC, at N = 20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task
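
    For 2AFC data, the proportion of correct responses of a template-matching observer can be computed as the fraction of image pairs in which the signal-present score exceeds the signal-absent score. A toy sketch with synthetic white-noise images and a made-up signal, not the Mayo CT data:

        import numpy as np

        def template_scores(images, template):
            # TM observer: score = inner product of each image with the template
            return images.reshape(len(images), -1) @ template.ravel()

        def pc_2afc(scores_present, scores_absent):
            sp, sa = scores_present[:, None], scores_absent[None, :]
            return (sp > sa).mean() + 0.5 * (sp == sa).mean()

        rng = np.random.default_rng(0)
        signal = np.zeros((8, 8)); signal[3:5, 3:5] = 0.8   # low-contrast signal
        absent = rng.normal(0.0, 1.0, (100, 8, 8))
        present = rng.normal(0.0, 1.0, (100, 8, 8)) + signal
        pc = pc_2afc(template_scores(present, signal), template_scores(absent, signal))
        print(f"PC = {pc:.3f}")
        # Re-estimating PC on random subsets (N = 20, 40, 60, 80 pairs) exposes
        # the sample-size bias and variability studied above.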

  7. (I Can’t Get No) Saturation: A Simulation and Guidelines for Minimum Sample Sizes in Qualitative Research

    NARCIS (Netherlands)

    van Rijnsoever, F.J.

    2015-01-01

    This paper explores the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the
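
    The saturation logic can be illustrated as a coupon-collector simulation: sample information sources until every code in the population has appeared at least once. This is a deliberately crude sketch of the idea (one code per draw, made-up code frequencies), not the paper's simulation design.

        import numpy as np

        def draws_to_saturation(code_weights, reps=2000, seed=0):
            rng = np.random.default_rng(seed)
            p = np.asarray(code_weights, float)
            p /= p.sum()
            needed = []
            for _ in range(reps):
                seen, n = set(), 0
                while len(seen) < len(p):
                    seen.add(int(rng.choice(len(p), p=p)))
                    n += 1
                needed.append(n)
            return np.mean(needed), np.percentile(needed, 95)

        # 20 codes, some much rarer than others: saturation arrives late
        mean_n, q95 = draws_to_saturation(np.linspace(1, 10, 20))
        print(f"mean = {mean_n:.0f} draws, 95th percentile = {q95:.0f}")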

  8. Size Effect Studies on Tensile Tests for Hot Stamping Steel

    Science.gov (United States)

    Chen, Xiaodu; Li, Yuanyuan; Han, Xianhong; Zhang, Junbo

    2018-02-01

    Tensile tests have been widely used to determine basic mechanical properties of materials. However, the properties measured may be related to geometrical factors of the tested samples especially for high-strength steels; this makes the properties' definitions and comparisons difficult. In this study, a series of tensile tests of ultra-high-strength hot-stamped steel were performed; the geometric shapes and sizes as well as the cutting direction were modified. The results demonstrate that the hot-stamped parts were isotropic and the cutting direction had no effect; the measured strengths were practically unrelated to the specimen geometries, including both size and shape. The elongations were slightly related to sample sizes within the studied range but highly depended on the sample shape, represented by the coefficient K. Such phenomena were analyzed and discussed based on microstructural observations and fracture morphologies. Moreover, two widely used elongation conversion equations, the Oliver formula and Barba's law, were introduced to verify their applicability, and a new interpolating function was developed and compared.
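
    For reference, elongation conversion in the spirit of the Oliver formula rescales total elongation by the slimness ratio k = sqrt(S0)/L0 raised to a material exponent. The sketch below assumes the exponent a = 0.4 commonly quoted for steels; both the exponent and the example numbers are assumptions, not values from this paper.

        def convert_elongation_oliver(e1, k1, k2, a=0.4):
            # e1: elongation measured on a specimen with slimness ratio k1
            # returns the elongation expected for slimness ratio k2
            return e1 * (k2 / k1) ** a

        # Hypothetical example: re-express elongation from a short, wide coupon
        # (k1) for a longer proportional gauge (k2 < k1)
        print(convert_elongation_oliver(e1=0.12, k1=0.25, k2=0.14))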

  9. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Science.gov (United States)

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.

  10. Sample Size of One: Operational Qualitative Analysis in the Classroom

    Directory of Open Access Journals (Sweden)

    John Hoven

    2015-10-01

    Full Text Available Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one. These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.

  11. Sizing for ethnicity in multi-cultural societies: development of size ...

    African Journals Online (AJOL)

    ... years, and fell in the size 6/10 to size 14/38 size range. The findings of the study suggest that young South African women of African descent with a triangular body shape may experience loose fit in the upper body of garments sized according to the size specifications currently used in the South African apparel industry.

  12. Carbon nanotube scaffolds with controlled porosity as electromagnetic absorbing materials in the gigahertz range

    Science.gov (United States)

    González, M.; Crespo, M.; Baselga, J.; Pozuelo, J.

    2016-05-01

    Control of the microscopic structure of CNT nanocomposites allows modulation of the electromagnetic shielding in the gigahertz range. The porosity of CNT scaffolds has been controlled by two freezing protocols and a subsequent lyophilization step: fast freezing in liquid nitrogen and slow freezing at -20 °C. Mercury porosimetry shows that slowly frozen specimens present a more open pore size (100-150 μm) with a narrow distribution whereas specimens frozen rapidly show a smaller pore size and a heterogeneous distribution. 3D-scaffolds containing 3, 4, 6 and 7% CNT were infiltrated with epoxy and specimens with 2, 5 and 8 mm thicknesses were characterized in the GHz range. Samples with the highest pore size and porosity presented the lowest reflected power (about 30%) and the highest absorbed power (about 70%), which allows considering them as electromagnetic radiation absorbing materials.
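
    The reflected/absorbed power split reported above follows from a standard two-port power balance: with scattering parameters S11 and S21, the reflected fraction is |S11|^2, the transmitted fraction |S21|^2, and absorption takes the remainder. The S-parameter values below are illustrative, chosen to land near the reported ~30% reflection and ~70% absorption.

        import numpy as np

        def power_balance(s11, s21):
            R = np.abs(s11) ** 2          # reflected fraction
            T = np.abs(s21) ** 2          # transmitted fraction
            A = 1.0 - R - T               # absorbed fraction
            return R, T, A

        R, T, A = power_balance(s11=0.55 + 0.0j, s21=0.05 + 0.0j)
        print(f"reflected {R:.0%}, transmitted {T:.0%}, absorbed {A:.0%}")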

  13. Crystallite size effects in stacking faulted nickel hydroxide and its electrochemical behaviour

    International Nuclear Information System (INIS)

    Ramesh, T.N.

    2009-01-01

    β-Nickel hydroxide comprises a long-range periodic arrangement of atoms with a stacking sequence of AC AC AC, having an ideal composition Ni(OH)2. Variation in the preparative conditions can lead to changes in the stacking sequence (AC AC BA CB AC AC or AC AC AB AC AC). This type of variation in stacking sequence can result in the formation of stacking faults in nickel hydroxide. The stability of the stacking faults depends on the free energy content of the sample. Stacking faults in nickel hydroxide are essential for better electrochemical activity. There are also reports correlating particle size with better electrochemical activity. Here we present the effect of crystallite size in stacking-faulted nickel hydroxide samples. Stacking-faulted nickel hydroxide with small crystallite size exchanges 0.8 e/Ni, while samples with larger crystallite size exchange 0.4 e/Ni. Hence, the right combination of crystallite size and stacking fault content has to be controlled for good electrochemical activity of nickel hydroxide

  14. Self-navigation of a scanning tunneling microscope tip toward a micron-sized graphene sample.

    Science.gov (United States)

    Li, Guohong; Luican, Adina; Andrei, Eva Y

    2011-07-01

    We demonstrate a simple capacitance-based method to quickly and efficiently locate micron-sized conductive samples, such as graphene flakes, on insulating substrates in a scanning tunneling microscope (STM). By using edge recognition, the method is designed to locate and to identify small features when the STM tip is far above the surface, allowing for crash-free search and navigation. The method can be implemented in any STM environment, even at low temperatures and in strong magnetic field, with minimal or no hardware modifications.

  15. Size-selective separation of submicron particles in suspensions with ultrasonic atomization.

    Science.gov (United States)

    Nii, Susumu; Oka, Naoyoshi

    2014-11-01

    Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized to separate particles of a specific size. With the help of a fog of fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from the suspensions. Performance of the separation was characterized by analyzing the size and the concentration of the collected particles with a high-resolution method. Irradiation of the sample suspensions with 2.4 MHz ultrasound allowed the separation of particles of specific sizes from 90 to 320 nm, regardless of the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to SiO2 suspensions enhanced the collection of finer particles and achieved a remarkable increase in the number of collected particles. Degassing the sample suspension eliminated the separation performance; dissolved air in the suspension plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. A Systematic Review of Surgical Randomized Controlled Trials: Part 2. Funding Source, Conflict of Interest, and Sample Size in Plastic Surgery.

    Science.gov (United States)

    Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit

    2016-02-01

    The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and that randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.

  17. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Full Text Available Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  18. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    Science.gov (United States)

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  19. Broad-Range Bacterial Capture from Fluid-Samples: Implications for Amplification-Free Contamination Detection

    Directory of Open Access Journals (Sweden)

    Monika WEBER

    2016-08-01

    Full Text Available Fluid-Screen, Inc. presents a bacterial concentration and filtration method based on dielectrophoresis and alternating current kinetics. Dielectrophoresis has been previously shown to induce particle motion; however, bacterial capture efficiency and reproducibility have consistently been low, reducing its potential for practical applications. In this study, we introduce a novel, patent-pending electrode system optimized to simultaneously capture a wide range of bacterial species from a variety of aqueous solutions. Specifically, we show that the method of dielectrophoresis used induces responses in both characteristic Gram-negative Escherichia coli and Gram-positive Enterococcus faecalis bacteria, as well as in Bacillus subtilis and Aestuariimicrobium kwangyangense. We have adapted the electrode design to create a bacterial sample preparation unit, termed the sample sorter, that is able to capture multiple bacterial species and release them simultaneously for bacterial concentration and exchange from complex matrices to defined buffer media. This technology can be used on its own or in conjunction with standard bacterial detection methods such as mass spectrometry. The Fluid-Screen product will dramatically improve testing and identification of bacterial contaminants in various industrial settings by eliminating the need for amplification of samples and by reducing the time to identification.

  20. Sample size effect on the determination of the irreversibility line of high-Tc superconductors

    International Nuclear Information System (INIS)

    Li, Q.; Suenaga, M.; Li, Q.; Freltoft, T.

    1994-01-01

    The irreversibility lines of a high-Jc superconducting Bi2Sr2Ca2Cu3Ox/Ag tape were systematically measured upon a sequence of subdivisions of the sample. The irreversibility field Hr(T) (parallel to the c axis) was found to change approximately as L^0.13, where L is the effective dimension of the superconducting tape. Furthermore, it was found that the irreversibility line for a grain-aligned Bi2Sr2Ca2Cu3Ox specimen can be approximately reproduced by the extrapolation of this relation down to a grain size of a few tens of micrometers. The observed size effect could significantly obscure the real physical meaning of the irreversibility lines. In addition, this finding surprisingly indicated that the Bi2Sr2Ca2Cu3Ox/Ag tape and grain-aligned specimen may have similar flux line pinning strength
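
    The quoted scaling Hr ~ L^0.13 is the slope of a log-log fit across the subdivision sequence. A minimal sketch with synthetic data (the field values are made up; only the recovered exponent matters):

        import numpy as np

        L = np.array([10.0, 5.0, 2.5, 1.25, 0.625])   # effective dimension, mm
        noise = 1.0 + 0.02 * np.random.default_rng(1).normal(size=L.size)
        Hr = 2.0 * L ** 0.13 * noise                  # synthetic Hr(L)
        slope, _ = np.polyfit(np.log(L), np.log(Hr), 1)
        print(f"fitted exponent = {slope:.3f}")       # close to 0.13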

  1. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
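
    In the stratified-sampling case, "optimal allocation" follows the familiar Neyman rule, n_h proportional to N_h * S_h; the paper derives the IST-specific version, but the classical form conveys the idea. The stratum sizes and standard deviations below are illustrative.

        import numpy as np

        def neyman_allocation(N_h, S_h, n):
            # allocate total sample size n across strata proportionally to N_h * S_h
            N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
            w = N_h * S_h
            return np.round(n * w / w.sum()).astype(int)

        print(neyman_allocation(N_h=[5000, 3000, 2000], S_h=[4.0, 2.5, 1.0], n=400))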

  2. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    Science.gov (United States)

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
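
    A rough intuition for why the clustered (experimental) arm needs more subjects is the usual design-effect inflation DE = 1 + (m - 1) * ICC applied to an individually randomized sample size; the paper's formulae for partially nested designs are more elaborate, so this sketch is only a first approximation with made-up numbers.

        import math

        def inflate_for_clustering(n_individual, group_size, icc):
            design_effect = 1.0 + (group_size - 1) * icc
            return math.ceil(n_individual * design_effect)

        # 128 subjects/arm from a standard two-sample calculation, delivered in
        # groups of 8 with ICC = 0.05 -> inflated requirement for that arm
        print(inflate_for_clustering(128, group_size=8, icc=0.05))  # 173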

  3. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    OpenAIRE

    Heckmann, Mark; Burk, Lukas

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...

  4. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    DEFF Research Database (Denmark)

    Haugbøl, Steven; Pinborg, Lars H; Arfan, Haroon M

    2006-01-01

    PURPOSE: To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects...... reproducibility and the required sample size. METHODS: For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects....... Regions of interest were automatically delineated on co-registered MR and PET images. RESULTS: In cortical brain regions with a high density of 5-HT2A receptors, the outcome parameter (binding potential, BP1) showed high reproducibility, with a median difference between the two group measurements of 6...

  5. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of the small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real-life examples.
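
    Two of the simpler ingredients the paper compares can be sketched in a few lines: the nonparametric (Mann-Whitney) AUC estimate and a percentile-bootstrap CI around it. The binormal test data here are synthetic, and this is only one approach from the family of 29 discussed.

        import numpy as np

        def auc_mann_whitney(x_pos, x_neg):
            xp, xn = np.asarray(x_pos)[:, None], np.asarray(x_neg)[None, :]
            return (xp > xn).mean() + 0.5 * (xp == xn).mean()

        def auc_bootstrap_ci(x_pos, x_neg, B=2000, alpha=0.05, seed=0):
            rng = np.random.default_rng(seed)
            stats = [auc_mann_whitney(rng.choice(x_pos, len(x_pos)),
                                      rng.choice(x_neg, len(x_neg)))
                     for _ in range(B)]
            return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

        rng = np.random.default_rng(1)
        diseased, healthy = rng.normal(1.2, 1, 15), rng.normal(0, 1, 15)
        print(auc_mann_whitney(diseased, healthy), auc_bootstrap_ci(diseased, healthy))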

  6. Thermal conductivity of nanocrystalline silicon: importance of grain size and frequency-dependent mean free paths.

    Science.gov (United States)

    Wang, Zhaojie; Alaniz, Joseph E; Jang, Wanyoung; Garay, Javier E; Dames, Chris

    2011-06-08

    The thermal conductivity reduction due to grain boundary scattering is widely interpreted using a scattering length assumed equal to the grain size and independent of the phonon frequency (gray). To assess these assumptions and decouple the contributions of porosity and grain size, five samples of undoped nanocrystalline silicon have been measured with average grain sizes ranging from 550 to 64 nm and porosities from 17% to less than 1%, at temperatures from 310 to 16 K. The samples were prepared using current activated, pressure assisted densification (CAPAD). At low temperature the thermal conductivities of all samples show a T² dependence which cannot be explained by any traditional gray model. The measurements are explained over the entire temperature range by a new frequency-dependent model in which the mean free path for grain boundary scattering is inversely proportional to the phonon frequency, which is shown to be consistent with asymptotic analysis of atomistic simulations from the literature. In all cases the recommended boundary scattering length is smaller than the average grain size. These results should prove useful for the integration of nanocrystalline materials in devices such as advanced thermoelectrics.

  7. Seasonal and particle size-dependent variations in gas/particle partitioning of PCDD/Fs

    International Nuclear Information System (INIS)

    Lee, Se-Jin; Ale, Debaki; Chang, Yoon-Seok; Oh, Jeong-Eun; Shin, Sun Kyoung

    2008-01-01

    This study monitored particle size-dependent variations in atmospheric polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs). Two gas/particle partitioning models, the subcooled liquid vapor pressure (P_L^0) model and the octanol-air partition coefficient (K_OA) model, were applied to each particle size. The regression coefficients of each fraction against the gas/particle partition coefficient (K_P) were similar for separated particles within the same sample set but differed for particles collected during different periods. Gas/particle partitioning calculated from the integral of the fractions was similar to that of the size-segregated particles and to previously measured bulk values. Despite the different behaviors and production mechanisms of atmospheric particles of different sizes, PCDD/F partitioning in each size range was controlled by meteorological conditions such as atmospheric temperature, O3 and UV, which reflects no source related to a certain particle size range but rather mixed urban sources within this city. Our observations emphasize that when assessing environmental and health effects, the movement of PCDD/Fs in air should be considered in conjunction with particle size in addition to the bulk aerosol. - Gas/particle partitioning of atmospheric PCDD/Fs for different particle sizes reflects the impacts of emitters of different size ranges
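
    The P_L^0 model referenced above is usually applied as the regression log10(K_P) = m * log10(P_L^0) + b, whose slope and intercept characterize the partitioning for each size fraction. A minimal sketch with synthetic placeholder data (a slope near -1 is conventionally read as equilibrium partitioning):

        import numpy as np

        rng = np.random.default_rng(0)
        log_pL0 = np.linspace(-6.0, -2.0, 17)      # subcooled liquid vapor pressures
        log_kp = -0.95 * log_pL0 - 5.2 + rng.normal(0.0, 0.15, 17)
        m, b = np.polyfit(log_pL0, log_kp, 1)
        print(f"slope m = {m:.2f}, intercept b = {b:.2f}")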

  8. Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples

    Directory of Open Access Journals (Sweden)

    Inés Lozano-Ramos

    2015-05-01

    Full Text Available Renal biopsy is the gold-standard procedure to diagnose most renal pathologies. However, this invasive method is of limited repeatability and often describes an irreversible renal damage. Urine is an easily accessible fluid, and urinary extracellular vesicles (EVs) may be ideal to describe new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them yield a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC) as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed for protein content and by flow cytometry to determine the presence of the tetraspanin markers CD63 and CD9. The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM) and nanoparticle tracking analysis, revealing the presence of EVs. When analysed by sodium dodecyl sulphate-polyacrylamide gel electrophoresis, tetraspanin-peak fractions from concentrated urine samples contained multiple bands, but the main urine proteins (such as Tamm-Horsfall protein) were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in the concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species. To summarize, our results demonstrate that concentration of urine followed by SEC is a suitable option to isolate EVs with a low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches.

  9. Matching Ge detector element geometry to sample size and shape: One does not fit all!

    International Nuclear Information System (INIS)

    Keyser, R.M.; Twomey, T.R.; Sangsingkeow, P.

    1998-01-01

    For 25 yr, coaxial germanium detector performance has been specified using the methods and values given in Ref. 1. These specifications are the full-width at half-maximum (FWHM), FW.1M, FW.02M, peak-to-Compton ratio, and relative efficiency. All of these measurements are made with a 60Co source 25 cm from the cryostat endcap and centered on the axis of the detector. These measurements are easy to reproduce, both because they are simple to set up and because they use a common source. These standard tests have been useful in guiding the user to an appropriate detector choice for the intended measurement. Most users of germanium gamma-ray detectors do not make measurements in this simple geometry. Germanium detector manufacturers have worked over the years to make detectors with better resolution, better peak-to-Compton ratios, and higher efficiency--but all based on measurements using the IEEE standard. Advances in germanium crystal growth techniques have made it relatively easy to provide detector elements of different shapes and sizes. Many of these different shapes and sizes can give better results for a specific application than other shapes and sizes. But the detector specifications must be changed to correspond to the actual application. Both the expected values and the actual parameters to be specified should be changed. In many cases, detection efficiency, peak shape, and minimum detectable limit for a particular detector/sample combination are valuable specifications of detector performance. For other situations, other parameters are important, such as peak shape as a function of count rate. In this work, different sample geometries were considered. The results show the variation in efficiency with energy for all of these sample and detector geometries. The point-source measurement at 25 cm from the endcap allows the results to be compared with the currently given IEEE criteria. The best sample/detector configuration for a specific measurement requires more and

  10. Determination of concentration levels of arsenic, gold and antimony in particle-size fractions of gold ore using Neutron Activation Analysis

    International Nuclear Information System (INIS)

    Nyarku, M.

    2009-02-01

    Instrumental Neutron Activation Analysis (INAA) has been used to quantify the concentrations of arsenic, gold and antimony in particle-size fractions of a gold ore. The ore, which was taken from the Ahafo project site of Newmont Ghana Gold Ltd, was first fractionated into fourteen (14) particle-size fractions using a state-of-the-art analytical sieve machine. The minimum sieve mesh size used was 36 microns, and grains >2000 microns were not considered for analysis. Results of the sieving were analysed with the easysieve software. The <36 micron sub-fraction was found to be the optimum, hosting the bulk of all three elements. Arsenic was found to be highly concentrated in the <36 to +100 micron size fractions and erratically distributed in the +150 micron fraction and above. For gold, with the exception of the <36 micron sub-fraction, which had an exceptionally high concentration, the element is distributed across all the size fractions but slightly 'plays out' in the +150 to +400 micron fractions. Antimony occurrence in the sample was relatively high in the <36 micron size fraction, followed by the 600-800, 800-1000, 400-600 and 36-40 micron size fractions, in that order. The gold content in the sample was far higher than that of arsenic and antimony. Gold concentration in the composite sample was in the range 564-8420 ppm. Arsenic levels were higher compared with antimony: the range of arsenic concentration in the composite sample was 14.33-186.92 ppm, while antimony concentration was in the range 1.09-9.48 ppm. (au)

  11. Synthesis and mechanical properties of silicon-doped TiAl-alloys with grain sizes in the submicron range; Herstellung und mechanische Eigenschaften silizidhaltiger TiAl-Werkstoffe mit Korngroessen im Submikronbereich

    Energy Technology Data Exchange (ETDEWEB)

    Bohn, R. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Werkstofforschung

    1999-07-01

    The objective of this study is to provide a comprehensive insight into the mechanical properties of nano- and submicron-grained intermetallics containing ceramic particles as a second phase. The investigations focus on γ-TiAl-based alloys with a fine dispersion of titanium silicides. The samples are prepared by high energy milling and subsequent hot isostatic pressing. The mechanical properties are dominated mainly by the grain size as the most important structural feature. At room temperature, the grain size dependence of hardness and yield strength can be described by the well-known Hall-Petch relationship. Contrary to the behavior of conventional alloys, the ductility of submicron-grained alloys drops if the grain size is further reduced. This may be attributed to the insignificance of diffusional creep at room temperature and to the difficulties arising for dislocation-based deformation mechanisms. In the high temperature range, the flow stress is strongly reduced. Superplastic deformation becomes feasible already at 800 °C. The silicide particles impede grain growth, but they also promote cavitation during tensile straining. The mechanisms of deformation are similar to those established for coarse-grained materials at higher temperatures (≥1000 °C). (orig.)

  12. Water quality monitoring: A comparative case study of municipal and Curtin Sarawak's lake samples

    International Nuclear Information System (INIS)

    Kumar, A Anand; Prabakaran, K; Nagarajan, R; Jaison, J; Chan, Y S

    2016-01-01

    In this study, the particle size distribution and zeta potential of the suspended particles in municipal water and in surface water of Curtin Sarawak's lake were compared; the samples were analysed using the dynamic light scattering method. A high concentration of suspended particles affects the water quality as well as suppresses aquatic photosynthetic systems. A new approach has been carried out in the current work to determine the particle size distribution and zeta potential of the suspended particles present in the water samples. The results for the lake samples showed that the particle size ranges from 180 nm to 1345 nm and the zeta potential values range from -8.58 mV to -26.1 mV. Higher zeta potential values were observed in the surface water samples of Curtin Sarawak's lake compared to the municipal water. The zeta potential values indicate that the suspended particles are stable and that the chance of agglomeration is lower in the lake water samples. Moreover, the effects of physico-chemical parameters on the zeta potential of the water samples are also discussed. (paper)
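
    Dynamic light scattering infers particle size from the measured translational diffusion coefficient via the Stokes-Einstein relation. The expression below is the standard textbook form, included only to make the measurement principle explicit.

```latex
% Stokes-Einstein: hydrodynamic diameter d_H from the diffusion coefficient D.
\[
  d_H \;=\; \frac{k_B T}{3\pi\,\eta\,D}
\]
% k_B: Boltzmann constant, T: absolute temperature, eta: solvent viscosity.
```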

  13. Particle size distribution of dust collected from Alcator C-MOD

    International Nuclear Information System (INIS)

    Gorman, S.V.; Carmack, W.J.; Hembree, P.B.

    1998-01-01

    There are important safety issues associated with tokamak dust, accumulated primarily from sputtering and disruptions. The dust may contain tritium and may be activated, chemically toxic, and chemically reactive. The purpose of this paper is to present results from analyses of particulate collected from the Alcator C-MOD tokamak located at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. The sample obtained from C-MOD was not originally intended for examination outside of MIT; it was collected with the intent of performing only a composition analysis. However, MIT provided the INEEL with this sample for particle analysis. The sample was collected by vacuuming a section of the machine (covering approximately 1/3 of the machine surface) with a coarse fiber filter as the collection surface. The sample was then analyzed using an optical microscope, a scanning electron microscope (SEM), and a Microtrac FRA particle size analyzer. The data fit a log-normal distribution. The count median diameter (CMD) of the samples ranged from 0.3 microm to 1.1 microm, with geometric standard deviations (GSD) ranging from 2.8 to 5.2 and mass median diameters (MMD) ranging from 7.22 to 176 microm.
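
    For a log-normal distribution, the count median diameter (CMD) and geometric standard deviation fix the mass median diameter (MMD) through the Hatch-Choate conversion below. As a rough consistency check, CMD = 0.3 microm with GSD = 2.8 gives MMD ≈ 7.2 microm, matching the lower end of the reported MMD range; the per-sample CMD/GSD pairings are not stated above, so this pairing is only assumed.

```latex
% Hatch-Choate conversion for a log-normal size distribution:
\[
  \mathrm{MMD} \;=\; \mathrm{CMD}\,\exp\!\bigl(3\,\ln^{2}\sigma_g\bigr)
\]
% Example: CMD = 0.3 um, sigma_g = 2.8  =>  0.3 * exp(3 * 1.03^2) ~ 7.2 um.
```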

  14. Weighted piecewise LDA for solving the small sample size problem in face verification.

    Science.gov (United States)

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes is used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated through a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
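
    The SSS problem arises because the within-class scatter matrix is singular whenever the number of training samples is smaller than the feature dimension. The sketch below shows one common generic workaround, shrinkage regularization of the scatter matrix in a two-class Fisher discriminant; it illustrates the underlying problem only and is not the weighted piecewise LDA algorithm of this paper.

```python
# Sketch: two-class Fisher LDA with shrinkage, so the within-class scatter
# matrix stays invertible in the small-sample-size (n < d) regime.
import numpy as np

def fisher_direction(X1, X2, shrink=1e-2):
    """Discriminant direction maximizing Fisher's criterion."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    Xc = np.vstack([X1 - mu1, X2 - mu2])
    Sw = Xc.T @ Xc                                   # within-class scatter
    d = Sw.shape[0]
    Sw_reg = (1 - shrink) * Sw + shrink * (np.trace(Sw) / d) * np.eye(d)
    w = np.linalg.solve(Sw_reg, mu1 - mu2)           # w ~ Sw^-1 (mu1 - mu2)
    return w / np.linalg.norm(w)

# 10 samples per class in 50 dimensions: Sw alone would be singular.
rng = np.random.default_rng(1)
X1 = rng.normal(0.0, 1.0, (10, 50))
X2 = rng.normal(0.5, 1.0, (10, 50))
w = fisher_direction(X1, X2)
print("class means along w:", (X1 @ w).mean(), (X2 @ w).mean())
```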

  15. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Science.gov (United States)

    Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing

    2017-01-01

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.05). For Imperata cylindrica and Chenopodium album, the average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  16. Size matters at deep-sea hydrothermal vents: different diversity and habitat fidelity patterns of meio- and macrofauna

    NARCIS (Netherlands)

    Gollner, S.; Govenar, B.; Fisher, C.R.; Bright, M.

    2015-01-01

    Species with markedly different sizes interact when sharing the same habitat. Unravelling mechanisms that control diversity thus requires consideration of a range of size classes. We compared patterns of diversity and community structure for meio- and macrofaunal communities sampled along a gradient

  17. Decision-making and sampling size effect

    OpenAIRE

    Ismariah Ahmad; Rohana Abd Rahman; Roda Jean-Marc; Lim Hin Fui; Mohd Parid Mamat

    2010-01-01

    Sound decision-making requires quality information; poor information does not help in decision-making. Among the sources of low-quality information, an important cause is inadequate or inappropriate sampling. In this paper we illustrate the case of information collected on timber prices.

  18. Extraordinary tunable dynamic range of electrochemical aptasensor for accurate detection of ochratoxin A in food samples

    Directory of Open Access Journals (Sweden)

    Lin Cheng

    2017-06-01

    We report the design of a sensitive electrochemical aptasensor for detection of ochratoxin A (OTA) with an extraordinarily tunable dynamic sensing range. The electrochemical aptasensor is constructed based on the target-induced aptamer-folding detection mechanism: recognition between OTA and its aptamers results in a conformational change of the aptamer probe and thus in signal changes for measurement. The dynamic sensing range of the electrochemical aptasensor is successfully tuned by the introduction of free assistant aptamer probes into the sensing system. Our electrochemical aptasensor shows an extraordinary dynamic sensing range spanning 11 orders of magnitude of OTA concentration, from 10^-8 to 10^2 ng/g. Of great significance, the signal response over the entire OTA concentration range is on the same current scale, demonstrating that our sensing protocol could be applied for accurate detection of OTA over a broad range without any complicated signal amplification treatment. Finally, OTA-spiked red wine and maize samples in different dynamic sensing ranges were determined with the electrochemical aptasensor under optimized sensing conditions. This strategy for tuning the dynamic sensing range may offer a promising platform for electrochemical aptasensor optimization in practical applications.
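
    The range-tuning mechanism, free "assistant" aptamer competing with the surface probes for OTA, can be rationalized with a simple equilibrium-binding model: the free aptamer sequesters target at low concentrations and thereby stretches the dose-response curve toward higher concentrations. The sketch below implements that generic competition model; the dissociation constant and concentrations are arbitrary illustrative numbers, not parameters from this work.

```python
# Sketch: a free competing aptamer stretches a sensor's dynamic range.
# Simple 1:1 equilibrium binding; all constants are illustrative.
import numpy as np
from scipy.optimize import brentq

KD = 1.0  # dissociation constant of the aptamer-target complex (a.u.)

def free_target(t_total, competitor):
    """Free target concentration when a competing aptamer is present."""
    f = lambda t: t + competitor * t / (KD + t) - t_total
    return brentq(f, 0.0, t_total + 1e-12)

def sensor_signal(t_total, competitor=0.0):
    """Fraction of surface probes bound, taken as the measured signal."""
    t = free_target(t_total, competitor)
    return t / (KD + t)

targets = np.logspace(-3, 4, 8)              # total OTA concentration (a.u.)
for c in (0.0, 10.0, 100.0):                 # amount of assistant aptamer
    resp = [sensor_signal(t, c) for t in targets]
    print(f"competitor = {c:5.1f}:", " ".join(f"{r:.3f}" for r in resp))
```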

  19. A contemporary decennial global sample of changing agricultural field sizes

    Science.gov (United States)

    White, E.; Roy, D. P.

    2011-12-01

    In the last several hundred years agriculture has caused significant human-induced Land Cover Land Use Change (LCLUC), with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation of buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected, guided by a global map of agricultural yield and a literature review, to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries, and the temporal changes in field size were quantified and their causes discussed.

  20. The effects of preparation, shipment and ageing on the Pu elemental assay results of milligram-sized samples

    International Nuclear Information System (INIS)

    Berger, J.; Doubek, N.; Jammet, G.; Aigner, H.; Bagliano, G.; Donohue, D.; Kuhn, E.

    1994-02-01

    Specialized procedures have been implemented for the sampling of Pu-containing materials such as Pu nitrate, oxide or mixed oxide in States which have not yet approved type B(U) shipment containers for the air-shipment of gram-sized quantities of Pu. In such cases, it is necessary to prepare samples for shipment which contain only milligram quantities of Pu, dried from solution in penicillin vials. Potential problems due to flaking-off during shipment could affect the recovery of Pu at the analytical laboratory. Therefore, a series of tests was performed with synthetic Pu nitrate and mixed U/Pu nitrate samples to test the effectiveness of the evaporation and recovery procedures. Results of these tests, as well as experience with actual inspection samples, are presented, showing conclusively that the existing procedures are satisfactory. (author). 11 refs, 6 figs, 8 tabs

  1. Analytical solutions to sampling effects in drop size distribution measurements during stationary rainfall: Estimation of bulk rainfall variables

    NARCIS (Netherlands)

    Uijlenhoet, R.; Porrà, J.M.; Sempere Torres, D.; Creutin, J.D.

    2006-01-01

    A stochastic model of the microstructure of rainfall is used to derive explicit expressions for the magnitude of the sampling fluctuations in rainfall properties estimated from raindrop size measurements in stationary rainfall. The model is a marked point process, in which the points represent the

  2. Investigating effects of sample pretreatment on protein stability using size-exclusion chromatography and high-resolution continuum source atomic absorption spectrometry.

    Science.gov (United States)

    Rakow, Tobias; El Deeb, Sami; Hahne, Thomas; El-Hady, Deia Abd; AlBishri, Hassan M; Wätzig, Hermann

    2014-09-01

    In this study, size-exclusion chromatography and high-resolution continuum source atomic absorption spectrometry methods have been developed and evaluated to test the stability of proteins during sample pretreatment. This especially includes different storage conditions but also adsorption before or even during the chromatographic process. For the development of the size-exclusion method, a Biosep S3000 5 μm column was used to investigate a series of representative model proteins, namely bovine serum albumin, ovalbumin, monoclonal immunoglobulin G antibody, and myoglobin. Ambient-temperature storage was found to be harmful to all model proteins, whereas short-term storage of up to 14 days could be done in an ordinary refrigerator. Freezing the protein solutions was always complicated and had to be evaluated for each protein in the corresponding solvent. To keep the proteins in their native state, a gentle freezing temperature should be chosen; hence, liquid nitrogen should be avoided. Furthermore, a high-resolution continuum source atomic absorption spectrometry method was developed to observe the adsorption of proteins on container material and chromatographic columns. Adsorption to any container led to sample loss and lowered the recovery rates. During the pretreatment and high-performance size-exclusion chromatography, adsorption caused sample losses of up to 33%.

  3. Real-time high dynamic range laser scanning microscopy

    Science.gov (United States)

    Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.

    2016-04-01

    In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
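
    The core idea of extending dynamic range, combining acquisitions so that each pixel is taken from an exposure where it is neither clipped nor noise-dominated, can be shown with a generic radiance-map merge. The sketch below is that textbook merge, not the authors' real-time microscopy pipeline; the saturation threshold and toy data are assumptions.

```python
# Sketch: merge differently exposed images into one high-dynamic-range
# estimate by exposure-normalized, saturation-masked averaging.
import numpy as np

def merge_hdr(images, exposure_times, saturation=0.98):
    """Weighted average of exposure-normalized images; clipped pixels
    get zero weight so they never contaminate the estimate."""
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        img = img.astype(float)
        w = np.where(img < saturation, 1.0, 0.0)   # mask clipped pixels
        acc += w * img / t                         # normalize by exposure
        wsum += w
    return acc / np.maximum(wsum, 1e-12)

# Toy scene: the brightest pixel clips in the long exposure only
scene = np.array([[0.001, 0.05, 0.9]])
short = np.clip(scene * 1.0, 0.0, 0.98)
long_ = np.clip(scene * 10.0, 0.0, 0.98)
print(merge_hdr([short, long_], [1.0, 10.0]))      # ~ the true radiances
```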

  4. Overview of the Mars Sample Return Earth Entry Vehicle

    Science.gov (United States)

    Dillman, Robert; Corliss, James

    2008-01-01

    NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.

  5. Sample Size for Measuring Grammaticality in Preschool Children from Picture-Elicited Language Samples

    Science.gov (United States)

    Eisenberg, Sarita L.; Guo, Ling-Yu

    2015-01-01

    Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…

  6. The influence of sampling unit size and spatial arrangement patterns on neighborhood-based spatial structure analyses of forest stands

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.; Zhang, G.; Hui, G.; Li, Y.; Hu, Y.; Zhao, Z.

    2016-07-01

    Aim of study: Neighborhood-based stand spatial structure parameters can quantify and characterize forest spatial structure effectively. How these neighborhood-based structure parameters are influenced by the selection of different numbers of nearest-neighbor trees is unclear, and there is some disagreement in the literature regarding the appropriate number of nearest-neighbor trees to sample around reference trees. Understanding how to efficiently characterize forest structure is critical for forest management. Area of study: Multi-species uneven-aged forests of Northern China. Material and methods: We simulated stands with different spatial structural characteristics and systematically compared their structure parameters when two to eight neighboring trees were selected. Main results: Values of the uniform angle index calculated in the same stand differed with different sizes of the structure unit. When tree species and sizes were completely randomly interspersed, different numbers of neighbors had little influence on the mingling and dominance indices. Changes in the mingling or dominance indices caused by different numbers of neighbors occurred when the tree species or size classes were not randomly interspersed, and their changing characteristics can be detected from the spatial arrangement patterns of tree species and sizes. Research highlights: The number of neighboring trees selected for analyzing stand spatial structure parameters should be fixed. We propose that the four-tree structure unit is the best compromise between sampling accuracy and cost for practical forest management. (Author)
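
    As a concrete example of a neighborhood-based structure parameter, the sketch below computes the species mingling index of each reference tree from its n nearest neighbors; the four-neighbor default mirrors the structure unit recommended above, while the coordinates and species labels are made-up test data.

```python
# Sketch: mingling index M_i = fraction of the n nearest neighbors of
# reference tree i that belong to a different species.
import numpy as np

def mingling(coords, species, n_neighbors=4):
    coords = np.asarray(coords, dtype=float)
    species = np.asarray(species)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)          # a tree is not its own neighbor
    m = np.empty(len(coords))
    for i, row in enumerate(dists):
        nn = np.argsort(row)[:n_neighbors]   # indices of nearest neighbors
        m[i] = np.mean(species[nn] != species[i])
    return m

# Made-up stand: six trees of two species on a small grid
coords = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (2, 1)]
species = ["A", "B", "A", "B", "A", "A"]
print(mingling(coords, species))             # one value per reference tree
```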

  7. Prediction of size and position of fracture relevant defects of samples fatigued in the VHCF area on the basis of metallographic examinations; Vorhersage der Groesse und der Lage bruchrelevanter Defekte von im VHCF-Bereich ermuedeter Proben auf Basis metallographischer Untersuchungen

    Energy Technology Data Exchange (ETDEWEB)

    Christ, Hans-Juergen; Grigorescu, Andrei; Kolyshkin, Anton [Siegen Univ. (Germany). Inst. fuer Werkstofftechnik; Kaufmann, Edgar [Siegen Univ. (Germany). Dept. Mathematik; Zimmermann, Martina [TU Dresden (Germany). Inst. fuer Werkstoffwissenschaft

    2016-07-15

    This paper examines the connection between material quality, in terms of the size and spatial distribution of defects, and the size and position of the fracture-initiating defects that determine the durability of components in the Very High Cycle Fatigue (VHCF) range. For this purpose, the quality of the metastable austenitic steel 1.4301 was characterized via metallographic examinations. Longitudinal and cross sections were taken from a sheet steel, after which the size and position of all defects were measured. The metallographic information acquired was used to create a statistical defect distribution model. On the basis of this model and the stress distribution in the most highly stressed area of the fatigue samples used, the distribution of the size and position of the inclusions relevant for fatigue failure could be predicted. The results of the modelling are in good agreement with the experimental observations regarding the positions of crack initiation on samples failing under VHCF conditions.

  8. Particle size of sediments collected from the bed of the Amazon River and its tributaries in June and July 1976

    Science.gov (United States)

    Nordin, Carl F.; Meade, R.H.; Mahoney, H.A.; Delany, B.M.

    1977-01-01

    Sixty-five samples of bed material were collected from the Amazon River and its major tributaries between Belem, Brazil, and Iquitos, Peru. Samples were taken with a standard BM-54 sampler, a pipe dredge, or a Helley-Smith bedload sampler. Most of the samples have median diameters in the size range of fine to medium sand and contain small percentages of fine gravel. Complete size distributions are tabulated.

  9. Vibro-spring particle size distribution analyser

    International Nuclear Information System (INIS)

    Patel, Ketan Shantilal

    2002-01-01

    This thesis describes the design and development of an automated pre-production particle size distribution analyser for particles in the 20 - 2000 μm size range. This work is a follow-up to the vibro-spring particle sizer reported by Shaeri. In its most basic form, the instrument comprises a horizontally held closed-coil helical spring that is partly filled with the test powder and sinusoidally vibrated in the transverse direction. Particle size distribution data are obtained by stretching the spring to known lengths and measuring the mass of the powder discharged from the spring's coils; the size of the particles, on the other hand, is determined from the spring's 'intercoil' distance. The instrument developed by Shaeri had limited use due to its inability to measure sample mass directly. For the device reported here, modifications are made to the original configuration to establish means of direct sample mass measurement. The feasibility of techniques for measuring the mass of powder retained within the spring is investigated in detail. Initially, the measurement of mass is executed in situ from the vibration characteristics, based on the spring's first harmonic resonant frequency. This method is often erratic and unreliable due to particle-particle-spring wall interactions and spring bending. A much more successful alternative is found in a more complicated arrangement in which the spring forms part of a stiff cantilever system pivoted along its main axis. Here, the sample mass is determined in the 'static mode' by monitoring the cantilever beam's deflection following the termination of vibration. The system performance has been optimised through variations of the mechanical design of the key components and of the operating procedure, as well as by taking into account the effect of changes in the ambient temperature on the system's response. The thesis also describes the design and development of the ancillary mechanisms. These include the pneumatic

  10. Sampling and characterisation of groundwater colloids in ONKALO at Olkiluoto, Finland in 2007

    International Nuclear Information System (INIS)

    Takala, M.; Manninen, P.

    2008-08-01

    Colloid samples were collected from ONKALO groundwater station ONK-PVA1 in October 2007, and an additional sample was taken from groundwater station ONK-PVA3 in November 2007. The colloids were collected by filtering the groundwater on site with an Anopore 0.02 μm aluminium oxide filter. In the October sampling, water samples were also collected to analyse the differences in the water chemistry before and after filtration. The water samples were freeze-dried so that the elements would be concentrated. The colloid concentrations were determined by counting the particles in the SEM micrographs and calculating the concentration from the micrograph area, the filter area and the filtered volume. The colloid concentration in ONK-PVA1 was very low: the particle concentration within the size range from 0.1 μm to 1 μm was 1.6 x 10^4 particles/L and the mass concentration within the same size range 0.001 μg/L. Owing to the very low concentration, an additional colloid sample was taken from ONK-PVA3. The colloid concentration in ONK-PVA3 within the size range from 0.1 μm to 1 μm was 8.2 x 10^7 particles/L and the mass concentration 0.013 mg/L. When studying the ONKALO groundwater monitoring data, it was noticed that samples with elevated colloid concentrations probably also had elevated sodium fluorescein concentrations, indicating that process water (e.g. drilling water) was present in the water samples. The ONK-PVA1 water probably also contained process water during the colloid sampling performed in 2006. The composition of the colloid phase could not be determined by analysing the differences between the filtered and unfiltered water, owing to the low colloid concentration. Furthermore, the aluminium oxide filter caused aluminium contamination. (orig.)
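
    The count-to-concentration conversion described above is straightforward bookkeeping: scale the particles counted in the imaged area up to the full filter area, divide by the filtered volume and, for a mass estimate, assume spherical particles of a given density. The sketch below reproduces that arithmetic; every numerical input is an illustrative placeholder, not a value from the ONKALO campaign.

```python
# Sketch: colloid number and mass concentrations from SEM micrograph counts.
# All numerical inputs are illustrative placeholders.
import math

n_counted = 120            # particles counted in the micrographs
a_micrograph = 1.0e-8      # total imaged area, m^2
a_filter = 9.6e-4          # effective filter area, m^2
v_filtered = 2.0           # filtered volume, L
density = 2600.0           # assumed particle density, kg/m^3
d_mean = 0.3e-6            # assumed mean particle diameter, m

# Number concentration: scale counts to the full filter, divide by volume
n_conc = n_counted * (a_filter / a_micrograph) / v_filtered   # particles/L
# Mass concentration, assuming uniform spheres
mass_per_particle = density * (math.pi / 6.0) * d_mean ** 3   # kg
m_conc = n_conc * mass_per_particle * 1e9                     # ug/L
print(f"{n_conc:.2e} particles/L, {m_conc:.4f} ug/L")
```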

  11. Sample size and number of outcome measures of veterinary randomised controlled trials of pharmaceutical interventions funded by different sources, a cross-sectional study.

    Science.gov (United States)

    Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S

    2017-10-04

    Randomised controlled trials (RCTs) are a key component of the veterinary evidence base, and sample sizes and defined outcome measures are crucial components of RCTs. The aim was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial and the groups compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0, with no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features in all of the clinical trials examined in this study. It is possible that some of these factors are affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base, and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.

  12. Evaluation of the size segregation of elemental carbon (EC) emission in Europe: Influence on the simulation of EC long-range transportation

    NARCIS (Netherlands)

    Chen, Y.; Cheng, Y.F.; Nordmann, S.; Birmili, W.; Denier Van Der Gon, H.A.C.; Ma, N.; Wolke, R.; Wehner, B.; Sun, J.; Spindler, G.; Mu, Q.; Pöschl, U.; Su, H.; Wiedensohler, A.

    2016-01-01

    Elemental Carbon (EC) has a significant impact on human health and climate change. In order to evaluate the size segregation of EC emission in the EUCAARI inventory and investigate its influence on the simulation of EC long-range transportation in Europe, we used the fully coupled online Weather

  13. Size dependence of the optical spectrum in nanocrystalline silver

    International Nuclear Information System (INIS)

    Taneja, Praveen; Ayyub, Pushan; Chandra, Ramesh

    2002-01-01

    We report a detailed study of the optical reflectance in sputter-deposited, nanocrystalline silver thin films in order to understand the marked changes in color that occur with decreasing particle size. In particular, samples with an average particle size in the 20 to 35 nm range are golden yellow, while those with a size smaller than 15 nm are black. We simulate the size dependence of the observed reflection spectra by incorporating Mie's theory of scattering and absorption of light in small particles into the bulk dielectric constant formalism given by Ehrenreich and Philipp [Phys. Rev. 128, 1622 (1962)]. This provides a general method for understanding the reflected color of a dense collection of nanoparticles, such as in a nanocrystalline thin film. A deviation from Mie's theory is observed due to strong interparticle interactions.
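
    For particles much smaller than the wavelength, Mie's theory reduces to the quasi-static dipole limit, which already captures the plasmon resonance responsible for such strong color changes. The expressions below are this standard limit, given only as orientation; they are not the full Mie simulation used by the authors.

```latex
% Quasi-static (dipole) limit of Mie theory: sphere of radius R with
% dielectric function eps(omega), embedded in a medium eps_m.
\[
  \alpha \;=\; 4\pi R^{3}\,
    \frac{\varepsilon(\omega)-\varepsilon_m}{\varepsilon(\omega)+2\varepsilon_m},
  \qquad
  \sigma_{\mathrm{ext}} \;=\; k\,\operatorname{Im}\alpha,
  \qquad
  k = \frac{2\pi\sqrt{\varepsilon_m}}{\lambda}.
\]
% Resonance (and hence the dominant color) near Re[eps(omega)] = -2 eps_m.
```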

  14. The effect of kauri (Agathis australis) on grain size distribution and clay mineralogy of andesitic soils in the Waitakere Ranges, New Zealand

    NARCIS (Netherlands)

    Jongkind, A.G.; Buurman, P.

    2006-01-01

    Kauri (Agathis australis) is generally associated with intense podzolisation, but little research has been carried out to substantiate this. We studied soil profiles, grain size distribution patterns and clay mineralogy under kauri and broadleaf/tree fern vegetation in the Waitakere Ranges, North

  15. Variability of carotid artery measurements on 3-Tesla MRI and its impact on sample size calculation for clinical research.

    Science.gov (United States)

    Syed, Mushabbar A; Oshinski, John N; Kitchen, Charles; Ali, Arshad; Charnigo, Richard J; Quyyumi, Arshed A

    2009-08-01

    Carotid MRI measurements are increasingly being employed in research studies for atherosclerosis imaging. The majority of carotid imaging studies use 1.5 T MRI. Our objective was to investigate intra-observer and inter-observer variability in carotid measurements using high-resolution 3 T MRI. We performed 3 T carotid MRI on 10 patients (age 56 ± 8 years, 7 male) with atherosclerosis risk factors and an ultrasound intima-media thickness ≥0.6 mm. A total of 20 transverse images of both the right and left carotid arteries were acquired using a T2-weighted black-blood sequence. The lumen and outer wall of the common carotid and internal carotid arteries were manually traced; vessel wall area, vessel wall volume, and average wall thickness measurements were then assessed for intra-observer and inter-observer variability. Pearson and intraclass correlations were used in these assessments, along with Bland-Altman plots. For inter-observer variability, Pearson correlations ranged from 0.936 to 0.996 and intraclass correlations from 0.927 to 0.991. For intra-observer variability, Pearson correlations ranged from 0.934 to 0.954 and intraclass correlations from 0.831 to 0.948. Calculations showed that inter-observer variability and other sources of error would inflate sample size requirements for a clinical trial by no more than 7.9%, indicating that 3 T MRI is nearly optimal in this respect. In patients with subclinical atherosclerosis, 3 T carotid MRI measurements are highly reproducible, which has important implications for clinical trial design.
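
    The quoted 7.9% inflation can be read off the standard sample-size formula: measurement error adds to the between-subject variance, and the required n scales with the total variance. The generic relation is shown below; no study-specific values are implied.

```latex
% Sample size per group for detecting a mean difference Delta scales with
% the total variance; measurement error sigma_e^2 inflates it:
\[
  n \;\propto\; \frac{\sigma^{2}_{\mathrm{true}} + \sigma^{2}_{e}}{\Delta^{2}},
  \qquad
  \text{inflation factor} \;=\; 1 + \frac{\sigma^{2}_{e}}{\sigma^{2}_{\mathrm{true}}}.
\]
```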

  16. Performance of diethylene glycol-based particle counters in the sub-3 nm size range

    CERN Document Server

    Wimmer, D; Franchin, A; Kangasluoma, J; Kreissl, F; Kürten, A; Kupc, A; Metzger, A; Mikkilä, J; Petäjä, J; Riccobono, F; Vanhanen, J; Kulmala, M; Curtius, J

    2013-01-01

    When studying new particle formation, the uncertainty in determining the "true" nucleation rate is considerably reduced when using condensation particle counters (CPCs) capable of measuring concentrations of aerosol particles at sizes close to or even at the critical cluster size (1–2 nm). Recently, CPCs able to reliably detect particles below 2 nm in size and even close to 1 nm became available. Using these instruments, the corrections needed for calculating nucleation rates are substantially reduced compared to scaling the observed formation rate to the nucleation rate at the critical cluster size. However, this improved instrumentation requires a careful characterization of their cut-off size and the shape of the detection efficiency curve because relatively small shifts in the cut-off size can translate into larger relative errors when measuring particles close to the cut-off size. Here we describe the development of two continuous-flow CPCs using diethylene glycol (DEG) as the working fluid. The desig...
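
    Characterizing a CPC cut-off in practice means fitting a smooth detection-efficiency curve to measured efficiency-versus-size points and reading off the diameter of 50% efficiency (d50). The sketch below fits one commonly used empirical activation form; both the functional form and the data points are assumptions for illustration, not the calibration of the DEG counters described here.

```python
# Sketch: fit a CPC detection-efficiency curve and extract d50.
# Functional form and "measurements" are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq, curve_fit

def efficiency(d, a, d0, b):
    """Empirical activation curve: zero below d0, saturating at plateau a."""
    return np.where(d > d0, a * (1.0 - np.exp(-(d - d0) / b)), 0.0)

d_nm = np.array([1.2, 1.5, 1.8, 2.2, 2.7, 3.5, 5.0])  # particle diameter, nm
eff  = np.array([0.02, 0.15, 0.35, 0.58, 0.75, 0.88, 0.95])

popt, _ = curve_fit(efficiency, d_nm, eff, p0=[1.0, 1.0, 1.0])
a, d0, b = popt
d50 = brentq(lambda d: efficiency(d, *popt) - 0.5, d0 + 1e-9, d_nm[-1])
print(f"plateau a = {a:.2f}, onset d0 = {d0:.2f} nm, d50 = {d50:.2f} nm")
```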

  17. Automated combustion accelerator mass spectrometry for the analysis of biomedical samples in the low attomole range.

    Science.gov (United States)

    van Duijn, Esther; Sandman, Hugo; Grossouw, Dimitri; Mocking, Johannes A J; Coulier, Leon; Vaes, Wouter H J

    2014-08-05

    The increasing role of accelerator mass spectrometry (AMS) in biomedical research necessitates modernization of the traditional sample handling process. AMS was originally developed and used for carbon dating, therefore focusing on very high precision but with a comparably low sample throughput. Here, we describe the combination of automated sample combustion with an elemental analyzer (EA) online coupled to an AMS via a dedicated interface. This setup allows direct radiocarbon measurements of over 70 samples daily by AMS. No sample processing is required apart from pipetting the sample into a tin foil cup, which is placed in the carousel of the EA. In our system, up to 200 AMS analyses are performed automatically without the need for manual interventions. We present results on direct total 14C count measurements in <2 μL human plasma samples. The method shows linearity over a range of 0.65-821 mBq/mL, with a lower limit of quantification of 0.65 mBq/mL (corresponding to 0.67 amol for acetaminophen). At these extremely low levels of activity, it becomes important to quantify plasma-specific carbon percentages. This carbon percentage is automatically generated upon combustion of a sample on the EA. Apparent advantages of the present approach include complete omission of sample preparation (reduced hands-on time) and fully automated sample analysis. These improvements clearly stimulate the standard incorporation of microtracer research in the drug development process. In combination with the particularly low sample volumes required and the extreme sensitivity, AMS strongly improves its position as a bioanalysis method.
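
    The link between the activity-based limit (mBq/mL) and the molar amount quoted for acetaminophen is the radioactive decay law: the number of 14C atoms follows from the activity and the 14C half-life. The worked numbers below are a back-of-the-envelope check, assuming one 14C label per molecule and a 2 μL sample; they reproduce the quoted sub-attomole order of magnitude.

```latex
% Activity -> number of atoms: A = lambda N, with lambda = ln(2) / t_half.
\[
  N \;=\; \frac{A\,t_{1/2}}{\ln 2},
  \qquad
  t_{1/2}(^{14}\mathrm{C}) \approx 5730\ \mathrm{yr} \approx 1.81\times10^{11}\ \mathrm{s}.
\]
% 0.65 mBq/mL  =>  N ~ (6.5e-4 * 1.81e11) / 0.693 ~ 1.7e8 atoms/mL;
% a 2 uL sample then holds ~3.4e5 atoms ~ 0.6 amol of labelled compound,
% the same order as the quoted 0.67 amol for acetaminophen.
```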

  18. Grain Growth in Samples of Aluminum Containing Alumina Particles

    DEFF Research Database (Denmark)

    Tweed, C. J.; Hansen, Niels; Ralph, B.

    1983-01-01

    A study of the two-dimensional and three-dimensional grain size distributions before and after grain growth treatments has been made in samples having a range of oxide contents. In order to collect statistically useful amounts of data, an automatic image analyzer was used and the resulting data w...

  19. Pb isotope analysis of ng size samples by TIMS equipped with a 10^13 Ω resistor using a 207Pb-204Pb double spike

    NARCIS (Netherlands)

    Klaver, M.; Smeets, R.J.; Koornneef, J.M.; Davies, G.R.; Vroon, P.Z.

    2016-01-01

    The use of the double spike technique to correct for instrumental mass fractionation has yielded high precision results for lead isotope measurements by thermal ionisation mass spectrometry (TIMS), but the applicability to ng size Pb samples is hampered by the small size of the

  20. Influence of grain size in the near-micrometre regime on the deformation microstructure in aluminium

    International Nuclear Information System (INIS)

    Le, G.M.; Godfrey, A.; Hansen, N.; Liu, W.; Winther, G.; Huang, X.

    2013-01-01

    The effect of grain size on deformation microstructure formation in the near-micrometre grain size regime has been studied using samples of aluminium prepared by a spark plasma sintering technique. Samples in a fully recrystallized condition with average grain sizes ranging from 5.2 to 0.8 μm were prepared using this technique. Examination of these samples in the transmission electron microscope after compression at room temperature to approximately 20% reduction reveals that grains larger than 7 μm are subdivided by cell block boundaries similar to those observed in coarse-grained samples, with a similar dependence on the crystallographic orientation of the grains. With decreasing grain size down to approx. 1 μm there is a gradual transition from cell block structures to cell structures. At even smaller grain sizes, down to approx. 0.5 μm, the dominant features are dislocation bundles and random dislocations, although at a larger compressive strain of 30% dislocation rotation boundaries may also be found in the interior of grains of this size. A standard 〈1 1 0〉 fibre texture is found for all grain sizes, with decreasing sharpness as the grain size decreases. The structural transitions with decreasing grain size are discussed based on the general principles of grain subdivision by deformation-induced dislocation boundaries and of low-energy dislocation structures, as applied to the hitherto unexplored near-micrometre grain size regime.