WorldWideScience

Sample records for sample size determination

  1. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set at 80% or more. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will waste time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts involved in estimating the sample size.
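
    As a minimal numerical sketch of the concepts described above (not part of the original abstract), the standard normal-approximation sample-size formula for comparing two group means can be evaluated directly. The effect size, standard deviation, alpha and power used below are illustrative assumptions only.

        # Sketch: per-group sample size for a two-sample comparison of means,
        # normal approximation; all inputs below are illustrative assumptions.
        from math import ceil
        from scipy.stats import norm

        alpha = 0.05      # accepted risk of type I error
        power = 0.80      # required statistical power
        delta = 0.5       # clinically relevant difference between groups (assumed)
        sigma = 1.0       # common standard deviation (assumed)

        z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96
        z_beta = norm.ppf(power)            # about 0.84

        n_per_group = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
        print(ceil(n_per_group))            # about 63 subjects per group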

  2. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
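
    The calculation outlined above can be reproduced approximately with a short iteration on t-based precision; the authors' exact procedure is not given in the abstract, so the sketch below is one plausible reading using the reported COV of 25% and an allowable error of ±15%.

        # Sketch: sample size so that the mean ED50 is estimated within +/- E
        # (as a fraction of the mean) with power 1-beta, given a COV.
        # Iterative because the t quantiles depend on n. Illustrative only.
        from math import ceil
        from scipy.stats import t

        cov, error, alpha, power = 0.25, 0.15, 0.05, 0.80

        n = 5
        for _ in range(100):
            quantiles = t.ppf(1 - alpha / 2, n - 1) + t.ppf(power, n - 1)
            n_new = ceil((quantiles * cov / error) ** 2)
            if n_new == n:
                break
            n = n_new
        print(n)   # converges to about 24, matching the value reported above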

  3. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  4. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)

  5. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  6. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  7. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method perform better than Sobel's method, but the distribution of the product method is recommended for use in practice because it requires less computation time than bootstrapping. An R package has been developed that implements the product method of sample size determination for longitudinal mediation study designs.
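
    For readers unfamiliar with the first of the three tests mentioned, the Sobel statistic for an indirect effect a*b has a simple closed form. The sketch below is a generic implementation with illustrative coefficient values; it is not taken from the R package referred to in the abstract.

        # Sketch: Sobel z-test for a mediation (indirect) effect a*b,
        # where a, b are path coefficients and se_a, se_b their standard errors.
        from math import sqrt
        from scipy.stats import norm

        def sobel_test(a, se_a, b, se_b):
            se_ab = sqrt(a**2 * se_b**2 + b**2 * se_a**2)
            z = (a * b) / se_ab
            p = 2 * (1 - norm.cdf(abs(z)))
            return z, p

        print(sobel_test(a=0.40, se_a=0.10, b=0.30, se_b=0.12))  # illustrative values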

  8. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. When the objective is to reject the null hypotheses for all endpoints and the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without correlation adjustment and the correlation-adjusted methods, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
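
    Under independence, the relationship between per-endpoint and overall power noted above is multiplicative, which already shows why powering each endpoint separately can fall short of the overall target. The sketch below works through that relationship with assumed numbers; it is an illustration, not the exact power function proposed in the article.

        # Sketch: with K independent endpoints that must all be rejected,
        # overall power is the product of the per-endpoint powers, so each
        # endpoint must be powered above the target overall power.
        K = 2
        target_overall_power = 0.80

        per_endpoint_power = target_overall_power ** (1 / K)
        print(per_endpoint_power)        # about 0.894 per endpoint for K = 2

        # Conversely, powering each endpoint at only 0.80 gives:
        print(0.80 ** K)                 # 0.64 overall, short of the target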

  9. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) availability of interim or group-sequential designs, and (iii) a much smaller required sample size.

  10. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  11. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre-randomization and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the ultra-conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced by at least 44%, 56%, and 61% for repeated measures analysis of covariance with 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
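
    The reported reduction factors are consistent with the Frison-Pocock variance expression for ANCOVA with one baseline and k follow-up measures under compound symmetry; the sketch below reproduces the 44%, 56% and 61% figures by maximizing that expression over the correlation. This is an assumed reading of the calculation, not necessarily the authors' exact derivation.

        # Sketch: worst-case (over the correlation rho) variance factor for
        # ANCOVA with one baseline and k follow-up measures, compound symmetry.
        # Relative to a single post-measurement, the variance factor is
        #   f(rho) = (1 + (k-1)*rho)/k - rho**2
        # (Frison & Pocock form with one baseline); reduction = 1 - max f.
        import numpy as np

        for k in (2, 3, 4):
            rho = np.linspace(0, 0.999, 10000)
            f = (1 + (k - 1) * rho) / k - rho**2
            print(k, round(1 - f.max(), 3))   # 0.438, 0.556, 0.609 -> ~44%, 56%, 61%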

  12. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards questions). In addition, the method allows different attribute rejection limits. The new method can assist in achieving a higher detection probability and enhance inspection effectiveness

  13. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  14. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  15. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  16. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  17. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  18. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.

  19. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  20. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly, by assuming that the population size is fixed and known, or implicitly, through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function quantifying the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    Full Text Available The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a discover all attribute categories relevant to the field and b yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
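
    A numerical simulation of the kind the software automates can be sketched in a few lines. The category probabilities, number of attributes elicited per grid and stopping criterion below are arbitrary assumptions for illustration, not values or code from the gridsampler tool itself.

        # Sketch: simulate how many grids are needed so that every category is
        # (a) discovered and (b) represented by a minimum number of attributes.
        import numpy as np

        rng = np.random.default_rng(1)
        category_probs = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])  # assumed
        attributes_per_grid = 7
        min_per_category = 5

        def grids_needed():
            counts = np.zeros(len(category_probs))
            n = 0
            while counts.min() < min_per_category:
                n += 1
                draws = rng.choice(len(category_probs), size=attributes_per_grid,
                                   p=category_probs)
                counts += np.bincount(draws, minlength=len(category_probs))
            return n

        print(np.mean([grids_needed() for _ in range(500)]))  # average sample size required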

  2. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency

  3. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  4. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  5. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
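
    A small worked example makes the fallacy concrete: with a very large sample, even a trivial standardized effect becomes "highly significant". The effect size and sample size below are illustrative assumptions only.

        # Sketch: a trivial effect (Cohen's d = 0.05) tested with a huge sample.
        from math import sqrt
        from scipy.stats import norm

        d = 0.05                  # trivial standardized effect size (assumed)
        n_per_group = 20000       # very large sample per group (assumed)

        z = d * sqrt(n_per_group / 2)          # approximate two-sample z statistic
        p = 2 * (1 - norm.cdf(z))
        print(z, p)   # z = 5.0, p < 1e-6: statistically, but not practically, significant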

  6. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above mentioned procedures

  7. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum number of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  8. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high diversity than in low diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N0 to N∞ ...

  9. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes", involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization

  10. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
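
    The classical cost-optimal allocation gives a feel for how the sample size ratio is manipulated; the sketch below uses the standard square-root rule for two groups with unequal standard deviations and per-subject costs, which is a generic illustration and may differ in detail from the authors' trimmed-mean formulas. The numbers are assumed.

        # Sketch: allocation ratio minimizing total cost c1*n1 + c2*n2 for a fixed
        # precision of the difference in means, with unequal SDs and unit costs.
        from math import sqrt

        sigma1, sigma2 = 12.0, 8.0     # assumed group standard deviations
        c1, c2 = 1.0, 4.0              # assumed per-subject sampling costs

        ratio = (sigma1 / sigma2) * sqrt(c2 / c1)   # optimal n1/n2
        print(ratio)                                # 3.0: sample group 1 three times as heavily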

  11. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands, supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not

  12. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    Full Text Available The objectives of this study were to determine the sample size (i.e., the number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted over two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed by the Kolmogorov-Smirnov test. Randomness was confirmed using the run test, and descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semi-amplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error, as a percentage of the estimated average, was calculated for each trait. Variability of the sample size for the pigeonpea crop was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to estimate the average of the traits (number of nodes, plant height and stem diameter) with an accuracy of 6% of the estimated mean across the different evaluation periods and seasons, at least 136 plants must be evaluated throughout the pigeonpea crop cycle.

  13. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the coefficient of variation tended to decrease as the sample size increased, while in the homogeneous gelatin-gel it remained constant at around 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (the smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area to image area and of maximum air-cell height to image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of the air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.

  14. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables)/standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
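
    The single-proportion calculation described above (expected proportion, precision, confidence level, and a finite-population correction when the study population is small) can be written out directly. The prevalence, margin of error and population size below are assumptions for illustration.

        # Sketch: sample size for estimating a proportion within +/- d,
        # with an optional finite population correction.
        from math import ceil
        from scipy.stats import norm

        p = 0.50          # expected proportion of the outcome (assumed)
        d = 0.05          # required precision (margin of error)
        conf = 0.95       # confidence level
        N = 2000          # size of the study population (assumed)

        z = norm.ppf(1 - (1 - conf) / 2)
        n0 = z**2 * p * (1 - p) / d**2            # infinite-population formula
        n = n0 / (1 + (n0 - 1) / N)               # finite population correction
        print(ceil(n0), ceil(n))                  # 385 without, 323 with the correction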

  15. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    OpenAIRE

    Heckmann, Mark; Burk, Lukas

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...

  16. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
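
    The abstract's two-step idea (fix the number of clusters, then solve for the mean cluster size) can be illustrated with the familiar equal-cluster-size design effect as the baseline; the noncentrality-based relative efficiency the authors define would then adjust this baseline for unequal cluster sizes. The ICC, number of clusters and individually randomized sample size below are assumptions.

        # Sketch: solve for the cluster size m so that k clusters per group give
        # the same power as n_ind individually randomized subjects per group,
        # using the equal-cluster-size design effect DE = 1 + (m - 1)*icc.
        # n_ind * DE = k * m  =>  m = n_ind * (1 - icc) / (k - n_ind * icc)
        from math import ceil

        n_ind = 100     # per-group size needed with individual randomization (assumed)
        icc = 0.02      # intracluster correlation coefficient (assumed)
        k = 8           # clusters available per group (assumed; must exceed n_ind*icc)

        m = n_ind * (1 - icc) / (k - n_ind * icc)
        print(ceil(m))  # required mean cluster size (about 17 here)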

  17. Towards traceable size determination of extracellular vesicles

    Directory of Open Access Journals (Sweden)

    Zoltán Varga

    2014-02-01

    Full Text Available Background: Extracellular vesicles (EVs) have clinical importance due to their roles in a wide range of biological processes. The detection and characterization of EVs are challenging because of their small size, low refractive index, and heterogeneity. Methods: In this manuscript, the size distribution of an erythrocyte-derived EV sample is determined using state-of-the-art techniques such as nanoparticle tracking analysis, resistive pulse sensing, and electron microscopy, and novel techniques in the field, such as small-angle X-ray scattering (SAXS) and size exclusion chromatography coupled with dynamic light scattering detection. Results: The mode values of the size distributions of the studied erythrocyte EVs reported by the different methods show only small deviations around 130 nm, but there are differences in the widths of the size distributions. Conclusion: SAXS is a promising technique with respect to traceability, as this technique was already applied for traceable size determination of solid nanoparticles in suspension. To reach the traceable measurement of EVs, monodisperse and highly concentrated samples are required.

  18. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  19. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
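
    The service-multiplier estimator and the way its random error depends on P can be sketched with a delta-method approximation; the design effect and target relative precision below are assumptions, uncertainty in M is ignored for simplicity, and this is not the authors' exact calculation.

        # Sketch: multiplier-method size estimate N = M / P and an approximate
        # RDS survey sample size for a target relative precision of N
        # (delta method, treating M as fixed and known).
        from math import ceil
        from scipy.stats import norm

        M = 3000               # unique objects distributed / service users counted (assumed)
        P = 0.15               # expected proportion reporting receipt in the RDS survey (assumed)
        deff = 2.0             # assumed design effect of the RDS survey
        rel_precision = 0.25   # target half-width of the 95% CI, relative to N
        z = norm.ppf(0.975)

        n = deff * z**2 * (1 - P) / (P * rel_precision**2)
        print(M / P, ceil(n))  # size estimate 20000; about 697 survey participants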

  20. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As a control, we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
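
    The inverse power law fit described can be reproduced with a weighted nonlinear least-squares call; the specific functional form and the toy learning-curve points below are assumptions for illustration, not data or code from the paper.

        # Sketch: fit an inverse power law y = a - b * x**(-c) to early
        # learning-curve points (weighted nonlinear least squares),
        # then extrapolate classifier performance to larger sample sizes.
        import numpy as np
        from scipy.optimize import curve_fit

        def inv_power_law(x, a, b, c):
            return a - b * x**(-c)

        # Toy learning curve: sample sizes, observed accuracy, and its standard error
        x = np.array([50, 100, 200, 400, 800], dtype=float)
        y = np.array([0.70, 0.76, 0.81, 0.84, 0.86])
        se = np.array([0.04, 0.03, 0.02, 0.015, 0.01])

        params, _ = curve_fit(inv_power_law, x, y, p0=[0.9, 1.0, 0.5],
                              sigma=se, absolute_sigma=True)
        print(params)
        print(inv_power_law(5000, *params))   # predicted accuracy at 5000 annotated samples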

  1. Multi-actinide analysis with AMS for ultra-trace determination and small sample sizes: advantages and drawbacks

    Energy Technology Data Exchange (ETDEWEB)

    Quinto, Francesca; Lagos, Markus; Plaschke, Markus; Schaefer, Thorsten; Geckeis, Horst [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology (Germany); Steier, Peter; Golser, Robin [VERA Laboratory, Faculty of Physics, University of Vienna (Austria)

    2016-07-01

    With the abundance sensitivities of AMS for U-236, Np-237 and Pu-239 relative to U-238 at levels lower than 1E-15, a simultaneous determination of several actinides without previous chemical separation from each other is possible. The actinides are extracted from the matrix elements via an iron hydroxide co-precipitation and the nuclides sequentially measured from the same sputter target. This simplified method allows for the use of non-isotopic tracers and consequently the determination of Np-237 and Am-243 for which isotopic tracers with the degree of purity required by ultra-trace mass-spectrometric analysis are not available. With detection limits of circa 1E+4 atoms in a sample, 1E+8 atoms are determined with circa 1 % relative uncertainty due to counting statistics. This allows for an unprecedented reduction of the sample size down to 100 ml of natural water. However, the use of non-isotopic tracers introduces a dominating uncertainty of up to 30 % related to the reproducibility of the results. The advantages and drawbacks of the novel method will be presented with the aid of recent results from the CFM Project at the Grimsel Test Site and from the investigation of global fallout in environmental samples.

  2. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety.

    Science.gov (United States)

    Kikuchi, Takashi; Gittins, John

    2009-08-15

    The calculation of sample size must strike the best balance between the cost of a clinical trial and the possible benefits from a new treatment. Gittins and Pezeshk developed an innovative (behavioral Bayes) approach, which assumes that the number of users is an increasing function of the difference in performance between the new treatment and the standard treatment. The better a new treatment, the greater the number of patients who want to switch to it. The optimal sample size is calculated in this framework. This BeBay approach takes account of three decision-makers: a pharmaceutical company, the health authority and medical advisers. Kikuchi, Pezeshk and Gittins generalized this approach by introducing a logistic benefit function, by extending it to the more usual unpaired case, and by allowing for unknown variance. The expected net benefit in this model is based on the efficacy of the new drug but does not take account of the incidence of adverse reactions. The present paper extends the model to include the costs of treating adverse reactions and focuses on societal cost-effectiveness as the criterion for determining sample size. The main application is likely to be to phase III clinical trials, for which the primary outcome is to compare the costs and benefits of a new drug with a standard drug in relation to national health-care. Copyright 2009 John Wiley & Sons, Ltd.

  3. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample sizes ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times their actual values extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  5. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....

  6. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated and despite the fact that certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
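
    A hedged sketch of the general idea, resampling validation sets of a candidate size and checking how stable the AUC (computed from the Mann-Whitney statistic) is; the authors' algorithm additionally evaluates smoothed calibration curves and the estimated calibration index, which are omitted here, and the data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)

        def auc(scores, outcomes):
            """Empirical AUC via the Mann-Whitney statistic."""
            pos, neg = scores[outcomes == 1], scores[outcomes == 0]
            greater = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (greater + 0.5 * ties) / (len(pos) * len(neg))

        # Synthetic "source population": risk scores with a logistic relation to the outcome.
        scores = rng.normal(size=100_000)
        outcomes = rng.binomial(1, 1.0 / (1.0 + np.exp(-(scores - 1.0))))

        def auc_interval(n, n_boot=500):
            """2.5th and 97.5th percentiles of the AUC over bootstrap validation samples of size n."""
            draws = rng.integers(0, len(scores), size=(n_boot, n))
            aucs = [auc(scores[idx], outcomes[idx]) for idx in draws]
            return np.percentile(aucs, [2.5, 97.5])

        for n in (200, 500, 1000, 2000):
            print(n, np.round(auc_interval(n), 3))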

  7. Size determinations of plutonium colloids using autocorrelation photon spectroscopy

    International Nuclear Information System (INIS)

    Triay, I.R.; Rundberg, R.S.; Mitchell, A.J.; Ott, M.A.; Hobart, D.E.; Palmer, P.D.; Newton, T.W.; Thompson, J.L.

    1989-01-01

    Autocorrelation Photon Spectroscopy (APS) is a light-scattering technique utilized to determine the size distribution of colloidal suspensions. The capabilities of the APS methodology have been assessed by analyzing colloids of known sizes. Plutonium(IV) colloid samples were prepared by a variety of methods including dilution, peptization, and alpha-induced auto-oxidation of Pu(III). The size of these Pu colloids was analyzed using APS. The sizes determined for the Pu colloids studied varied from 1 to 370 nanometers. 7 refs., 5 figs., 3 tabs
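
    APS (dynamic light scattering) ultimately converts a measured diffusion coefficient into a hydrodynamic size through the Stokes-Einstein relation d = k_B·T/(3π·η·D); a generic illustration with arbitrary values, not numbers from the cited work:

        import math

        def hydrodynamic_diameter(diffusion_m2_per_s, temperature_k=298.15, viscosity_pa_s=8.9e-4):
            """Stokes-Einstein: d = k_B * T / (3 * pi * eta * D), for spherical particles."""
            k_b = 1.380649e-23   # Boltzmann constant, J/K
            return k_b * temperature_k / (3 * math.pi * viscosity_pa_s * diffusion_m2_per_s)

        # A diffusion coefficient of 4.9e-12 m^2/s in water at 25 degrees C corresponds to ~100 nm.
        print(f"{hydrodynamic_diameter(4.9e-12) * 1e9:.0f} nm")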

  8. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  9. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive power function and discuss sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose power up to 20%, depending on the value of the dispersion parameter.

  10. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across the Azov-Black Seas localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and for understanding the expected precision level of the mean. Given the superior performance of the BLB relative to formulae, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to 1.6x and 1x the mean as CI width, and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed towards low values; a sample size of 10 host individuals yielded unreliable estimates.

  11. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    Science.gov (United States)

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
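
    A simplified two-level sketch of the practical consequence (not the authors' formulae): the clustered experimental arm carries a design effect 1 + (m - 1)ρ while the individually randomized control arm does not, so a normal-approximation calculation can be inflated accordingly; all parameter values below are assumptions.

        import math
        from scipy.stats import norm

        def n_per_arm_partially_clustered(delta, sigma, cluster_size, icc, alpha=0.05, power=0.80):
            """Simplified sample size per arm when only the experimental arm is clustered:
            the design effect 1 + (m - 1) * icc is applied to that arm alone."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            design_effect = 1 + (cluster_size - 1) * icc
            return math.ceil(z**2 * sigma**2 * (design_effect + 1) / delta**2)

        # Assumed effect of 0.4 SD, clusters of 8 patients, intraclass correlation 0.05:
        print(n_per_arm_partially_clustered(delta=0.4, sigma=1.0, cluster_size=8, icc=0.05))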

  12. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants, and the others were obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a 95% confidence interval width of 0.4, it is necessary to sample 275 plants in a 250m² greenhouse, and 200 plants in a 200m² greenhouse.
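
    The resampling scheme described above is easy to reproduce: for each candidate sample size, bootstrap Pearson's r and stop once the 95% CI width falls to 0.4. A sketch on synthetic bivariate data (not the tomato measurements):

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic plant data with a moderate true correlation (about 0.5).
        x = rng.normal(size=400)
        y = 0.5 * x + rng.normal(scale=np.sqrt(0.75), size=400)

        def ci_width_of_r(n, n_boot=3000):
            """Width of the bootstrap 95% CI of Pearson's r for resamples of size n."""
            rs = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(x), size=n)
                rs.append(np.corrcoef(x[idx], y[idx])[0, 1])
            lo, hi = np.percentile(rs, [2.5, 97.5])
            return hi - lo

        n = 10
        while ci_width_of_r(n) > 0.4:
            n += 5
        print("smallest sample size with 95% CI width <= 0.4:", n)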

  13. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  14. Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.

    Science.gov (United States)

    Power, Stephanie M; Matic, Damir B

    2013-03-01

    Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represent average outcomes. Secondary objectives are to determine if outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
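
    The sample-size side of this argument is a plain confidence-interval half-width calculation: the number of consecutive cases n must satisfy t·SD/√n ≤ the tolerated error. A sketch with an assumed rating SD (the paper's actual SD is not quoted here):

        import math
        from scipy.stats import t

        def n_for_halfwidth(sd, halfwidth, conf=0.95):
            """Smallest n whose 95% CI half-width around a mean rating is <= `halfwidth`."""
            n = 2
            while t.ppf(1 - (1 - conf) / 2, df=n - 1) * sd / math.sqrt(n) > halfwidth:
                n += 1
            return n

        assumed_sd = 1.3   # assumed spread of ratings on the 10-point scale
        print(n_for_halfwidth(assumed_sd, 0.5))   # consecutive cases needed for +/-0.5 precision
        print(n_for_halfwidth(assumed_sd, 0.3))   # consecutive cases needed for +/-0.3 precision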

  15. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  16. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: how capable are such stability interpretations in drawing conclusions? There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale, and (iii) that the stability interpretation might not be directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional scale stability variations will be quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset itself constitutes an appropriate sample size. (ii) Smaller subsets were created with similar

  17. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes of 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
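
    One of the recommendations, replacing the pilot SD with an upper confidence limit, has a closed form, SD_UCL = SD·sqrt((n-1)/chi2_{alpha, n-1}); the sketch below shows how much this inflates a standard two-sample calculation (all numbers are illustrative assumptions, not the paper's data):

        import math
        from scipy.stats import chi2, norm

        def sd_upper_limit(sd, n_pilot, level=0.80):
            """One-sided upper confidence limit for the population SD from a pilot sample."""
            return sd * math.sqrt((n_pilot - 1) / chi2.ppf(1 - level, df=n_pilot - 1))

        def n_per_group(sd, delta, alpha=0.05, power=0.80):
            """Normal-approximation sample size per group for comparing two means."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil(2 * (z * sd / delta) ** 2)

        pilot_sd, n_pilot, delta = 44.0, 20, 22.0         # assumed pilot values (Cohen's d = 0.5)
        print(n_per_group(pilot_sd, delta))               # planning with the pilot SD itself
        print(n_per_group(sd_upper_limit(pilot_sd, n_pilot), delta))   # planning with its 80% UCL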

  18. Experimental-calculation technique for Ksub(IC) determination using the samples of decreased dimensions

    International Nuclear Information System (INIS)

    Vinokurov, V.A.; Dymshits, A.V.; Pirusskij, M.V.; Ovsyannikov, B.M.; Kononov, V.V.

    1981-01-01

    The possibility of decreasing the sample size necessary for reliable determination of fracture toughness Ksub(1c) is established. The dependences of crack-resistance characteristics on the sample dimensions are determined experimentally. The static bending tests are made using the model 1251 ''Instron'' machine with a specially designed device. Samples of 20KhNMF steel have been tested. It is shown that the Ksub(1c) value determined for the samples with the largest net cross section (50x100 mm) is considerably lower than the Ksub(1c) values determined for the samples of decreased sizes. It is shown that the developed experimental-calculation method of Ksub(1c) determination can be used in practice for samples of decreased sizes with the introduction of a corresponding correction coefficient. [ru

  19. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
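
    For orientation, the classical unmatched case-control calculation for a single binary exposure (before the confounder adjustment the package handles) is the familiar two-proportion formula; a minimal sketch, not the package's own method:

        import math
        from scipy.stats import norm

        def n_cases(p_exposed_controls, odds_ratio, alpha=0.05, power=0.80, ratio=1.0):
            """Cases needed (controls = ratio * cases) to detect an odds ratio for a binary
            exposure, using the classical two-proportion normal approximation."""
            p0 = p_exposed_controls
            p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # exposure prevalence among cases
            p_bar = (p1 + ratio * p0) / (1 + ratio)
            z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
            num = (z_a * math.sqrt((1 + 1 / ratio) * p_bar * (1 - p_bar))
                   + z_b * math.sqrt(p1 * (1 - p1) + p0 * (1 - p0) / ratio)) ** 2
            return math.ceil(num / (p1 - p0) ** 2)

        print(n_cases(p_exposed_controls=0.30, odds_ratio=2.0))   # roughly 140 cases and 140 controls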

  20. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.

  2. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
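
    The "magic number 5" rests on the cumulative detection model P(found) = 1 - (1 - p)^n with a per-user problem-detection probability p often quoted around 0.31; a quick check of how strongly the required number of users depends on p:

        def proportion_found(p_detect, n_users):
            """Expected share of usability problems uncovered by n users under the classic model."""
            return 1 - (1 - p_detect) ** n_users

        for p in (0.31, 0.15, 0.05):       # 0.31 is the often-quoted average; the others are lower
            n = 1
            while proportion_found(p, n) < 0.80:
                n += 1
            print(f"p = {p}: {n} users needed to uncover 80% of problems")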

  3. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and overall health. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, the measurements of the size of the feet and footwear were determined, and the scores were compared between the group that wears the correct shoe size and the group that does not, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  4. Pore size determination from charged particle energy loss measurement

    International Nuclear Information System (INIS)

    Brady, F.P.; Armitage, B.H.

    1977-01-01

    A new method aimed at measuring porosity and mean pore size in materials has been developed at Harwell. The energy width or variance of a transmitted or backscattered charged particle beam is measured and related to the mean pore size via the assumption that the variance in total path length in the porous material is given by (Δx²) = na², where n is the mean number of pores and a the mean pore size. It is shown on the basis of a general and rigorous theory of total path length distribution that this approximation can give rise to large errors in the mean pore size determination particularly in the case of large porosities (epsilon>0.5). In practice it is found that it is not easy to utilize fully the general theory because accurate measurements of the first four moments are required to determine the means and variances of the pore and inter-pore length distributions. Several models for these distributions are proposed. When these are incorporated in the general theory the determinations of mean pore size from experimental measurements on powder samples are in good agreement with values determined by other methods. (Auth.)

  5. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  6. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  7. Determination of subcellular compartment sizes for estimating dose variations in radiotherapy

    International Nuclear Information System (INIS)

    Poole, Christopher M.; Ahnesjo, Anders; Enger, Shirin A.

    2015-01-01

    The variation in specific energy absorbed to different cell compartments caused by variations in size and chemical composition is poorly investigated in radiotherapy. The aim of this study was to develop an algorithm to derive cell and cell nuclei size distributions from 2D histology samples, and build 3D cellular geometries to provide Monte Carlo (MC)-based dose calculation engines with a morphologically relevant input geometry. Stained and unstained regions of the histology samples are segmented using a Gaussian mixture model, and individual cell nuclei are identified via thresholding. Delaunay triangulation is applied to determine the distribution of distances between the centroids of nearest neighbour cells. A pouring simulation is used to build a 3D virtual tissue sample, with cell radii randomised according to the cell size distribution determined from the histology samples. A slice with the same thickness as the histology sample is cut through the 3D data and characterised in the same way as the measured histology. The comparison between this virtual slice and the measured histology is used to adjust the initial cell size distribution into the pouring simulation. This iterative approach of a pouring simulation with adjustments guided by comparison is continued until an input cell size distribution is found that yields a distribution in the sliced geometry that agrees with the measured histology samples. The thus obtained morphologically realistic 3D cellular geometry can be used as input to MC-based dose calculation programs for studies of dose response due to variations in morphology and size of tumour/healthy tissue cells/nuclei, and extracellular material. (authors)

  8. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed to increase the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.

  9. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  10. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  11. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment; Environmental Protection Agency; Air Programs; Regulation of Fuels and Fuel Additives; Attest Engagements; § 80.127 Sample size guidelines. In performing the...

  12. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.

  13. Determination of grain size by XRD profile analysis and TEM counting in nano-structured Cu

    International Nuclear Information System (INIS)

    Zhong Yong; Ping Dehai; Song Xiaoyan; Yin Fuxing

    2009-01-01

    In this work, a series of pure copper samples with different grain sizes from nano- to micro-scale were prepared by spark plasma sintering (SPS) followed by annealing treatment at 873 K and 1073 K, respectively. The grain size distributions of these samples were determined by both X-ray diffraction (XRD) profile analysis and transmission electron microscopy (TEM) micrograph counting. Although these two methods give similar distributions of grain size in the case of the as-SPS sample with nano-scale grain size (around 10 nm), there are apparent discrepancies between the grain size distributions of the annealed samples obtained from XRD and TEM, especially for the sample annealed at 1073 K after SPS with micro-scale grain size (around 2 μm), for which TEM counting provides much higher values of grain size than XRD analysis does. This indicates that for large-grained material, XRD analysis loses its validity for determination of grain size. This might be due to small-sized substructures possibly existing even in the annealed (large-grained) samples, whereas there are no substructures in the as-SPS (nanocrystalline) sample. Moreover, it has been found that the effective outer cut-off radius Re derived from XRD analysis coincides with the grain sizes given by TEM counting. The potential relationship between grain size and Re is discussed in the present work. These results might provide some new hints for a deeper understanding of the physical meaning of XRD analysis and the parameters derived from it.
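
    For readers unfamiliar with XRD size analysis, the simplest estimate from peak broadening is the Scherrer relation D = Kλ/(β·cosθ); full profile analysis, as used in work like the above, separates size from strain broadening, but the Scherrer form already shows why micrometre-scale grains produce broadening too small to resolve. The values below are generic Cu Kα numbers, not data from the paper:

        import math

        def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
            """Crystallite size from the FWHM of a diffraction peak (Cu K-alpha by default)."""
            beta = math.radians(fwhm_deg)          # peak breadth in radians
            theta = math.radians(two_theta_deg / 2)
            return k * wavelength_nm / (beta * math.cos(theta))

        print(f"{scherrer_size_nm(0.85, 43.3):.0f} nm")  # ~10 nm: strong, easily measured broadening
        print(f"{scherrer_size_nm(0.05, 43.3):.0f} nm")  # ~170 nm: broadening near typical instrumental widths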

  14. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  15. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  16. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us judge whether the published results in medical papers had a suitable design and a proper conclusion according to the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula to use, we must define what kind of study we have: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.

  17. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  18. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster LQAS (C-LQAS) system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.

  19. What big size you have! Using effect sizes to determine the impact of public health nursing interventions.

    Science.gov (United States)

    Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A

    2013-01-01

    The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
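
    Cohen's d for an admission-to-discharge change is simply the mean change divided by the SD of the change scores, so it can be reported alongside the p-value with one extra line of analysis; a minimal illustration on made-up 1-5 ratings (not Omaha System data):

        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(0)
        admission = rng.integers(1, 5, size=300).astype(float)            # simulated scores at admission
        discharge = np.clip(admission + rng.normal(0.6, 1.0, 300), 1, 5)  # assumed average gain of 0.6

        change = discharge - admission
        d = change.mean() / change.std(ddof=1)         # Cohen's d for paired (change-score) data
        t_stat, p_value = ttest_rel(discharge, admission)
        print(f"mean change = {change.mean():.2f}, Cohen's d = {d:.2f}, p = {p_value:.1e}")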

  20. Sex determination by tooth size in a sample of Greek population.

    Science.gov (United States)

    Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C

    2014-08-01

    Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of the Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had a clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations and this might explain the lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that there was a considerably higher percentage of females correctly classified than males. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as reliable a method as one would have expected for determining the sex of human remains from a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.

  1. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using the example of a whitebark pine population. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) in the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95 %. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, No. OI-173011, No. TR-37002 and No. III-43007]

  2. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests with a one-factor, two-level design, including sample size estimation formulas and their realization via the formulas and the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.
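
    For the simplest case mentioned, two independent groups and a quantitative outcome, the familiar normal-approximation form is n per group = 2(z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2; a sketch of that calculation (slightly smaller than the exact t-based answer that SAS PROC POWER would report):

        import math
        from scipy.stats import norm

        def n_per_group(delta, sigma, alpha=0.05, power=0.80):
            """Normal-approximation sample size per group, one factor with two levels."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil(2 * (z * sigma / delta) ** 2)

        print(n_per_group(delta=5.0, sigma=10.0))   # about 63 per group for alpha = 0.05, power = 0.80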

  3. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components in planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation differs between study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary out...

  4. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  5. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
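
    Because lambda is the dominant eigenvalue of the projection matrix, the sampling effect can be mimicked by re-estimating vital rates from samples of different sizes and recomputing the eigenvalue; a compact sketch with an invented two-stage matrix, not the authors' plant data:

        import numpy as np

        rng = np.random.default_rng(0)

        def lambda_from_sample(n, surv_juv=0.5, surv_adult=0.8, fecundity=1.2):
            """Estimate vital rates from n individuals per stage, then return the dominant
            eigenvalue of the resulting 2x2 stage-structured projection matrix."""
            s_j = rng.binomial(n, surv_juv) / n        # estimated juvenile survival
            s_a = rng.binomial(n, surv_adult) / n      # estimated adult survival
            f = rng.poisson(fecundity * n) / n         # estimated per-capita fecundity
            matrix = np.array([[0.0, f], [s_j, s_a]])
            return max(abs(np.linalg.eigvals(matrix)))

        for n in (10, 25, 50, 100, 500):
            lams = [lambda_from_sample(n) for _ in range(2000)]
            # true lambda for the assumed rates is about 1.27; spread (and any bias) shrinks with n
            print(n, round(float(np.mean(lams)), 3), round(float(np.std(lams)), 3))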

  6. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  7. Determining wood chip size: image analysis and clustering methods

    Directory of Open Access Journals (Sweden)

    Paolo Febbi

    2013-09-01

    Full Text Available One of the standard methods for the determination of the size distribution of wood chips is the oscillating screen method (EN 15149-1:2010). Recent literature demonstrated how image analysis could return highly accurate measures of the dimensions defined for each individual particle, and could promote a new method depending on the geometrical shape to determine the chip size in a more accurate way. A sample of wood chips (8 litres) was sieved through horizontally oscillating sieves, using five different screen hole diameters (3.15, 8, 16, 45, 63 mm); the wood chips were sorted in decreasing size classes and the mass of all fractions was used to determine the size distribution of the particles. Since the chip shape and size influence the sieving results, Wang's theory, which concerns the geometric forms, was considered. A cluster analysis on the shape descriptors (Fourier descriptors) and size descriptors (area, perimeter, Feret diameters, eccentricity) was applied to observe the chip distribution. The UPGMA algorithm was applied on the Euclidean distance. The obtained dendrogram shows a group separation according to the original three sieving fractions. A comparison has been made between the traditional sieve and clustering results. This preliminary result shows how the image analysis-based method has a high potential for the characterization of wood chip size distribution and could be further investigated. Moreover, this method could be implemented in an online detection machine for chip size characterization. An improvement of the results is expected by using supervised multivariate methods that utilize known class memberships. The main objective of the future activities will be to shift the analysis from a 2-dimensional method to a 3-dimensional acquisition process.
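    As an illustration of the clustering step described above, the sketch below runs UPGMA (average-linkage) hierarchical clustering on a Euclidean distance matrix of standardised shape/size descriptors using SciPy. The descriptor matrix is synthetic; real input would be the Fourier and size descriptors measured from the chip images.

```python
# Sketch (not the paper's code): UPGMA clustering of wood-chip descriptors.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Synthetic 30 x 5 descriptor matrix standing in for measured values
# (e.g. area, perimeter, Feret diameters, eccentricity, a Fourier descriptor).
descriptors = np.vstack([
    rng.normal(loc=centre, scale=0.3, size=(10, 5))  # three synthetic sieving fractions
    for centre in (1.0, 2.0, 3.5)
])

# Standardise so that no single descriptor dominates the Euclidean distance.
z = (descriptors - descriptors.mean(axis=0)) / descriptors.std(axis=0)

# UPGMA corresponds to average linkage on Euclidean distances.
tree = linkage(pdist(z, metric="euclidean"), method="average")

# Cut the dendrogram into three groups, mirroring the three sieving fractions.
groups = fcluster(tree, t=3, criterion="maxclust")
print(groups)
```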

  8. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…

  9. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentration of the main elements in sample 75081 does not change with the grain-size. Exceptions are Fe and Ti which decrease slightly and Al which increases slightly with the decrease in the grain-size. These changes in the composition in main elements suggest a decrease in Ilmenite and an increase in Anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes less than a factor of 2. Samples 72501 and 72461 are not yet analyzed for the main elements. (Auth.)

  10. Determination of cluster size of Pratylenchus Penetrans ...

    African Journals Online (AJOL)

    A nursery field 21 m x 80 m was sampled sequentially for Pratylenchus penetrans by decreasing the plot sizes systematically. Plot sizes of 3.6 m x 8 m, 3.6 m x 3.6 m and 0.6 m x 0.6 m were sampled. Nematode counts were computed to obtain the respective sample mean and variance. The sample mean and variance ...

  11. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
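    The abstract refers to a non-parametric test of cumulative aspect-ratio distributions without naming it here; a two-sample Kolmogorov-Smirnov test is one such test and is used in the hedged sketch below on synthetic aspect-ratio data. The study itself may have used a different test.

```python
# Hedged sketch: two-sample Kolmogorov-Smirnov comparison of aspect-ratio
# distributions for two nanorod samples. The aspect ratios are synthetic
# placeholders, not data from the interlaboratory study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
aspect_ratio_a = rng.normal(loc=3.6, scale=0.4, size=250)  # sample A (synthetic)
aspect_ratio_b = rng.normal(loc=3.9, scale=0.6, size=250)  # sample B (synthetic)

statistic, p_value = stats.ks_2samp(aspect_ratio_a, aspect_ratio_b)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3g}")
```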

  12. Determination of Size Distributions in Nanocrystalline Powders by TEM, XRD and SAXS

    DEFF Research Database (Denmark)

    Jensen, Henrik; Pedersen, Jørgen Houe; Jørgensen, Jens Erik

    2006-01-01

    Crystallite size distributions and particle size distributions were determined by TEM, XRD, and SAXS for three commercially available TiO2 samples and one homemade. The theoretical Guinier Model was fitted to the experimental data and compared to analytical expressions. Modeling of the XRD spectra...... the size distribution obtained from the XRD experiments; however, a good agreement was obtained between the two techniques. Electron microscopy, SEM and TEM, confirmed the primary particle sizes, the size distributions, and the shapes obtained by XRD and SAXS. The SSEC78 powder and the commercially...

  13. Recommendations for plutonium colloid size determination

    International Nuclear Information System (INIS)

    Kosiewicz, S.T.

    1984-02-01

    This report presents recommendations for plutonium colloid size determination and summarizes a literature review, discussions with other researchers, and comments from equipment manufacturers. Four techniques suitable for plutonium colloid size characterization are filtration and ultrafiltration, gel permeation chromatography, diffusion methods, and high-pressure liquid chromatography (conditionally). Our findings include the following: (1) Filtration and ultrafiltration should be the first methods used for plutonium colloid size determination because they can provide the most rapid results with the least complicated experimental arrangement. (2) After expertise has been obtained with filtering, gel permeation chromatography should be incorporated into the colloid size determination program. (3) Diffusion methods can be used next. (4) High-pressure liquid chromatography will be suitable after appropriate columns are available. A plutonium colloid size characterization program with filtration/ultrafiltration and gel permeation chromatography has been initiated

  14. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
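    As a concrete illustration of the calculations this article describes, the sketch below implements the two textbook normal-approximation formulas for a two-arm trial with equal allocation: one for a continuous primary outcome and one for a proportion. The numbers in the example calls are illustrative assumptions, not values from the article.

```python
# Minimal sketch of the two standard sample-size formulas for a randomized
# controlled trial (equal allocation, normal approximation).
import math
from scipy.stats import norm

def n_per_group_continuous(delta, sd, alpha=0.05, power=0.80):
    """Continuous outcome: n per group to detect a mean difference `delta`
    given a common standard deviation `sd`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sd * z / delta) ** 2)

def n_per_group_proportions(p1, p2, alpha=0.05, power=0.80):
    """Binary outcome: n per group to detect a difference between event
    proportions p1 and p2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_per_group_continuous(delta=5, sd=12))      # roughly 91 per group
print(n_per_group_proportions(p1=0.30, p2=0.45))   # roughly 160 per group
```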

  15. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  16. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1g. Errors resulting from the sampling are barely evaluated, since this type of study is a time-consuming procedure, with high costs for the chemical analysis of large number of samples. The energy dispersion X-ray fluorescence - EDXRF is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from the Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled until 0.5 mm particle size. Ten test-portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated from the reference materials IAEA V10 Hay Powder, SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and the others, respectively. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  17. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional to randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  18. Sample size effect on the determination of the irreversibility line of high-Tc superconductors

    International Nuclear Information System (INIS)

    Li, Q.; Suenaga, M.; Li, Q.; Freltoft, T.

    1994-01-01

    The irreversibility lines of a high-Jc superconducting Bi2Sr2Ca2Cu3Ox/Ag tape were systematically measured upon a sequence of subdivisions of the sample. The irreversibility field Hr(T) (parallel to the c axis) was found to change approximately as L^0.13, where L is the effective dimension of the superconducting tape. Furthermore, it was found that the irreversibility line for a grain-aligned Bi2Sr2Ca2Cu3Ox specimen can be approximately reproduced by the extrapolation of this relation down to a grain size of a few tens of micrometers. The observed size effect could significantly obscure the real physical meaning of the irreversibility lines. In addition, this finding surprisingly indicated that the Bi2Sr2Ca2Cu3Ox/Ag tape and grain-aligned specimen may have similar flux line pinning strength.

  19. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO2 powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  20. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  1. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    Negative binomial model has been increasingly used to model the count data in recent clinical trials. It is frequently chosen over Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying negative binomial model in clinical trial design is the sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
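    For orientation, the sketch below implements one commonly quoted explicit sample-size formula for a Wald test of the log rate ratio under a negative binomial model, with the variance evaluated under the alternative. It is a hedged illustration of this class of formula, not a reproduction of the exact variations derived in the paper; the rates, dispersion and exposure in the example are made up.

```python
# Hedged sketch: sample size per arm for comparing two negative binomial
# event rates via a Wald test on the log rate ratio (equal allocation,
# common dispersion k, per-subject exposure time t).
import math
from scipy.stats import norm

def n_per_arm_negbin(rate1, rate2, dispersion, exposure=1.0,
                     alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Approximate variance of log(rate2/rate1), evaluated under the alternative.
    var = (1.0 / (exposure * rate1) + dispersion) + \
          (1.0 / (exposure * rate2) + dispersion)
    return math.ceil(z ** 2 * var / math.log(rate2 / rate1) ** 2)

# Example: annualised event rates 0.8 vs 0.6, dispersion 0.7, 1-year exposure.
print(n_per_arm_negbin(rate1=0.8, rate2=0.6, dispersion=0.7, exposure=1.0))
```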

  2. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference test for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference test for quantitative and qualitative data with the above three designs, the realization based on the formulas and the POWER procedure of SAS software and elaborated it with examples, which will benefit researchers for implementing the repetition principle.

  3. Technique for determining training staff size

    International Nuclear Information System (INIS)

    Frye, S.R.

    1985-01-01

    Determining an adequate training staff size is a vital function of a training manager. Today's training requirements and standards have dictated a more stringent work load than ever before. A trainer's role is more than just providing classroom lectures. In most organizations the instructor must develop programs, lesson plans, exercise guides, objectives, test questions, etc. The tasks of a training organization are never ending and the appropriate resources must be determined and allotted to do the total job. A simple method exists for determining an adequate staff. Although not perfect, this method will provide a realistic approach for determining the needed training staff size. This method considers three major factors: instructional man-hours; non-instructional man-hours; and instructor availability. By determining and adding instructional man-hours and non-instructional man-hours a total man-hour distribution can be obtained. By dividing this by instructor availability a staff size can be determined
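    The staffing method described above is simple arithmetic, as the short sketch below shows; the workload figures are illustrative assumptions only.

```python
# Illustrative staff-size calculation following the method described above:
# (instructional + non-instructional man-hours) / instructor availability.
instructional_hours = 3200       # classroom/simulator delivery per year (assumed)
non_instructional_hours = 4800   # development, lesson plans, exams, admin (assumed)
hours_per_instructor = 1600      # annual availability per instructor (assumed)

total_hours = instructional_hours + non_instructional_hours
staff_needed = total_hours / hours_per_instructor
print(f"Required training staff: {staff_needed:.1f}")   # -> 5.0
```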

  4. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  5. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    Science.gov (United States)

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. This was a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved.
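    To make the z-test approach mentioned above concrete, the sketch below gives a generic normal-approximation sample-size calculation for a one-sample non-inferiority comparison of a failure probability against an external benchmark plus margin. It is a textbook-style approximation, not the simulation-based procedure of the paper, and the benchmark, margin and assumed failure probability are illustrative.

```python
# Hedged sketch: normal-approximation sample size for a one-sample
# non-inferiority test of a device failure probability against an external
# benchmark plus margin. H0: p >= p_benchmark + margin; H1: p < p_benchmark + margin.
import math
from scipy.stats import norm

def n_noninferiority_one_sample(p_true, p_benchmark, margin,
                                alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return math.ceil(z ** 2 * p_true * (1 - p_true)
                     / (p_benchmark + margin - p_true) ** 2)

# Example: 5% benchmark failure, 2% non-inferiority margin, assuming the
# device truly performs at the benchmark level.
print(n_noninferiority_one_sample(p_true=0.05, p_benchmark=0.05, margin=0.02))
```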

  6. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background Poor overlap between independently developed gene signatures and poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform and, using the data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that sub-type specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement by the increased sample size was different between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  7. Particle size determination

    International Nuclear Information System (INIS)

    Burr, K.J.

    1979-01-01

    A specification is given for an apparatus to provide a completely automatic testing cycle to determine the proportion of particles of less than a predetermined size in one of a number of fluid suspensions. Monitoring of the particle concentration during part of the process can be carried out by an x-ray source and detector. (U.K.)

  8. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  9. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3} and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
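    The mechanism behind these sample-size savings is that a constrained (directional or ordered) alternative concentrates power. The toy Monte Carlo sketch below, which is not one of the article's simulations, illustrates the simplest case: a one-sided (sign-constrained) test achieves higher power than the unconstrained two-sided test at the same sample size.

```python
# Toy illustration (not the paper's code): a directional constraint raises
# power relative to the unconstrained two-sided test at equal sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, effect, n_sims = 30, 0.5, 5000

two_sided_hits = one_sided_hits = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect, 1.0, n_per_group)
    t, p_two = stats.ttest_ind(b, a)
    two_sided_hits += p_two < 0.05
    # One-sided test in the constrained direction (mean of b > mean of a).
    one_sided_hits += (p_two / 2 < 0.05) and (t > 0)

print(f"power, two-sided : {two_sided_hits / n_sims:.2f}")
print(f"power, one-sided : {one_sided_hits / n_sims:.2f}")
```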

  10. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  11. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES-hat) and a 95% CI (ES-hat_L, ES-hat_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES-hat_U), n_U(ES-hat_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES-hat. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES-hat, ES = ES-hat_L and ES = ES-hat_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES-hats. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
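    The post hoc sample sizes quoted above come from one-sample t-test power calculations driven by an estimated Cohen's effect size. The sketch below shows such a calculation using statsmodels; the software actually used by the study is not stated, and the effect-size values here are illustrative only.

```python
# Sketch of a one-sample t-test sample-size calculation from an estimated
# effect size and its CI bounds (illustrative values, not the study's data).
from statsmodels.stats.power import TTestPower

solver = TTestPower()
for es in (0.6, 0.9, 1.2):   # illustrative point estimate and CI bounds
    n = solver.solve_power(effect_size=es, alpha=0.05, power=0.80,
                           alternative="two-sided")
    print(f"ES = {es:.1f} -> n of about {n:.1f}")
```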

  12. A simple technique to determine the size distribution of nuclear crater fallback and ejecta

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, II, Brooks D [U.S. Army Engineer Nuclear Cratering Group, Lawrence Radiation Laboratory, Livermore, CA (United States)

    1970-05-15

    This report describes the results of an investigation to find an economic method for determining the block size distribution of nuclear crater fallback and ejecta. It is shown that the modal analysis method of determining relative proportions can be applied with the use of a special sampling technique, to provide a size distribution curve for clastic materials similar to one obtainable by sieving and weighing the same materials.

  13. A simple technique to determine the size distribution of nuclear crater fallback and ejecta

    International Nuclear Information System (INIS)

    Anderson, Brooks D. II

    1970-01-01

    This report describes the results of an investigation to find an economic method for determining the block size distribution of nuclear crater fallback and ejecta. It is shown that the modal analysis method of determining relative proportions can be applied with the use of a special sampling technique, to provide a size distribution curve for clastic materials similar to one obtainable by sieving and weighing the same materials

  14. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  15. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10 000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems with the exception of monoculture systems. For both climate zones, land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  16. Size Determination of Au Aerosol Nanoparticles by Off-Line TEM/STEM Observations

    Science.gov (United States)

    Karlsson, Lisa S.; Deppert, Knut; Malm, Jan-Olle

    2006-12-01

    Determination of particle size distributions of Au aerosol nanoparticles has been performed by a TEM/STEM investigation. The particles are generated by an evaporation/condensation method and are size-selected by differential mobility analyzers (DMA) based on their electrical mobility. Off-line TEM measurements resulted in equivalent projected area diameters assuming that the particles are spherical in shape. In this paper critical factors such as magnification calibration, sampling, image analysis, beam exposure, and particle shape are treated. The study shows that the measures of central tendency (mean, median and mode) are equal, as expected from a narrow size distribution. Moreover, the correlation between TEM/STEM and DMA is good, in practice 1:1. Also, STEM has the advantage over TEM due to enhanced contrast and is proposed as an alternative route for determination of particle size distributions of nanoparticles with lower contrast.

  17. Size Determination of Au Aerosol Nanoparticles by Off-Line TEM/STEM Observations

    International Nuclear Information System (INIS)

    Karlsson, Lisa S.; Deppert, Knut; Malm, Jan-Olle

    2006-01-01

    Determination of particle size distributions of Au aerosol nanoparticles has been performed by a TEM/STEM investigation. The particles are generated by an evaporation/condensation method and are size-selected by differential mobility analyzers (DMA) based on their electrical mobility. Off-line TEM measurements resulted in equivalent projected area diameters assuming that the particles are spherical in shape. In this paper critical factors such as magnification calibration, sampling, image analysis, beam exposure, and particle shape are treated. The study shows that the measures of central tendency (mean, median and mode) are equal, as expected from a narrow size distribution. Moreover, the correlation between TEM/STEM and DMA is good, in practice 1:1. Also, STEM has the advantage over TEM due to enhanced contrast and is proposed as an alternative route for determination of particle size distributions of nanoparticles with lower contrast.

  18. A ROBUST DETERMINATION OF THE SIZE OF QUASAR ACCRETION DISKS USING GRAVITATIONAL MICROLENSING

    International Nuclear Information System (INIS)

    Jiménez-Vicente, J.; Mediavilla, E.; Muñoz, J. A.; Kochanek, C. S.

    2012-01-01

    Using microlensing measurements for a sample of 27 image pairs of 19 lensed quasars we determine a maximum likelihood estimate for the accretion disk size of an average quasar of r_s = 4.0 (+2.4, −3.1) lt-day at rest frame ⟨λ⟩ = 1736 Å for microlenses with a mean mass of ⟨M⟩ = 0.3 M_☉. This value, in good agreement with previous results from smaller samples, is roughly a factor of five greater than the predictions of the standard thin disk model. The individual size estimates for the 19 quasars in our sample are also in excellent agreement with the results of the joint maximum likelihood analysis.

  19. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
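    A back-of-envelope calculation (normal approximation, two-arm trial with equal allocation) shows why an average total sample size of about 153 participants struggles to detect small effects; the sketch below is illustrative and does not reproduce the review's own methods.

```python
# Total sample size needed to detect standardised mean differences of 0.5
# and 0.3 at 80% power and alpha = 0.05 (normal approximation).
import math
from scipy.stats import norm

def total_n_two_arm(smd, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_arm = math.ceil(2 * (z / smd) ** 2)
    return 2 * n_per_arm

for smd in (0.5, 0.3):
    print(f"SMD {smd}: about {total_n_two_arm(smd)} participants in total")
# -> roughly 126 for SMD 0.5 and 350 for SMD 0.3
```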

  20. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible alternative. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  2. Influence of cervical preflaring on apical file size determination.

    Science.gov (United States)

    Pecora, J D; Capelli, A; Guerisoli, D M Z; Spanó, J C E; Estrela, C

    2005-07-01

    To investigate the influence of cervical preflaring with different instruments (Gates-Glidden drills, Quantec Flare series instruments and LA Axxess burs) on the first file that binds at working length (WL) in maxillary central incisors. Forty human maxillary central incisors with complete root formation were used. After standard access cavities, a size 06 K-file was inserted into each canal until the apical foramen was reached. The WL was set 1 mm short of the apical foramen. Group 1 received the initial apical instrument without previous preflaring of the cervical and middle thirds of the root canal. Group 2 had the cervical and middle portion of the root canals enlarged with Gates-Glidden drills sizes 90, 110 and 130. Group 3 had the cervical and middle thirds of the root canals enlarged with nickel-titanium Quantec Flare series instruments. Titanium-nitrite treated, stainless steel LA Axxess burs were used for preflaring the cervical and middle portions of root canals from group 4. Each canal was sized using manual K-files, starting with size 08 files with passive movements until the WL was reached. File sizes were increased until a binding sensation was felt at the WL, and the instrument size was recorded for each tooth. The apical region was then observed under a stereoscopic magnifier, images were recorded digitally and the differences between root canal and maximum file diameters were evaluated for each sample. Significant differences were found between experimental groups regarding anatomical diameter at the WL and the first file to bind in the canal. Flare instruments were ranked in an intermediary position, with no statistically significant differences between them (0.093 mm average). The instrument binding technique for determining anatomical diameter at WL is not precise. Preflaring of the cervical and middle thirds of the root canal improved anatomical diameter determination; the instrument used for preflaring played a major role in determining the anatomical diameter at the WL.

  3. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on the determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out either by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large size samples of clay pottery. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  4. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily either at their convenience, or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and 95% confidence level using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram could be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with required precision and 95% confidence level. Sample size at 90% and 99% confidence level, respectively, can also be obtained by just multiplying 0.70 and 1.75 with the number obtained for the 95% confidence level. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. This can also be applied for reverse calculations. This nomogram is not applicable for testing of the hypothesis set-up and is applicable only when both diagnostic test and gold standard results have a dichotomous category.
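    The precision-based formulas that underlie nomograms of this kind (often attributed to Buderer) can be written down directly, as in the sketch below. The article's own nomogram may differ in detail, and the example values are assumptions. Note that the 0.70 and 1.75 multipliers quoted above correspond to replacing the 95% z-value with the 90% or 99% value and squaring the ratio.

```python
# Sketch of the standard precision-based sample-size formulas for estimating
# sensitivity or specificity within an absolute precision d, given disease
# prevalence (illustrative values only).
import math
from scipy.stats import norm

def n_for_sensitivity(se, d, prevalence, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * se * (1 - se) / (d ** 2 * prevalence))

def n_for_specificity(sp, d, prevalence, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * sp * (1 - sp) / (d ** 2 * (1 - prevalence)))

print(n_for_sensitivity(se=0.85, d=0.05, prevalence=0.20))   # about 980 subjects
print(n_for_specificity(sp=0.90, d=0.05, prevalence=0.20))   # about 173 subjects
```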

  5. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique was developed for constructing adequate mathematical models for small-size passive samples under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be drawn.

  6. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. TESTING THE GRAIN-SIZE DISTRIBUTION DETERMINED BY LASER DIFFRACTOMETRY FOR SICILIAN SOILS

    Directory of Open Access Journals (Sweden)

    Costanza Di Stefano

    2012-06-01

    Full Text Available In this paper the soil grain-size distribution determined by the Laser Diffraction method (LDM) is tested using the Sieve-Hydrometer method (SHM) applied to 747 soil samples with different texture classifications, sampled in Sicily. The analysis showed that the sand content measured by SHM can be assumed equal to the one determined by LDM. An underestimation of the clay fraction measured by LDM was obtained with respect to the SHM, and a set of equations useful to refer laser diffraction measurements to SHM was calibrated using the measurements carried out for 635 soil samples. Finally, the proposed equations were tested using independent measurements carried out by LDM and SHM for 112 soil samples with different texture classifications.

  8. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  9. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  10. Synthesis of Uncarbonised Coconut Shell Nanoparticles: Characterisation and Particle Size Determination

    Directory of Open Access Journals (Sweden)

    S.A. Bello

    2015-06-01

    Full Text Available The possibility of using mechanical milling for the synthesis of uncarbonised coconut shell nanoparticles (UCSNPs) has been investigated. UCSNPs were synthesized from discarded coconut shells (CSs) using a top-down approach. The sun-dried CSs were crushed, ground and then sieved using a hammer crusher, a two-disc grinder and a set of sieves with a sieve shaker, respectively. The CS powders retained in the pan below the 37 µm sieve were milled for 70 hours to obtain UCSNPs. Samples for analysis were taken at 16 and 70 hours. UCSNPs were analyzed using a transmission electron microscope (TEM), a scanning electron microscope (SEM) with attached EDS, and Gwyddion software. Samples of UCSNPs obtained at 16 and 70 hours show that the deep brown colour of the initial CS powder faded as the milling time increased. The size determination from the TEM image revealed spherical particles with an average size of 18.23 nm for UCSNPs obtained after 70 hours of milling. The EDS spectrographs revealed an increase in the carbon counts with increased milling hours. This is attributable to drying of the CS powders by the heat generated during the milling process due to absorption of kinetic energy by the CS powders from the milling balls. SEM micrographs revealed UCSNPs in agglomerated networks. The SEM micrograph/Gwyddion particle size determination showed average particle sizes of 170.5 ± 3 and 104.9 ± 4.1 nm for UCSNPs obtained at 16 and 70 hours respectively. Therefore, production of UCSNPs through mechanical milling using a mixture of ceramic balls of different sizes has been established, especially when the particles of the sourced/initial CS powders fall below 37 µm.

  11. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is however proven that when observations follow a normal distribution and the interim result show promise, meaning that the conditional power exceeds 50%, type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.

  12. Direct determination of trace rare earth elements in ancient porcelain samples with slurry sampling electrothermal vaporization inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Xiang Guoqiang; Jiang Zucheng; He Man; Hu Bin

    2005-01-01

    A method for the direct determination of trace rare earth elements in ancient porcelain samples by slurry sampling fluorinating electrothermal vaporization inductively coupled plasma mass spectrometry was developed, using polytetrafluoroethylene as the fluorinating reagent. It was found that Si, the main matrix element in ancient porcelain samples, could be largely removed at an ashing temperature of 1200 °C without considerable losses of the analytes. However, the chemical composition of ancient porcelain is very complicated, so interferences from other matrix elements cannot be ignored. The matrix effect of the ancient porcelain samples was therefore also investigated, and it was found to be appreciable when the matrix concentration exceeded 0.8 g l⁻¹. A study of particle size effects indicated that when the sample particle size was less than 0.057 mm, the particle size effect is negligible. Under the optimized operating conditions, the detection limits for rare earth elements by fluorinating electrothermal vaporization inductively coupled plasma mass spectrometry were 0.7 ng g⁻¹ (Eu) to 33.3 ng g⁻¹ (Nd), with precisions of 4.1% (Yb) to 10% (La) (c = 1 μg l⁻¹, n = 9). The proposed method was used to determine trace rare earth elements directly in ancient porcelain samples produced in different dynasties (Sui, Ming and Qing), and the analytical results are satisfactory

  13. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection, was taken from theses.com and content analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  14. Multi-Criteria Model for Determining Order Size

    Directory of Open Access Journals (Sweden)

    Katarzyna Jakowska-Suwalska

    2013-01-01

    Full Text Available A multi-criteria model for determining the order size for materials used in production has been presented. It was assumed that the consumption rate of each material is a random variable with a known probability distribution. Using such a model, in which the purchase cost of materials ordered is limited, three criteria were considered: order size, probability of a lack of materials in the production process, and deviations in the order size from the consumption rate in past periods. Based on an example, it has been shown how to use the model to determine the order sizes for polyurethane adhesive and wood in a hard-coal mine. (original abstract)

  15. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
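
    As a hedged illustration of the kind of decision-theoretic trade-off described above (not the authors' model), the sketch below maximizes expected profit over the per-arm sample size, with the probability of success computed as Bayesian assurance, i.e. frequentist power averaged over a prior on the treatment effect. All monetary figures, the prior and the cost parameters are invented for the example.

```python
import numpy as np
from scipy.stats import norm

alpha = 0.025            # one-sided significance level (assumed)
market_value = 500e6     # payoff if the trial succeeds (assumed)
cost_per_patient = 20e3  # marginal cost per randomized patient (assumed)
fixed_cost = 10e6        # fixed trial cost (assumed)
prior_mean, prior_sd = 0.25, 0.10   # prior on the standardized effect size (assumed)

def assurance(n_per_arm, ndraws=100_000, seed=1):
    """Bayesian probability of success: power averaged over the prior on the effect."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, ndraws)
    z_crit = norm.ppf(1 - alpha)
    power = 1 - norm.cdf(z_crit - delta * np.sqrt(n_per_arm / 2))
    return power.mean()

def expected_profit(n_per_arm):
    return market_value * assurance(n_per_arm) - cost_per_patient * 2 * n_per_arm - fixed_cost

grid = np.arange(50, 2001, 50)
profits = [expected_profit(n) for n in grid]
print(f"profit-maximizing n per arm ~ {grid[int(np.argmax(profits))]}")
```

    A risk-averse ("satisficing") variant would replace the expected profit by the probability that profit exceeds a management target, evaluated by simulating the trial outcome rather than averaging the power.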

  16. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate and precise for these bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
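
    For readers unfamiliar with the minimum convex polygon estimator used above, the sketch below computes an MCP area from location fixes with a convex hull and shows how the estimate grows with the number of locations; the simulated bivariate-normal fixes are purely illustrative and are not the bear data.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Minimum convex polygon (MCP) home-range area as a function of the number of fixes.
rng = np.random.default_rng(0)
locations = rng.normal(scale=5.0, size=(400, 2))   # fake fixes, km east/north

def mcp_area(points):
    """Area (km^2) of the minimum convex polygon enclosing the points."""
    return ConvexHull(points).volume   # for 2-D input, .volume is the polygon area

for n in (15, 60, 120, 400):
    subset = locations[rng.choice(len(locations), size=n, replace=False)]
    print(f"{n:4d} locations -> MCP area {mcp_area(subset):7.1f} km^2")
```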

  17. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many operational and scientific challenges in conducting drug development globally. One of many important questions to answer in the design of a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  18. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  19. Sodium sampling and impurities determination

    International Nuclear Information System (INIS)

    Docekal, J.; Kovar, C.; Stuchlik, S.

    1980-01-01

    Samples may be obtained from tubes built into the sodium facility and processed further, or they are taken into crucibles, stored and processed later. Another sampling method involves vacuum distillation of sodium, thus concentrating the impurities. Oxygen is determined by amalgamation, distillation or vanadium balance methods. Hydrogen is determined by metal diaphragm extraction, direct extraction or amalgamation methods. Carbon is determined using dry techniques, involving burning a sodium sample at 1100 °C, or using wet techniques, by dissolving the sample with an acid. Trace amounts of metal impurities are determined after dissolving sodium in ethanol. The trace metals are concentrated and the excess sodium is removed. (M.S.)

  20. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    ARL-RP-0528, US Army Research Laboratory, August 2015: Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride (report cover and standard-form metadata only; no abstract is available in this record).

  1. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
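
    A hedged stand-in for the idea of sizing a study on a simple proportion of agreement: the snippet below uses a generic normal-approximation sample size for testing that the observed agreement p1 exceeds a null agreement p0, not the goodness-of-fit-based formula or the nomogram developed in the paper.

```python
import math
from scipy.stats import norm

def n_for_agreement(p0, p1, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a one-sample test of a proportion of agreement."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = (z_a * math.sqrt(p0 * (1 - p0)) + z_b * math.sqrt(p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

# e.g. pairs of ratings needed to show 85% observed agreement beats a null of 70%
print(n_for_agreement(p0=0.70, p1=0.85))
```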

  2. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty of attaining reliable estimates. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  3. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the

  4. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample size, we show that our MSCs can be used for CO2 samples of as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests, was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.

  5. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
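
    The following sketch mimics the parametric-bootstrap power calculation described above for a one-sided test of whether mean tissue Se exceeds a management threshold, with concentrations drawn from a gamma distribution. The assumed coefficient of variation and all numbers are illustrative and do not reproduce the fitted mean-variance relationship from the West Virginia data.

```python
import numpy as np
from scipy import stats

def power_above_threshold(true_mean, threshold, n_fish, cv=0.4, alpha=0.05,
                          nsim=5000, seed=2):
    """Probability that a one-sided t-test detects a true mean above the threshold."""
    rng = np.random.default_rng(seed)
    shape = 1.0 / cv**2                      # gamma shape implied by an assumed CV
    scale = true_mean / shape
    rejections = 0
    for _ in range(nsim):
        sample = rng.gamma(shape, scale, n_fish)
        _, p = stats.ttest_1samp(sample, popmean=threshold, alternative='greater')
        rejections += (p < alpha)
    return rejections / nsim

# Chance of detecting a true mean 1 mg/kg above a 4 mg Se/kg threshold with 8 fish
print(round(power_above_threshold(true_mean=5.0, threshold=4.0, n_fish=8), 2))
```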

  6. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous

  7. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. Exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
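
    As a simplified, hedged illustration of exact Poisson-based sample sizing for rare events (a single-cohort test against a rate taken as known from historical data, not the paper's two-group hybrid design), the grid search below returns the smallest cohort size at which an exact one-sided Poisson test reaches the target power.

```python
from scipy.stats import poisson

def exact_poisson_n(lambda0, lambda1, alpha=0.05, power=0.80, n_max=200_000, step=100):
    """Smallest n (on a grid) giving the target power for an exact one-sided Poisson test
    of rate lambda0 against the alternative lambda1 > lambda0."""
    for n in range(step, n_max + 1, step):
        mu0, mu1 = n * lambda0, n * lambda1
        c = int(poisson.ppf(1 - alpha, mu0)) + 1   # reject H0 if observed count >= c
        if poisson.sf(c - 1, mu1) >= power:        # power under the alternative rate
            return n
    return None

# e.g. background adverse-event rate of 1 per 1,000 person-years vs a doubled rate
print(exact_poisson_n(lambda0=0.001, lambda1=0.002))
```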

  8. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    ...sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  9. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  10. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  11. Determination of 210Pb and 210Po in water samples

    International Nuclear Information System (INIS)

    Ayranov, M.; Tosheva, Z.; Kies, A.

    2004-01-01

    Lead-210 and Polonium-210 are naturally occurring members of the Uranium-238 decay series. They can be found in various environmental samples, such as groundwater, fish and shellfish, and contribute an important component of the natural radiation background to humans. For this reason the development of a fast, reproducible and sensitive method for the determination of 210Pb and 210Po is of great concern. The aims of our study were to adopt procedures for radiochemical separation of these radionuclides and radioanalytical methods for their determination. The combination of electrochemical deposition, co-precipitation and extraction chromatography allows fast and effective radiochemical separation of the analytes. Polonium was spontaneously plated onto a copper disk from the stock solution. Lead was co-precipitated with Fe(OH)3 and further purified by extraction chromatography on Sr Spec columns. Alpha spectra of polonium were collected on Canberra PIPS detectors with a 900 mm² active surface. The activities of lead were determined by LSC (Gardian, Wallac Oy). The minimum detectable activities for a sample size of 1000 mL and chemical yields of 88% for polonium and 85% for lead are presented. The proposed method proved to be fast, accurate and reproducible for routine determination of lead and polonium in environmental water samples. (authors)

  12. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
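
    The reliability-level sample sizes quoted above can be reproduced with the standard success-run relation n = ln(1 − C)/ln(R) for zero allowed failures; taking the confidence C as 95% (our assumption for the example) recovers 299, 59 and 29 for reliability levels of 99%, 95% and 90%.

```python
import math

def success_run_n(reliability, confidence=0.95):
    """Zero-failure success-run sample size demonstrating `reliability` at `confidence`."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for r in (0.99, 0.95, 0.90):
    print(f"reliability {r:.0%}: n = {success_run_n(r)}")   # 299, 59, 29
```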

  13. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Science.gov (United States)

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  14. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  15. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  16. Rock sampling [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  17. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  18. Traceable size determination of nanoparticles, a comparison among European metrology institutes

    International Nuclear Information System (INIS)

    Meli, Felix; Klein, Tobias; Buhr, Egbert; Frase, Carl Georg; Gleber, Gudrun; Krumrey, Michael; Duta, Alexandru; Duta, Steluta; Korpelainen, Virpi; Bellotti, Roberto; Picotto, Gian Bartolo; Boyd, Robert D; Cuenat, Alexandre

    2012-01-01

    Within the European iMERA-Plus project ‘Traceable Characterisation of Nanoparticles’ various particle measurement procedures were developed and finally a measurement comparison for particle size was carried out among seven laboratories across six national metrology institutes. Seven high quality particle samples made from three different materials and having nominal sizes in the range from 10 to 200 nm were used. The participants applied five fundamentally different measurement methods, atomic force microscopy, dynamic light scattering (DLS), small-angle x-ray scattering, scanning electron microscopy and scanning electron microscopy in transmission mode, and provided a total of 48 independent, traceable results. The comparison reference values were determined as weighted means based on the estimated measurement uncertainties of the participants. The comparison reference values have combined standard uncertainties smaller than 1.4 nm for particles with sizes up to 100 nm. All methods, except DLS, provided consistent results. (paper)
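
    The comparison reference values described above are weighted means based on the participants' estimated uncertainties; the sketch below shows the usual inverse-variance weighting with made-up laboratory results (the actual comparison data are not reproduced here).

```python
import numpy as np

sizes = np.array([101.2, 99.8, 100.5, 98.9, 100.1])   # nm, reported particle sizes (invented)
u = np.array([0.8, 1.2, 0.6, 1.5, 1.0])               # nm, standard uncertainties (invented)

weights = 1.0 / u**2                                   # inverse-variance weights
ref_value = np.sum(weights * sizes) / np.sum(weights)  # weighted-mean reference value
u_ref = np.sqrt(1.0 / np.sum(weights))                 # its standard uncertainty

print(f"reference value = {ref_value:.2f} nm +/- {u_ref:.2f} nm")
```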

  19. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to that of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to depend strongly on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30-year calibration period and a 10-year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30-year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30-year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, considering only subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
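
    For readers unfamiliar with the quantile matching approach under test, the sketch below builds an empirical transfer function from a calibration period and applies it to new model output. The gamma-distributed synthetic series are stand-ins for RCM precipitation and E-OBS observations, and none of the four method variants (PTF, QUANT/eQM, gQM, GQM) is specifically reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
model_cal = rng.gamma(shape=1.5, scale=3.0, size=30 * 365)   # biased model, calibration
obs_cal   = rng.gamma(shape=2.0, scale=2.0, size=30 * 365)   # observations, calibration
model_new = rng.gamma(shape=1.5, scale=3.0, size=10 * 365)   # model output, validation

def quantile_map(x_new, model_ref, obs_ref):
    """Replace each new model value by the observed quantile with the same rank."""
    probs = np.linspace(0.001, 0.999, 999)
    model_q = np.quantile(model_ref, probs)
    obs_q = np.quantile(obs_ref, probs)
    return np.interp(x_new, model_q, obs_q)

corrected = quantile_map(model_new, model_cal, obs_cal)
print(f"raw model mean {model_new.mean():.2f}, corrected mean {corrected.mean():.2f}, "
      f"observed mean {obs_cal.mean():.2f}")
```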

  20. Does copepod size determine food consumption of particulate feeding fish?

    DEFF Research Database (Denmark)

    Deurs, Mikael van; Koski, Marja; Rindorf, Anna

    2014-01-01

    The climate-induced reduction in the mean copepod size, mainly driven by a decrease in the abundance of the large Calanus finmarchicus around 1987, has been linked to the low survival of fish larvae in the North Sea. However, to what extent this sort of reduction in copepod size has any influence on adult particulate feeding fish is unknown. In the present study, we investigated the hypothesis that the availability of the large copepods determines food consumption and growth conditions of lesser sandeel (Ammodytes marinus) in the North Sea. Analysis of stomach content suggested that food consumption is higher for fish feeding on large copepods, and additional calculations revealed how handling time limitation may provide part of the explanation for this relationship. Comparing stomach data and zooplankton samples indicated that lesser sandeel actively target large copepods when...

  1. Determinants of salivary evening alpha-amylase in a large sample free of psychopathology

    NARCIS (Netherlands)

    Veen, Gerthe; Giltay, Erik J.; Vreeburg, Sophie A.; Licht, Carmilla M. M.; Cobbaert, Christa M.; Zitman, Frans G.; Penninx, Brenda W. J. H.

    Objective: Recently, salivary alpha-amylase (sAA) has been proposed as a suitable index for sympathetic activity and dysregulation of the autonomic nervous system (ANS). Although determinants of sAA have been described, they have not been studied within the same study with a large sample size

  2. Influence of secular trends and sample size on reference equations for lung function tests.

    Science.gov (United States)

    Quanjer, P H; Stocks, J; Cole, T J; Hall, G L; Stanojevic, S

    2011-03-01

    The aim of our study was to determine the contribution of secular trends and sample size to lung function reference equations, and to establish the number of local subjects required to validate published reference values. 30 spirometry datasets collected between 1978 and 2009 provided data on healthy, white subjects: 19,291 males and 23,741 females aged 2.5-95 yrs. The best fits for forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and FEV1/FVC as functions of age, height and sex were derived from the entire dataset using GAMLSS. Mean z-scores were calculated for individual datasets to determine inter-centre differences. This was repeated by subdividing one large dataset (3,683 males and 4,759 females) into 36 smaller subsets (comprising 18-227 individuals) to preclude differences due to population/technique. No secular trends were observed and differences between datasets comprising >1,000 subjects were small (maximum difference in FEV1 and FVC from the overall mean: 0.30 to -0.22 z-scores). Subdividing one large dataset into smaller subsets reproduced the above sample size-related differences and revealed that at least 150 males and 150 females would be necessary to validate reference values and avoid spurious differences due to sampling error. Use of local controls to validate reference equations will rarely be practical due to the numbers required. Reference equations derived from large or collated datasets are recommended.

  3. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  4. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    Science.gov (United States)

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r² = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r² ∼ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r² ∼ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ∼96% accuracy, which is more than
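
    As an illustration of the ingredient underlying the autocorrelation technique (how quickly the spatial autocorrelation of a bed-sediment image decays, which Rubin (2004) calibrates against images of known grain size), the sketch below computes a radial autocorrelation via the FFT for a toy image; it is not the calibrated algorithm itself, and the toy image is invented.

```python
import numpy as np
from scipy.signal import fftconvolve

def radial_autocorrelation(image, max_lag=20):
    """Mean of the horizontal and vertical autocorrelation at each pixel lag."""
    img = image - image.mean()
    f = np.fft.fft2(img)
    acf2d = np.fft.ifft2(f * np.conj(f)).real / img.size / img.var()
    return np.array([(acf2d[lag, 0] + acf2d[0, lag]) / 2 for lag in range(max_lag + 1)])

# toy "sediment image": noise blurred with a 5x5 kernel, so its correlation length
# mimics a grain size of ~5 pixels
rng = np.random.default_rng(4)
image = fftconvolve(rng.normal(size=(256, 256)), np.ones((5, 5)) / 25.0, mode="same")

acf = radial_autocorrelation(image)
print(np.round(acf[:6], 2))   # correlation decays to ~0 over roughly 5 pixels
```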

  5. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors, and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
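
    A generic sketch of how a non-centrality parameter translates into power and minimum sample size for the Pearson chi-square test. The misclassification-adjusted genotype frequencies that the authors fold into the non-centrality parameter are not reproduced here; the non-centrality is simply written as n·w² for an assumed effect size w, and all numbers are illustrative.

```python
from scipy.stats import chi2, ncx2

def chisq_power(n, w, df=1, alpha=0.05):
    """Power of a chi-square test with non-centrality n * w**2."""
    crit = chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, n * w**2)

def min_n(w, df=1, alpha=0.05, target=0.80):
    """Smallest total sample size reaching the target power."""
    n = 10
    while chisq_power(n, w, df, alpha) < target:
        n += 1
    return n

print(min_n(w=0.1))   # ~785 for a small effect (w = 0.1), in line with standard power tables
```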

  7. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)

  8. Molecular sizes of lichen ice nucleation sites determined by gamma radiation inactivation analysis

    International Nuclear Information System (INIS)

    Kieft, T.L.; Ruscetti, T.

    1992-01-01

    It has previously been shown that some species of lichen fungi contain proteinaceous ice nuclei which are active at temperatures as warm as −2 °C. This experiment was undertaken to determine the molecular sizes of ice nuclei in the lichen fungus Rhizoplaca chrysoleuca and to compare them to bacterial ice nuclei from Pseudomonas syringae. Gamma radiation inactivation analysis was used to determine molecular weights. Radiation inactivation analysis is based on target theory, which states that the likelihood of a molecule being inactivated by gamma rays increases as its size increases. Three different sources of ice nuclei from the lichen R. chrysoleuca were tested: field-collected lichens, extract of lichen fungus, and a pure culture of the fungus R. chrysoleuca. P. syringae strain Cit7 was used as a source of bacterial ice nuclei. Samples were lyophilized, irradiated with gamma doses ranging from 0 to 10.4 Mrads, and then tested for ice nucleation activity using a droplet-freezing assay. Data for all four types of samples were in rough agreement; sizes of nucleation sites increased logarithmically with increasing temperatures of ice nucleation activity. Molecular weights of nucleation sites active between −3 and −4 °C from the bacteria and from the field-collected lichens were approximately 1.0 × 10⁶ Da. Nuclei from the lichen fungus and in the lichen extract appeared to be slightly smaller but followed the same log-normal pattern with temperature of ice nucleation activity. The data for both the bacterial and lichen ice nuclei are in agreement with ice nucleation theory which states that the size of ice nucleation sites increases logarithmically as the temperature of nucleation increases linearly. This suggests that although some differences exist between bacterial and lichen ice nucleation sites, their molecular sizes are quite similar

  9. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  10. Accuracy of computed tomography in determining pancreatic cancer tumor size

    International Nuclear Information System (INIS)

    Aoki, Kazunori; Okada, Shuichi; Moriyama, Noriyuki

    1994-01-01

    We compared tumor sizes determined by computed tomography (CT) with those of the resected specimens in 26 patients with pancreatic cancer in order to clarify whether or not the size of a pancreatic tumor can be accurately determined by CT. From the precontrast, postcontrast and arterial dominant phases of dynamic CT, the arterial dominant phase was found to yield the highest correlation between CT measured tumor size and that of the resected specimens (p<0.01). The correlation coefficient was, however, not high (r=0.67). CT alone may therefore be insufficient to determine tumor size in pancreatic cancer accurately. (author)

  11. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and the public pay more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power to correctly detect a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there is no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and the safety of Chinese medicine are discussed respectively. We hope the paper will be of benefit to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.

  12. Role of NAA in determination and characterisation of sampling behaviours of multiple elements in CRMs

    International Nuclear Information System (INIS)

    Tian Weizhi; Ni Bangfa; Wang Pingsheng; Nie Huiling

    2002-01-01

    Taking advantage of the high precision and accuracy of neutron activation analysis (NAA), sampling constants have been determined for multiple elements in several international and Chinese reference materials. The suggested technique may be used for finding elements in existing CRMs qualified for quality control (QC) of small-size samples (several mg or less), and for characterizing the sampling behaviour of multiple elements in new CRMs made specifically for QC of microanalysis. (author)

  13. Determination of particle size distribution of salt crystals in aqueous slurries

    International Nuclear Information System (INIS)

    Miller, A.G.

    1977-10-01

    A method for determining particle size distribution of water-soluble crystals in aqueous slurries is described. The salt slurries, containing sodium salts of predominantly nitrate, but also nitrite, sulfate, phosphate, aluminates, carbonate, and hydroxide, occur in radioactive, concentrated chemical waste from the reprocessing of nuclear fuel elements. The method involves separating the crystals from the aqueous phase, drying them, and then dispersing the crystals in a nonaqueous medium based on nitroethane. Ultrasonic treatment is important in dispersing the sample into its fundamental crystals. The dispersed crystals are sieved into appropriate size ranges for counting with a HIAC brand particle counter. A preponderance of very fine particles in a slurry was found to increase the difficulty of effecting complete dispersion of the crystals because of the tendency to retain traces of aqueous mother liquor. Traces of moisture produce agglomerates of crystals, the extent of agglomeration being dependent on the amount of moisture present. The procedure is applicable to particles within the 2 to 600 μm size range of the HIAC particle counter. The procedure provides an effective means for measuring particle size distribution of crystals in aqueous salt slurries even when most crystals are less than 10 μm in size. 19 figures

  14. Determination of Flaw Size from Thermographic Data

    Science.gov (United States)

    Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.

    2014-01-01

    Conventional methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the flaw. Since the heat diffuses in the plane parallel to the surface, the resulting temperature profile over the flaw is larger than the flaw. A variational method is presented for reducing the thermographic data to produce an estimated size for the flaw that is much closer to the true size of the flaw. The size is determined from the spatial thermal response of the exterior surface above the flaw and a constraint on the length of the contour surrounding the flaw. The technique is applied to experimental data acquired on a flat bottom hole composite specimen.

  15. Measurements of Plutonium and Americium in Soil Samples from Project 57 using the Suspended Soil Particle Sizing System (SSPSS)

    International Nuclear Information System (INIS)

    John L. Bowen; Rowena Gonzalez; David S. Shafer

    2001-01-01

    As part of the preliminary site characterization conducted for Project 57, soil samples were collected for separation into several size fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it represents the anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculations and corrective action-level determinations for future land-use scenarios at the site.

  16. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
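    One way to see the size of this adjustment in practice is through the common design-effect approximation for unequal cluster sizes. The sketch below is a generic textbook-style calculation (not the PQL-specific correction derived in the paper), and the event rates, ICC and cluster-size figures are illustrative assumptions.

    ```python
    from math import ceil
    from scipy.stats import norm

    def clusters_per_arm(p1, p2, icc, mean_size, cv_size, alpha=0.05, power=0.80):
        """Approximate clusters per arm for a cluster RCT with a binary outcome.
        Unequal cluster sizes are handled with the common design-effect
        approximation DEFF = 1 + ((1 + CV^2) * m_bar - 1) * ICC."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_ind = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)**2  # individually randomized
        deff = 1 + ((1 + cv_size**2) * mean_size - 1) * icc            # clustering + size variation
        return ceil(n_ind * deff / mean_size)

    # Illustrative assumptions: control rate 30%, intervention rate 20%, ICC 0.05
    equal = clusters_per_arm(0.30, 0.20, icc=0.05, mean_size=20, cv_size=0.0)
    varying = clusters_per_arm(0.30, 0.20, icc=0.05, mean_size=20, cv_size=0.5)
    print(equal, varying)  # varying cluster sizes require a few more clusters (about 10% here)
    ```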

  17. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal level), while others were overpowered (i.e., the real power exceeded 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
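    The mechanism this simulation explores can be reproduced in a few lines for the continuous-outcome case: plan the sample size with an assumed standard deviation, then recompute the power actually achieved when the true standard deviation differs. This is a normal-approximation sketch with illustrative numbers, not the authors' code.

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Normal-approximation sample size per group for a two-sample comparison."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    def real_power(delta, true_sd, n, alpha=0.05):
        """Power actually achieved with n per group when the true SD differs."""
        z_alpha = norm.ppf(1 - alpha / 2)
        return norm.cdf(delta / (true_sd * sqrt(2.0 / n)) - z_alpha)

    n = n_per_group(delta=5.0, sd=10.0)            # planned with an assumed SD of 10
    print(n, real_power(5.0, true_sd=10.0, n=n))   # ~0.80 when the assumption holds
    print(real_power(5.0, true_sd=12.0, n=n))      # underpowered if the SD was 20% larger
    ```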

  18. Platelet size and age determine platelet function independently

    International Nuclear Information System (INIS)

    Thompson, C.B.; Jakubowski, J.A.; Quinn, P.G.; Deykin, D.; Valeri, C.R.

    1984-01-01

    A study was undertaken to examine the interaction of platelet size and age in determining in vitro platelet function. Baboon megakaryocytes were labeled in vivo by the injection of 75Se-methionine. Blood was collected when the label was predominantly associated with younger platelets (day 2) and with older platelets (day 9). Size-dependent platelet subpopulations were prepared on both days by counterflow centrifugation. The reactivity of each platelet subpopulation was determined on both days by measuring thrombin-induced aggregation. Platelets were fixed after partial aggregation had occurred by the addition of EDTA/formalin. After removal of the aggregated platelets by differential centrifugation, the supernatant medium was assayed for remaining platelets and 75Se radioactivity. Comparing day 2 and day 9, no significant difference was seen in the rate of aggregation of a given subpopulation. However, aggregation was more rapid in the larger platelet fractions than in the smaller ones on both days. A greater percentage of the 75Se radioactivity appeared in the platelet aggregates on day 2 than on day 9. This effect was independent of platelet size, as it occurred to a similar extent in the unfractionated platelets and in each of the size-dependent platelet subpopulations. The data indicate that young platelets are more active than older platelets. This study demonstrates that size and age are both determinants of platelet function, but by independent mechanisms

  19. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
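    A rough sketch of the logistic-regression conversion, as we read the abstract: form two equally sized groups whose log-odds are separated by the slope times twice the standard deviation of the covariate, keep the overall event probability roughly fixed, and apply the usual two-proportion formula. The numbers and function names are illustrative assumptions; consult the paper for the exact derivation.

    ```python
    from math import ceil, exp, log
    from scipy.stats import norm

    def expit(x):
        return 1.0 / (1.0 + exp(-x))

    def n_total_logistic(beta, sd_x, p_bar, alpha=0.05, power=0.80):
        """Approximate total sample size for testing the slope beta of a continuous
        covariate (SD = sd_x) in logistic regression, via an equivalent two-sample
        problem whose group log-odds are separated by 2 * beta * sd_x."""
        eta = log(p_bar / (1 - p_bar))        # overall log-odds
        p1 = expit(eta - beta * sd_x)         # "low covariate" group
        p2 = expit(eta + beta * sd_x)         # "high covariate" group
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_group = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)**2
        return ceil(2 * n_group)

    # Illustrative assumptions: odds ratio 1.5 per SD of x, 30% overall event rate
    print(n_total_logistic(beta=log(1.5), sd_x=1.0, p_bar=0.30))
    ```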

  20. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
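    The resampling logic is easy to emulate on synthetic data: draw bootstrap samples of several sizes from one "full" sample, refit the same simple lesion-load model each time, and record the effect size and p value. The sketch below uses made-up data and a plain Pearson correlation as the effect of interest, not the study's imaging pipeline.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_pop = 360
    lesion_load = rng.uniform(0, 1, n_pop)
    deficit = 0.3 * lesion_load + rng.normal(0, 1, n_pop)     # small true effect

    for n in (30, 60, 90, 180, 360):
        r2, pvals = [], []
        for _ in range(1000):                                 # bootstrap resamples of size n
            idx = rng.integers(0, n_pop, n)
            r, p = stats.pearsonr(lesion_load[idx], deficit[idx])
            r2.append(r ** 2)
            pvals.append(p)
        print(n, np.round(np.quantile(r2, [0.025, 0.5, 0.975]), 3),
              round(np.mean(np.array(pvals) < 0.05), 2))      # spread of effect sizes and "power"
    ```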

  1. EXPERIMENTS TOWARDS DETERMINING BEST TRAINING SAMPLE SIZE FOR AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS THROUGH SEQUENTIAL MINIMAL OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Sunil Kumar C

    2014-01-01

    Full Text Available With the number of students growing each year, there is a strong need for automated systems capable of evaluating descriptive answers. Unfortunately, there are not many systems capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish automatic evaluation and scoring of descriptive answers. Our experiments are designed around our primary goal of identifying the optimum training sample size that yields optimal automatic scoring. Besides the technical overview and the experimental design, the paper also covers the challenges and benefits of the system, and discusses interdisciplinary areas for future research on this topic.
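    The core experiment, scoring accuracy as a function of training sample size, can be approximated with a generic learning-curve sketch. LightSIDE itself is not scripted here; the synthetic features and the SVM pipeline below are illustrative stand-ins only.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import learning_curve
    from sklearn.svm import SVC

    # Synthetic stand-in for vectorized answer texts with four score levels
    X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                               n_classes=4, n_clusters_per_class=1, random_state=0)

    sizes, _, valid_scores = learning_curve(
        SVC(kernel="linear"), X, y, cv=5,
        train_sizes=np.linspace(0.1, 1.0, 8), scoring="accuracy")

    for n, score in zip(sizes, valid_scores.mean(axis=1)):
        print(f"{n:5d} training answers -> cross-validated accuracy {score:.3f}")
    ```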

  2. Arecibo Radar Observation of Near-Earth Asteroids: Expanded Sample Size, Determination of Radar Albedos, and Measurements of Polarization Ratios

    Science.gov (United States)

    Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.

    2017-10-01

    The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, elongated, as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedos for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation. Otherwise we derive a geometric cross-section from a measured diameter. When radar imaging was available, the diameter was measured from the apparent range depth. However, when radar imaging was not available, we used the continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections. We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we

  3. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity is not dependent on the sample size, the rate of adverse events is. Further studies are needed to determine whether the optimal sample size needs to be adjusted based on hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered as indicators of the importance of an article and post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  5. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF), selected according to its porosity, both as the sampling support and as the size separator of the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  6. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small and moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
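    The multinomial-weighting idea translates directly into matrix operations. The paper's implementations are in R; the following is an illustrative NumPy port for Pearson's correlation, where each bootstrap replicate is one row of multinomial weights applied to the observed pairs.

    ```python
    import numpy as np

    def bootstrap_correlation(x, y, n_boot=10_000, seed=0):
        """Vectorized non-parametric bootstrap of Pearson's r via multinomial weights."""
        rng = np.random.default_rng(seed)
        n = len(x)
        w = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n   # (n_boot, n) weights
        mx, my = w @ x, w @ y                                          # weighted means
        cov = w @ (x * y) - mx * my                                    # weighted E[xy] - E[x]E[y]
        vx = w @ (x * x) - mx**2
        vy = w @ (y * y) - my**2
        return cov / np.sqrt(vx * vy)

    rng = np.random.default_rng(1)
    x = rng.normal(size=50)
    y = 0.5 * x + rng.normal(size=50)
    reps = bootstrap_correlation(x, y)
    print(np.percentile(reps, [2.5, 97.5]))   # percentile bootstrap interval for r
    ```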

  7. Lipoplex size determines lipofection efficiency with or without serum.

    Science.gov (United States)

    Almofti, Mohamad Radwan; Harashima, Hideyoshi; Shinohara, Yasuo; Almofti, Ammar; Li, Wenhao; Kiwada, Hiroshi

    2003-01-01

    In order to identify factors affecting cationic liposome-mediated gene transfer, the relationships were examined among cationic liposome/DNA complex (lipoplex)-cell interactions, lipoplex size and lipoplex-mediated transfection (lipofection) efficiency. It was found that lipofection efficiency was determined mainly by lipoplex size, but not by the extent of lipoplex-cell interactions including binding, uptake or fusion. In addition, it was found that serum affected mainly lipoplex size, but not lipoplex-cell interactions, and this effect was the major reason for the inhibitory effect of serum on lipofection efficiency. It was concluded that, in the presence or absence of serum, lipoplex size is a major factor determining lipofection efficiency. Moreover, in the presence or absence of serum, lipoplex size was found to affect lipofection efficiency by controlling the size of the intracellular vesicles containing lipoplexes after internalization, but not by affecting lipoplex-cell interactions. In addition, large lipoplex particles showed, in general, higher lipofection efficiency than small particles. These results imply that, by controlling lipoplex size, an efficient lipid delivery system may be achieved for in vitro and in vivo gene therapy.

  8. Comparison of photon correlation spectroscopy with photosedimentation analysis for the determination of aqueous colloid size distributions

    Science.gov (United States)

    Rees, Terry F.

    1990-01-01

    Colloidal materials, dispersed phases with dimensions between 0.001 and 1 μm, are potential transport media for a variety of contaminants in surface and ground water. Characterization of these colloids, and identification of the parameters that control their movement, are necessary before transport simulations can be attempted. Two techniques that can be used to determine the particle-size distribution of colloidal materials suspended in natural waters are compared. Photon correlation spectroscopy (PCS) utilizes the Doppler frequency shift of photons scattered off particles undergoing Brownian motion to determine the size of colloids suspended in water. Photosedimentation analysis (PSA) measures the time-dependent change in optical density of a suspension of colloidal particles undergoing centrifugation. A description of both techniques, important underlying assumptions, and limitations are given. Results for a series of river water samples show that the colloid-size distribution means are statistically identical as determined by both techniques. This also is true of the mass median diameter (MMD), even though MMD values determined by PSA are consistently smaller than those determined by PCS. Because of this small negative bias, the skew parameters for the distributions are generally smaller for the PCS-determined distributions than for the PSA-determined distributions. Smaller polydispersity indices for the distributions are also determined by PCS.

  9. Determinants of capital structure in small and medium sized enterprises in Malaysia

    OpenAIRE

    Mat Nawi, Hafizah

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. This study aims to investigate the determinants of capital structure in small and medium-sized enterprises (SMEs) in Malaysia and their effect on firms’ performance. The study addresses the following primary question: What are the factors that influence the capital structure of SMEs in Malaysia? The sample of this research is SMEs in the east coast region of Malaysia. Adopting a posi...

  10. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
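    The central point, that computed power inherits the randomness of the variance estimate, can be illustrated by propagating a chi-square confidence interval for the variance through a normal-approximation power formula. This is a generic sketch, not the paper's exact general-linear-model machinery; all numbers are assumptions.

    ```python
    from math import sqrt
    from scipy.stats import chi2, norm

    def power_two_sample(delta, sigma, n, alpha=0.05):
        """Approximate power of a two-sample test with n observations per group."""
        return norm.cdf(delta / (sigma * sqrt(2.0 / n)) - norm.ppf(1 - alpha / 2))

    def power_bounds(delta, s2, df, n, alpha=0.05, level=0.95):
        """Confidence bounds for power induced by the uncertainty in a prior-study
        variance estimate s2 with df degrees of freedom."""
        lo_var = df * s2 / chi2.ppf(1 - (1 - level) / 2, df)   # lower confidence limit for sigma^2
        hi_var = df * s2 / chi2.ppf((1 - level) / 2, df)       # upper confidence limit for sigma^2
        return (power_two_sample(delta, sqrt(hi_var), n, alpha),
                power_two_sample(delta, sqrt(lo_var), n, alpha))

    # Illustrative numbers: effect 5, pilot variance 100 on 20 df, 64 per group
    print(power_two_sample(5, 10, 64))     # point estimate of power
    print(power_bounds(5, 100, 20, 64))    # (lower, upper) bounds on power
    ```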

  11. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
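    The three designs differ mainly in how the margin enters the denominator of the familiar two-proportion formula. Below is a compact sketch of the textbook normal-approximation versions for binary outcomes (the article works its examples in SAS); the rates and margin are illustrative assumptions.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_group(p_t, p_c, margin, design, alpha=0.05, power=0.80):
        """Normal-approximation sample size per group for a binary outcome.
        design: 'noninferiority', 'superiority' (by a margin) or 'equivalence'."""
        var = p_t * (1 - p_t) + p_c * (1 - p_c)
        diff = p_t - p_c
        if design == "noninferiority":
            z = norm.ppf(1 - alpha) + norm.ppf(power)          # one-sided alpha
            denom = (diff + margin) ** 2
        elif design == "superiority":
            z = norm.ppf(1 - alpha) + norm.ppf(power)
            denom = (diff - margin) ** 2
        elif design == "equivalence":
            z = norm.ppf(1 - alpha) + norm.ppf(1 - (1 - power) / 2)
            denom = (margin - abs(diff)) ** 2
        else:
            raise ValueError(design)
        return ceil(z**2 * var / denom)

    # Illustrative assumptions: both response rates 65%, margin of 10 percentage points
    print(n_per_group(0.65, 0.65, 0.10, "noninferiority"))
    print(n_per_group(0.65, 0.65, 0.10, "equivalence"))
    ```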

  12. Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions

    International Nuclear Information System (INIS)

    John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.

    2000-01-01

    Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will most likely be exceeded at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection x-ray fluorescence analysis (TXRF) was applied besides other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built where the particle size classes 10-2.5 μm and 2.5-1.0 μm were directly impacted on TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles, but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so-called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM 10/PM 2.5 cascade impactor was

  13. Association studies and legume synteny reveal haplotypes determining seed size in Vigna unguiculata

    Directory of Open Access Journals (Sweden)

    Mitchell R Lucas

    2013-04-01

    Full Text Available Highly specific seed market classes for cowpea and other grain legumes exist because grain is most commonly cooked and consumed whole. Size, shape, color, and texture are critical features of these market classes and breeders target development of cultivars for market acceptance. Resistance to biotic and abiotic stresses that is absent from elite breeding material is often introgressed through crosses to landraces or wild relatives. When crosses are made between parents with different grain quality characteristics, recovery of progeny with acceptable or enhanced grain quality is problematic. Thus genetic markers for grain quality traits can help in pyramiding genes needed for specific market classes. Allelic variation dictating the inheritance of seed size can be tagged and used to assist the selection of large-seeded lines. In this work we applied SNP genotyping and knowledge of legume synteny to characterize regions of the cowpea genome associated with seed size. These marker-trait associations will enable breeders to use marker-based selection approaches to increase the frequency of progeny with large seed. For ~800 samples derived from eight bi-parental populations, QTL analysis was used to identify markers linked to ten trait determinants. In addition, the population structure of 171 samples from the USDA core collection was identified and incorporated into a genome-wide association study which supported more than half of the trait-associated regions important in the bi-parental populations. Seven of the total ten QTL were supported based on synteny to seed size-associated regions identified in the related legume soybean. In addition to delivering markers linked to major trait determinants in the context of modern breeding, we provide an analysis of the diversity of the USDA core collection of cowpea to identify genepools, migrants, admixture, and duplicates.

  14. How much motion is too much motion? Determining motion thresholds by sample size for reproducibility in developmental resting-state MRI

    Directory of Open Access Journals (Sweden)

    Julia Leonard

    2017-03-01

    Full Text Available A constant problem developmental neuroimagers face is in-scanner head motion. Children move more than adults and this has led to concerns that developmental changes in resting-state connectivity measures may be artefactual. Furthermore, children are challenging to recruit into studies and therefore researchers have tended to take a permissive stance when setting exclusion criteria on head motion. The literature is not clear regarding our central question: How much motion is too much? Here, we systematically examine the effects of multiple motion exclusion criteria at different sample sizes and age ranges in a large openly available developmental cohort (ABIDE; http://preprocessed-connectomes-project.org/abide). We checked (1) the reliability of resting-state functional magnetic resonance imaging (rs-fMRI) pairwise connectivity measures across the brain and (2) the accuracy with which we can separate participants with autism spectrum disorder from typically developing controls based on their rs-fMRI scans using machine learning. We find that reliability on average is primarily sensitive to the number of participants considered, but that increasingly permissive motion thresholds lower case-control prediction accuracy for all sample sizes.

  15. EDXRF applied to the chemical element determination of small invertebrate samples

    International Nuclear Information System (INIS)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de

    2015-01-01

    Energy dispersive X-ray fluorescence (EDXRF) is a fast analytical technique of easy operation, although it demands reliable analytical curves due to the intrinsic matrix dependence and interference during analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented and a group of chemical elements can be determined in diverse biological matrices, depending on the chemical element concentration. Particularly for invertebrates, EDXRF presents some advantages associated with the possibility of analysing small samples, in which a collimator can be used to direct the incident X-rays onto a small surface of the analyzed samples. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, the SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analyzed. After sampling by using pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered by XRF polyethylene. Analyses were performed at an atmosphere lower than 30 Pa, varying voltage and electric current according to the chemical element to be analyzed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry after acid treatment (mixture of nitric acid and hydrogen peroxide) of the samples. Compared to the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the En numbers were in the range of -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (lower than 100 mg), with the advantage of preserving the samples. (author)

  16. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  17. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
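    For a scalar parameter trace, the autocorrelation-based ESS that the "greater than 200" rule of thumb refers to can be computed as below; extending such estimates to tree topologies is precisely the contribution of the paper and is not attempted in this sketch.

    ```python
    import numpy as np

    def effective_sample_size(trace):
        """ESS = N / (1 + 2 * sum of autocorrelations), truncated at the first
        non-positive autocorrelation (a common simple heuristic)."""
        x = np.asarray(trace, dtype=float)
        n = len(x)
        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
        tau = 1.0
        for rho in acf[1:]:
            if rho <= 0:
                break
            tau += 2.0 * rho
        return n / tau

    # Illustrative autocorrelated chain (AR(1)): the ESS is far below the chain length
    rng = np.random.default_rng(0)
    chain = np.zeros(5000)
    for t in range(1, len(chain)):
        chain[t] = 0.9 * chain[t - 1] + rng.normal()
    print(len(chain), round(effective_sample_size(chain)))
    ```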

  18. Ultrasonic determination of the size of defects

    International Nuclear Information System (INIS)

    Zetterwall, T.

    1989-01-01

    The paper presents results from a study of ultrasonic testing of materials. The main topic has been the determination of the size, length and depth of cracks or defects in stainless steel plates. (K.A.E)

  19. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
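    The comparison described in the abstract can be mimicked with off-the-shelf tools: an ε-insensitive SVR against an MLP regressor trained on a deliberately small sample. The synthetic "log-derived" data and all hyperparameters below are illustrative assumptions, not the authors' setup.

    ```python
    from sklearn.datasets import make_regression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic stand-in for well-log predictors of a petrophysical target
    X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
    y = (y - y.mean()) / y.std()                       # standardize the target for both models
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=30, random_state=0)

    models = {
        "SVR (eps-insensitive)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10, epsilon=0.1)),
        "MLP": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, round(mean_squared_error(y_te, model.predict(X_te)), 3))
    ```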

  20. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  1. Size-exclusion chromatography for the determination of the boiling point distribution of high-boiling petroleum fractions.

    Science.gov (United States)

    Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian

    2015-03-01

    The paper describes a new procedure for the determination of the boiling point distribution of high-boiling petroleum fractions using size-exclusion chromatography with refractive index detection. Thus far, the determination of boiling range distribution by chromatography has been accomplished using simulated distillation with gas chromatography with flame ionization detection. This study revealed that in spite of substantial differences in the separation mechanism and the detection mode, the size-exclusion chromatography technique yields similar results for the determination of boiling point distribution compared with simulated distillation and novel empty column gas chromatography. The developed procedure using size-exclusion chromatography has substantial applicability, especially for the determination of exact final boiling point values for high-boiling mixtures, for which a standard high-temperature simulated distillation would have to be used. In this case, the precision of final boiling point determination is low due to the high final temperatures of the gas chromatograph oven and an insufficient thermal stability of both the gas chromatography stationary phase and the sample. Additionally, the use of high-performance liquid chromatography detectors more sensitive than refractive index detection allows a lower detection limit for high-molar-mass aromatic compounds, and thus increases the sensitivity of final boiling point determination. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. One-sample determination of glomerular filtration rate (GFR) in children. An evaluation based on 75 consecutive patients

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Lütken; Kanstrup, Inge-Lis; Henriksen, Jens Henrik Sahl

    2013-01-01

    the plasma radioactivity curve. The one-sample clearance was determined from a single plasma sample collected at 60, 90 or 120 min after injection according to the one-pool method. Results. The overall accuracy of one-sample clearance was excellent with mean numeric difference to the reference value of 0.7-1.7 mL/min. In 64 children, the one-sample clearance was within ± 4 mL/min of the multiple-sample value. However, in 11 children the numeric difference exceeded 4 mL/min (4.4-19.5). Analysis of age, body size, distribution volume, indicator retention time, clearance level, curve fitting, and sampling... fraction (15%) larger discrepancies are found. If an accurate clearance value is essential a multiple-sample determination should be performed....

  3. Influence of particle size of wear metal on the spectrometric oil analysis programme (SOAP), demonstrated by the determination of iron by AAS

    Energy Technology Data Exchange (ETDEWEB)

    Klaegler, S.H.; Jantzen, E.

    1982-02-01

    The possibility that there might be a relation between particle size of wear metal and spectrometric determination (e.g. of the iron content in used lubricating oils) has been examined. In this connection it had to be clarified from which particle size of the iron wear the Fe content determined by direct AAS (solution of the oil sample) is in agreement with the true value in the used oil. The determination of the absolute iron content was performed by a colorimetric method preceded by an incineration of the used oil. Contrary to other publications, in which work is based on spherical iron particles as a simulated wear, the test described here relates to true wear particles. To obtain the total iron wear from a gear oil it was filtered off from the used oil and afterwards separated into defined particle size ranges by a procedure specially developed for this purpose. The different groups of scaly particles, which were collected in this way, were then mixed homogeneously into fresh luboil samples according to their sizes. The determination of the iron content from these newly mixed luboil samples was carried out 1. by direct AAS, 2. by AAS after incineration of the oil samples and 3. by a colorimetric method (to obtain the absolute value of the iron content). The results showed a recovery of the iron of only 50% if the wear particles were bigger than about 2 μm. That means that the true value of the iron content in a used lubricating oil is found by direct AAS only if the particle size is ≤ 1 μm.

  4. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examined the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) percentage difference between the reported and calculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  5. [Ultra-Fine Pressed Powder Pellet Sample Preparation XRF Determination of Multi-Elements and Carbon Dioxide in Carbonate].

    Science.gov (United States)

    Li, Xiao-li; An, Shu-qing; Xu, Tie-min; Liu, Yi-bo; Zhang, Li-juan; Zeng, Jiang-ping; Wang, Na

    2015-06-01

    The main analytical errors of pressed powder pellets of carbonate come from the particle-size effect and the mineral effect. In this article, in order to eliminate the particle-size effect, ultra-fine pressed powder pellet sample preparation is used for the determination of multiple elements and carbon dioxide in carbonate. To prepare the ultra-fine powder, a FRITSCH planetary micro mill with tungsten carbide media is utilized. To overcome agglomeration during grinding, wet grinding is preferred. The surface morphology of the pellet becomes smoother and neater, and the Compton scattering effect is reduced, as the particle size decreases. The intensity of a spectral line varies with particle size; generally, the intensity increases as the particle size decreases. However, when the particle size of more than one component of the material is decreased, the intensity of the spectral line may increase for S, Si and Mg, or decrease for Ca, Al, Ti and K, depending on the respective mass absorption coefficients. The change of phase composition with milling is also investigated, and the penetration depth for each element is given from theoretical calculation. When the sample is ground to a particle size smaller than the penetration depth of all the analytes, the effect of particle size on spectral line intensity is greatly reduced. In the experiment, when the sample is ground to less than 8 μm (d95), the particle-size effect is largely eliminated; with the correction methods of theoretical alpha coefficients and empirical coefficients, 14 major, minor and trace elements in the carbonate can be determined accurately, and the precision of the method is much improved in terms of RSD. For light elements such as carbon, the fluorescence yield is low and the interference is serious; with the multilayer crystal PX4, a coarse collimator and empirical correction, the X-ray spectrometer can be used to determine the carbon dioxide in the carbonate

  6. The Effect of Sterilization on Size and Shape of Fat Globules in Model Processed Cheese Samples

    Directory of Open Access Journals (Sweden)

    B. Tremlová

    2006-01-01

    Full Text Available Model cheese samples from 4 independent productions were heat sterilized (117 °C, 20 minutes) after the melting process and packing, with the aim of prolonging their durability. The objective of the study was to assess changes in the size and shape of fat globules due to heat sterilization by using image analysis methods. The study included a selection of suitable methods for preparing mounts, taking microphotographs and making overlays for automatic processing of the photographs by an image analyser, ascertaining parameters to determine the size and shape of fat globules, and statistical analysis of the results obtained. The results of the experiment suggest that changes in the shape of fat globules due to heat sterilization are not unequivocal. We found that the size of fat globules was significantly increased (p < 0.01) due to heat sterilization (117 °C, 20 min), and the shares of small fat globules (up to 500 μm2, or 100 μm2) in the samples of heat-sterilized processed cheese were decreased. The results imply that the image analysis method is very useful when assessing the effect of the technological process on the quality of processed cheese.

  7. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or a suitable approximate distribution, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and the correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) sample approximate-allocation with the correctly applied standard binomial approximation, (2) sample approximate-allocation with the improved binomial approximation, and (3) sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed in EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
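    The practical difference between the binomial (with replacement) and hypergeometric (without replacement) plans is easy to demonstrate: find the smallest sample that detects at least one defective item with a target probability. This is a generic attribute-sampling sketch, not the IAEA's actual allocation algorithm; the stratum figures are illustrative assumptions.

    ```python
    from scipy.stats import binom, hypergeom

    def n_binomial(N, defectives, detect_prob):
        """Smallest n with P(at least one defect) >= detect_prob, sampling with replacement."""
        p = defectives / N
        n = 1
        while 1 - binom.pmf(0, n, p) < detect_prob:
            n += 1
        return n

    def n_hypergeometric(N, defectives, detect_prob):
        """Same target, sampling without replacement (exact for a finite population)."""
        n = 1
        while 1 - hypergeom.pmf(0, N, defectives, n) < detect_prob:
            n += 1
        return n

    # Illustrative stratum: 200 items, 10 of which would be defective under diversion
    print(n_binomial(200, 10, 0.95), n_hypergeometric(200, 10, 0.95))
    ```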

  8. Determination of the particle size distribution in a powder using radiotracers

    International Nuclear Information System (INIS)

    Revilla D, R.

    1974-01-01

    To determine the particle size distribution of a powder experimentally, the sieve (mesh) method is generally used. This method has the disadvantage that the fine structure of the distribution is not observed in detail. In this work, a method for obtaining the particle size distribution using radiotracers is presented. In the distribution obtained by this method, the fine structure is observed in greater detail than in the results obtained by the classical sieve method; the radiotracer method therefore has higher resolution for this experimental determination. Chapter 1 gives a brief analysis of the theoretical aspects related to the method: the first part analyses the behaviour of a particle settling in a fluid, and the second part deals with the radioactivity of an activated material as well as its detection. Chapter 2 describes the method and discusses the experimental problems of applying it to a sample of alumina crystals. Chapter 3 shows the results obtained and the error calculations for those results. Finally, chapter 4 gives the conclusions and recommendations with which it is possible to obtain better results and improve on those obtained in this work. (Author)

  9. Bovine liver sample preparation and micro-homogeneity study for Cu and Zn determination by solid sampling electrothermal atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Nomura, Cassiana S.; Silva, Cintia S.; Nogueira, Ana R.A.; Oliveira, Pedro V.

    2005-01-01

    This work describes a systematic study of bovine liver sample preparation for Cu and Zn determination by solid sampling electrothermal atomic absorption spectrometry. The main parameters investigated were sample drying, grinding process, particle size, sample size, microsample homogeneity, and their relationship with the precision and accuracy of the method. A bovine liver sample was prepared using different drying procedures: (1) freeze drying, and (2) drying in a household microwave oven followed by drying in a stove at 60 deg. C until constant mass. Ball and cryogenic mills were used for grinding. Less sensitive wavelengths for Cu (216.5 nm) and Zn (307.6 nm), and Zeeman-based three-field background correction for Cu were used to diminish the sensitivities. The pyrolysis and atomization temperatures adopted were 1000 deg. C and 2300 deg. C for Cu, and 700 deg. C and 1700 deg. C for Zn, respectively. For both elements, it was possible to calibrate the spectrometer with aqueous solutions. The use of 250 μg of W + 200 μg of Rh as permanent chemical modifier was imperative for Zn. Under these conditions, the characteristic mass and detection limit were 1.4 ng and 1.6 ng for Cu, and 2.8 ng and 1.3 ng for Zn, respectively. The results showed good agreement (95% confidence level) for homogeneity of the entire material (> 200 mg) when the sample was dried in microwave/stove and ground in a cryogenic mill. The microsample homogeneity study showed that Zn is more dependent on the sample pretreatment than Cu. The bovine liver sample prepared in microwave/stove and ground in a cryogenic mill presented lower relative standard deviations for Cu than for Zn. Good accuracy and precision were observed for bovine liver masses higher than 40 μg for Cu and 30 μg for Zn. The concentrations of Cu and Zn in the prepared bovine liver sample were 223 mg kg⁻¹ and 128 mg kg⁻¹, respectively. The relative standard deviations were lower than 6% (n = 5). The

  10. Porosity and pore size distribution determination of Tumblagooda formation sandstone by X-ray microtomography

    International Nuclear Information System (INIS)

    Fernandes, Jaquiel S.; Appoloni, Carlos R.; Moreira, Anderson C.

    2007-01-01

    Evaluation of the microstructural parameters of reservoir rocks is very important to the petroleum industry. This work presents total porosity and pore size distribution measurements of a sandstone sample from the Tumblagooda formation, collected at Kalbarri National Park in Australia. Porosity and pore size distribution were determined using X-ray microtomography and imaging techniques. For these measurements, a Skyscan model 1172 micro-CT (μ-CT) system with cone beam was employed, operated with a 1 mm Al filter at 80 kV and 125 μA, together with a 2000 x 1048 pixel CCD camera. The sample was rotated from 0 deg to 180 deg, in steps of 0.5 deg. For the considered sample, this equipment provided images with 2.9 μm spatial resolution. Six hundred 2-D images were reconstructed with the Skyscan NRecon software and analyzed with the aid of the Imago software, developed at the Laboratory of Porous Media and Thermophysical Properties (LMPT), Department of Mechanical Engineering, Federal University of Santa Catarina, Brazil, in association with the Brazilian software company Engineering Simulation and Scientific Software (ESSS), and the Petroleo Brasileiro SA (PETROBRAS) Research and Development Center (CENPES). The determined average porosity was 11.45 ± 1.53%. Ninety-five percent of the porous phase corresponds to pores with radii ranging from 2.9 to 85.2 μm, with the largest frequency (7.7%) at a radius of 11.7 μm. (author)

  11. Aggregate size and structure determination of nanomaterials in physiological media: importance of dynamic evolution

    Science.gov (United States)

    Afrooz, A. R. M. Nabiul; Hussain, Saber M.; Saleh, Navid B.

    2014-12-01

    Most in vitro nanotoxicological assays are performed after 24 h exposure. However, in determining size and shape effect of nanoparticles in toxicity assays, initial characterization data are generally used to describe experimental outcome. The dynamic size and structure of aggregates are typically ignored in these studies. This brief communication reports dynamic evolution of aggregation characteristics of gold nanoparticles. The study finds that gradual increase in aggregate size of gold nanospheres (AuNS) occurs up to 6 h duration; beyond this time period, the aggregation process deviates from gradual to a more abrupt behavior as large networks are formed. Results of the study also show that aggregated clusters possess unique structural conformation depending on nominal diameter of the nanoparticles. The differences in fractal dimensions of the AuNS samples likely occurred due to geometric differences, causing larger packing propensities for smaller sized particles. Both such observations can have profound influence on dosimetry for in vitro nanotoxicity analyses.
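
    One common way to quantify the structural conformation of such aggregates is through a mass fractal dimension, estimated from the scaling of the number of primary particles per aggregate with the aggregate radius of gyration. The sketch below fits that power law to hypothetical aggregate data; it illustrates the concept only and is not the authors' light-scattering analysis:

```python
import numpy as np

def fractal_dimension(radii_of_gyration_nm, particles_per_aggregate):
    """Slope of log(N) vs log(Rg): mass fractal dimension d_f (N ~ Rg**d_f)."""
    slope, _ = np.polyfit(np.log(radii_of_gyration_nm),
                          np.log(particles_per_aggregate), 1)
    return slope

# Hypothetical aggregates: radius of gyration (nm) and primary-particle count
rg = np.array([60, 90, 140, 210, 320, 480])
n = np.array([12, 28, 70, 160, 400, 950])
print(f"d_f ~ {fractal_dimension(rg, n):.2f}")
```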

  12. Aggregate size and structure determination of nanomaterials in physiological media: importance of dynamic evolution

    International Nuclear Information System (INIS)

    Afrooz, A. R. M. Nabiul; Hussain, Saber M.; Saleh, Navid B.

    2014-01-01

    Most in vitro nanotoxicological assays are performed after 24 h exposure. However, in determining size and shape effect of nanoparticles in toxicity assays, initial characterization data are generally used to describe experimental outcome. The dynamic size and structure of aggregates are typically ignored in these studies. This brief communication reports dynamic evolution of aggregation characteristics of gold nanoparticles. The study finds that gradual increase in aggregate size of gold nanospheres (AuNS) occurs up to 6 h duration; beyond this time period, the aggregation process deviates from gradual to a more abrupt behavior as large networks are formed. Results of the study also show that aggregated clusters possess unique structural conformation depending on nominal diameter of the nanoparticles. The differences in fractal dimensions of the AuNS samples likely occurred due to geometric differences, causing larger packing propensities for smaller sized particles. Both such observations can have profound influence on dosimetry for in vitro nanotoxicity analyses.

  13. Aggregate size and structure determination of nanomaterials in physiological media: importance of dynamic evolution

    Energy Technology Data Exchange (ETDEWEB)

    Afrooz, A. R. M. Nabiul [The University of Texas, Civil, Architectural and Environmental Engineering (United States); Hussain, Saber M. [Wright-Patterson AFB, Human Effectiveness Directorate, 711th Human Performance Wing, Air Force Research Laboratory (United States); Saleh, Navid B., E-mail: navid.saleh@utexas.edu [The University of Texas, Civil, Architectural and Environmental Engineering (United States)

    2014-12-15

    Most in vitro nanotoxicological assays are performed after 24 h exposure. However, in determining size and shape effect of nanoparticles in toxicity assays, initial characterization data are generally used to describe experimental outcome. The dynamic size and structure of aggregates are typically ignored in these studies. This brief communication reports dynamic evolution of aggregation characteristics of gold nanoparticles. The study finds that gradual increase in aggregate size of gold nanospheres (AuNS) occurs up to 6 h duration; beyond this time period, the aggregation process deviates from gradual to a more abrupt behavior as large networks are formed. Results of the study also show that aggregated clusters possess unique structural conformation depending on nominal diameter of the nanoparticles. The differences in fractal dimensions of the AuNS samples likely occurred due to geometric differences, causing larger packing propensities for smaller sized particles. Both such observations can have profound influence on dosimetry for in vitro nanotoxicity analyses.

  14. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    International Nuclear Information System (INIS)

    Haugboel, Steven; Pinborg, Lars H.; Arfan, Haroon M.; Froekjaer, Vibe M.; Svarer, Claus; Knudsen, Gitte M.; Madsen, Jacob; Dyrby, Tim B.

    2007-01-01

    To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. In cortical brain regions with a high density of 5-HT2A receptors, the outcome parameter (binding potential, BP1) showed high reproducibility, with a median difference between the two group measurements of 6% (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high receptor density is approximately 27, whereas for low receptor binding regions the required sample size is substantially higher. This study demonstrates that [18F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT2A receptor density. Moreover, partial volume correction considerably reduces the sample size required to detect regional changes between groups. (orig.)
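
    The link between between-subject variability and the required group size can be sketched with a standard two-sample normal approximation: the larger the coefficient of variation of the binding potential, the more subjects are needed to detect a given percentage difference. The numbers below are placeholders, not the values from this study:

```python
import math
from scipy.stats import norm

def n_per_group(cv, rel_diff, alpha=0.05, power=0.80):
    """Subjects per group to detect a relative difference `rel_diff` between
    two group means, given a between-subject coefficient of variation `cv`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * cv / rel_diff) ** 2)

# Placeholder CVs: a low-variability and a high-variability region
for cv in (0.25, 0.45):
    print(cv, n_per_group(cv, rel_diff=0.20))
```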

  15. Determination of denaturated proteins and biotoxins by on-line size-exclusion chromatography-digestion-liquid chromatography-electrospray mass spectrometry

    NARCIS (Netherlands)

    Carol, J.; Gorseling, M.C.J.K.; Jong, C.F. de; Lingeman, H.; Kientz, C.E.; Baar, B.L.M. van; Irth, H.

    2005-01-01

    A multidimensional analytical method for the rapid determination and identification of proteins has been developed. The method is based on size-exclusion fractionation of protein-containing samples, subsequent on-line trypsin digestion and desalting, and reversed-phase high-performance liquid chromatography coupled to electrospray mass spectrometry.

  16. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
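
    The cited modification builds on a sample size formula for comparing a continuous covariate between two groups with unequal variances and unequal group sizes. The sketch below shows that general normal-approximation calculation with placeholder inputs; it conveys the structure of such formulas and is not the authors' exact method, which includes additional correction terms:

```python
import math
from scipy.stats import norm

def two_group_n(delta, sd_cases, sd_controls, case_fraction,
                alpha=0.05, power=0.80):
    """Total sample size for detecting a mean covariate difference `delta`
    between cases and controls with unequal SDs and unequal group sizes."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    k = case_fraction / (1 - case_fraction)          # cases per control
    n_controls = (sd_controls ** 2 + sd_cases ** 2 / k) * (z / delta) ** 2
    n_cases = k * n_controls
    return math.ceil(n_cases) + math.ceil(n_controls)

# Covariate standardized to SD 1 in controls, 0.4 SD shift in cases,
# population prevalence (case fraction) of 0.2 -- all placeholder values
print(two_group_n(delta=0.4, sd_cases=1.0, sd_controls=1.0, case_fraction=0.2))
```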

  17. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M=3.62years) examined the conditions that enable young children to use the sample size principle (SSP) of induction-the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. EDXRF applied to the chemical element determination of small invertebrate samples

    Energy Technology Data Exchange (ETDEWEB)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de, E-mail: marcelo_rlm@hotmail.com, E-mail: marianasantos_ufpe@hotmail.com, E-mail: rebecanuclear@gmail.com, E-mail: thomasmarques@live.com.pt, E-mail: ejfranca@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2015-07-01

    Energy Dispersive X-Ray Fluorescence (EDXRF) is a fast analytical technique of easy operation, but it demands reliable analytical curves owing to the intrinsic matrix dependence and interferences during analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented and a group of chemical elements can be determined, depending on their concentrations. Particularly for invertebrates, EDXRF offers the advantage of analyzing small samples, since a collimator can be used to direct the incident X-rays onto a small surface of the analyzed sample. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analyzed. After sampling with pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered with XRF polyethylene film. Analyses were performed at a pressure lower than 30 Pa, varying voltage and electric current according to the chemical element to be analyzed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry after acid treatment (a mixture of nitric acid and hydrogen peroxide) of the samples. Compared with the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the E_n numbers were in the range of -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (lower than 100 mg), with the advantage of preserving the samples. (author)
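
    The E_n score used for that comparison is the usual proficiency-testing statistic: the difference between the laboratory result and the certified value divided by the combined expanded uncertainty, with |E_n| <= 1 taken as agreement. A small sketch with placeholder values:

```python
import math

def e_n(lab_value, ref_value, u_lab, u_ref):
    """E_n number: (lab - reference) / sqrt(U_lab**2 + U_ref**2),
    where U are expanded uncertainties; |E_n| <= 1 indicates agreement."""
    return (lab_value - ref_value) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Placeholder Zn result (mg/kg) compared against a certified value
print(round(e_n(lab_value=135.0, ref_value=137.0, u_lab=6.0, u_ref=4.6), 2))
```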

  19. The Determinants of Venture Capital Portfolio Size: Empirical Evidence

    OpenAIRE

    Douglas J. Cumming

    2006-01-01

    This paper explores factors that affect portfolio size among a sample of venture capital financing data from 214 Canadian funds. Four categories of factors affect portfolio size: (1) the venture capital funds' characteristics, including the type of fund, fund duration, fund-raising, and the number of venture capital fund managers; (2) the entrepreneurial firms' characteristics, including stage of development, technology, and geographic location; (3) the nature of the financing transactions, i...

  20. Socioeconomic Determinants of Bullying in the Workplace: A National Representative Sample in Japan

    OpenAIRE

    Tsuno, Kanami; Kawakami, Norito; Tsutsumi, Akizumi; Shimazu, Akihito; Inoue, Akiomi; Odagiri, Yuko; Yoshikawa, Toru; Haratani, Takashi; Shimomitsu, Teruichi; Kawachi, Ichiro

    2015-01-01

    Bullying in the workplace is an increasingly recognized threat to employee health. We sought to test three hypotheses related to the determinants of workplace bullying: power distance at work; safety climate; and frustration related to perceived social inequality. A questionnaire survey was administered to a nationally representative community-based sample of 5,000 residents in Japan aged 20-60 years. The questionnaire included questions about employment, occupation, company size, education, ...

  1. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Science.gov (United States)

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately plan the sample size for future studies of treatment effects on β-cell function.
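
    As a sketch of why transformed outcomes are convenient for sizing such trials: a relative (percentage) treatment difference becomes an additive shift of log(1 + r) on the natural-log scale, so a standard two-sample formula can be applied to the transformed values. The residual SD and other numbers below are placeholders, not the TrialNet estimates:

```python
import math
from scipy.stats import norm

def n_per_group_log_scale(rel_diff, sd_log, alpha=0.05, power=0.80):
    """Subjects per group to detect a relative difference `rel_diff` in a
    positive outcome analyzed on the log scale with residual SD `sd_log`."""
    delta = math.log(1 + rel_diff)          # shift induced on the log scale
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd_log / delta) ** 2)

# Placeholder residual SD of 0.45 on the log scale, 30% relative treatment effect
print(n_per_group_log_scale(rel_diff=0.30, sd_log=0.45))
```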

  2. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately plan the sample size for future studies of treatment effects on β-cell function.

  3. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    Energy Technology Data Exchange (ETDEWEB)

    Davarani, Saied Saeed Hosseiny, E-mail: ss-hosseiny@cc.sbu.ac.ir [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Rezayati zad, Zeinab [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Taheri, Ali Reza; Rahmatian, Nasrin [Islamic Azad University, Ilam Branch, Ilam (Iran, Islamic Republic of)

    2017-02-01

    In this research, for the first time, selective separation and determination of Azathioprine is demonstrated using a molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λmax 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine as the template molecule and methacrylic acid as the functional monomer. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, response surface methodology (RSM) based on a Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions, and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range was 0.01–2.5 mg L⁻¹ and the limit of detection was 0.008 mg L⁻¹. The recoveries for the analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as a sorbent in SPE for the selective recognition, preconcentration and determination of Azathioprine, with the response surface methodology based on a Box–Behnken design, and was successfully applied to the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer has been synthesized by a precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine

  4. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    International Nuclear Information System (INIS)

    Davarani, Saied Saeed Hosseiny; Rezayati zad, Zeinab; Taheri, Ali Reza; Rahmatian, Nasrin

    2017-01-01

    In this research, for the first time, selective separation and determination of Azathioprine is demonstrated using a molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λmax 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine as the template molecule and methacrylic acid as the functional monomer. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR) and field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, response surface methodology (RSM) based on a Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined under optimal experimental conditions, and the proposed method was applied to the analysis of Azathioprine. The linear dynamic range was 0.01–2.5 mg L⁻¹ and the limit of detection was 0.008 mg L⁻¹. The recoveries for the analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new nano-sized imprinted polymer was synthesized and applied as a sorbent in SPE for the selective recognition, preconcentration and determination of Azathioprine, with the response surface methodology based on a Box–Behnken design, and was successfully applied to the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nano-sized imprinted polymer has been synthesized by a precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine-molecular imprinting
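
    The Box–Behnken design mentioned above supports fitting a quadratic response surface to the extraction recovery as a function of the coded factors. The sketch below fits such a surface by ordinary least squares to a hypothetical 15-run, three-factor Box–Behnken layout; the factor labels and recovery values are invented for illustration and are not the study's data:

```python
import numpy as np

# 15-run Box-Behnken design for three coded factors
# (e.g. pH, sorbent mass, eluent volume -- hypothetical labels)
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
recovery = np.array([82, 90, 85, 93, 80, 88, 84, 91,
                     83, 92, 86, 94, 96, 95, 97], dtype=float)

def quadratic_model_matrix(X):
    """Intercept, linear, two-way interaction and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

coef, *_ = np.linalg.lstsq(quadratic_model_matrix(X), recovery, rcond=None)
print(np.round(coef, 2))  # fitted response-surface coefficients
```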

  5. Determination of acrolein, ethanol, volatile acidity, and copper in different samples of sugarcane spirits

    Directory of Open Access Journals (Sweden)

    José Masson

    2012-09-01

    Full Text Available Seventy-one samples of sugarcane spirits from small and average-size stills produced in northern and southern Minas Gerais (Brazil) were analyzed for acrolein using HPLC (High Performance Liquid Chromatography). Ethanol and copper concentrations and volatile acidity were also determined according to methods established by the Ministry of Agriculture, Livestock and Supply (MAPA). A total of 9.85% of the samples showed levels of acrolein above the legal limit, while the copper concentrations of 21.00% of the samples and the volatile acidity of 8.85% of the samples were higher than the limits established by Brazilian legislation. The concentration of acrolein varied from 0 to 21.97 mg.100 mL⁻¹ of ethanol. However, no significant difference (at the 5% level) was observed between the samples produced in northern and southern Minas Gerais. The method used for determination of acrolein in sugarcane spirits involved the formation of a derivative with 2,4-dinitrophenylhydrazine (2,4-DNPH) and subsequent analysis by HPLC.

  6. Quantitative determination of grain sizes by means of scattered ultrasound

    International Nuclear Information System (INIS)

    Goebbels, K.; Hoeller, P.

    1976-01-01

    The scattering of ultrasounds makes possible the quantitative determination of grain sizes in metallic materials. Examples of measurements on steels with grain sizes between ASTM 1 and ASTM 12 are given

  7. Determination of size distribution of small DNA fragments by polyacrylamide gel electrophoresis

    International Nuclear Information System (INIS)

    Lau How Mooi

    1998-01-01

    The size distribution of DNA fragments is normally determined by agarose gel electrophoresis, including conventional DNA banding pattern analysis. However, this method is only suitable for large DNA, in the kilobase to megabase pair range; DNA shorter than a kilobase pair is difficult to quantify by the agarose gel method. Polyacrylamide gel electrophoresis, however, can be used to measure DNA fragments shorter than a kilobase pair, down to less than ten base pairs. The method is also suitable for quantifying smaller DNA, single-stranded polymers and even some proteins, provided suitable standards are available. This report gives a detailed description of the preparation of the polyacrylamide gel and of the experimental set-up. Possible uses of the method and a comparison with DNA size standards are also shown. The method was used to determine the size distribution of fragmented DNA after calf-thymus DNA had been exposed to various types and doses of radiation, with the standards used to determine the sizes of the fragments. The higher the dose, the greater the measured amount of small DNA fragments.

  8. Microscopic determination of the PuO2 grain size and pore size distribution of MOX pellets with an image analysis system

    International Nuclear Information System (INIS)

    Vandezande, J.

    2000-01-01

    The industrial way to obtain the Pu distribution in a MOX pellet is by image analysis. The PuO2 grains are made visible by alpha-autoradiography. Along with the Pu distribution, the pore structure is also examined; the latter is determined on the unetched sample. After visualization of the sample structure, the sample is evaluated with an image analysis system. Each image is enhanced and a distinction is made between the objects to be measured and the matrix. The relevant parameters are then analyzed. When the overall particle distribution is wanted, all identified particles are measured and classified into size groups based on a logarithmic scale. The conversion of two-dimensional diameters to three-dimensional diameters, where required, is accomplished by application of the Saltykov algorithm. When a single object is of interest, the object is selected interactively and the result is reported to the user. (author)

  9. Sample preparation and biomass determination of SRF model mixture using cryogenic milling and the adapted balance method

    Energy Technology Data Exchange (ETDEWEB)

    Schnöller, Johannes, E-mail: johannes.schnoeller@chello.at; Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut

    2014-11-15

    Highlights: • An alternative sample comminution procedure for SRF is tested. • Proof of principle is shown on an SRF model mixture. • The biogenic content of the SRF is analyzed with the adapted balance method. • The novel method combines combustion analysis and a data reconciliation algorithm. • Factors contributing to the variance of the analysis results are statistically quantified. - Abstract: The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. The method shows great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimens, which is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof-of-principle investigation of the sample preparation and analysis of an SRF model mixture using cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The derived sample preparation methodology (cutting mills and cryogenic impact milling) shows better accuracy and precision for the determination of the biomass content than one based solely on cutting mills. The results for the determination of the biogenic fraction are within 1–5% of the data obtained by the reference methods, the selective dissolution method (SDM) and the 14C method (14C-M).

  10. A simple method of correcting for variation of sample thickness in the determination of the activity of environmental samples by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a well established method of determining the activity of radioactive components in environmental samples. It is usual to maintain precisely the same counting geometry in measurements on samples under investigation as in the calibration measurements on standard materials of known activity, thus avoiding perceived uncertainties and complications in correcting for changes in counting geometry. However this may not always be convenient if, as on some occasions, only a small quantity of sample material is available for analysis. A procedure which avoids re-calibration for each sample size is described and is shown to be simple to use without significantly reducing the accuracy of measurement of the activity of typical environmental samples. The correction procedure relates to the use of cylindrical samples at a constant distance from the detector, the samples all having the same diameter but various thicknesses being permissible. (author)
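
    The paper's own correction procedure is not reproduced here, but the underlying idea of a thickness-dependent correction can be illustrated with the common slab self-absorption factor (1 - exp(-μt))/(μt): an efficiency calibrated for one sample thickness is rescaled to another thickness of the same material. A hedged sketch with placeholder attenuation values:

```python
import math

def self_absorption_factor(mu_per_cm, thickness_cm):
    """Ratio of detected to emitted photons for a uniform slab source,
    relative to an infinitely thin sample (simple normal-incidence model)."""
    x = mu_per_cm * thickness_cm
    return (1.0 - math.exp(-x)) / x

def rescaled_efficiency(eff_ref, mu_per_cm, t_ref_cm, t_sample_cm):
    """Efficiency for a sample thickness differing from the calibration thickness."""
    return eff_ref * (self_absorption_factor(mu_per_cm, t_sample_cm)
                      / self_absorption_factor(mu_per_cm, t_ref_cm))

# Placeholder values: mu = 0.15 /cm at the photon energy of interest,
# calibration done with a 4 cm thick standard, sample only 1.5 cm thick
print(round(rescaled_efficiency(eff_ref=0.025, mu_per_cm=0.15,
                                t_ref_cm=4.0, t_sample_cm=1.5), 4))
```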

  11. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  12. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch Model approaches for detecting differential item functioning (DIF), viewed from the perspective of sample size. The two DIF detection methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  13. Determination of a novel size proxy in comparative morphometrics

    Directory of Open Access Journals (Sweden)

    Andrew Gallagher

    2015-09-01

    Full Text Available Absolute size is a critical determinant of organismal biology, yet there exists no real consensus as to what particular metric of ‘size’ is empirically valid in assessments of extinct mammalian taxa. The methodological approach of JE Mosimann has found extensive favour in ‘size correction’ in comparative morphometrics, but not ‘size prediction’ in palaeontology and palaeobiology. Analyses of five distinct mammalian data sets confirm that a novel size variate (GMSize derived from k=8 dimensions of the postcranial skeleton effectively satisfies all expectations of the Jolicoeur–Mosimann theorem of univariate and multivariate size. On the basis of strong parametric correlations between the k=8 variates and between scores derived from the first principal component and geometric mean size (GMSize in all series, this novel size variable has considerable utility in comparative vertebrate morphometrics and palaeobiology as an appropriate descriptor of individual size in extant and extinct taxa.
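
    The size variable described above is, in essence, a Mosimann-style geometric mean of several linear measurements, which can then be compared against scores on the first principal component of the log-transformed data. A minimal sketch with simulated measurements standing in for the k = 8 postcranial dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated data: 50 individuals, k = 8 linear dimensions (arbitrary units)
true_size = rng.lognormal(mean=3.0, sigma=0.3, size=50)
X = true_size[:, None] * rng.lognormal(mean=0.0, sigma=0.05, size=(50, 8))

log_X = np.log(X)
gm_size = np.exp(log_X.mean(axis=1))          # geometric mean size per individual

# First principal component of the log-transformed, mean-centred data
centred = log_X - log_X.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1_scores = centred @ vt[0]                  # sign of PC1 is arbitrary

print(f"corr(log GMSize, PC1) = {np.corrcoef(np.log(gm_size), pc1_scores)[0, 1]:.3f}")
```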

  14. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    DEFF Research Database (Denmark)

    Haugbøl, Steven; Pinborg, Lars H; Arfan, Haroon M

    2006-01-01

    PURPOSE: To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. METHODS: For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. RESULTS: In cortical brain regions with a high density of 5-HT2A receptors, the outcome parameter (binding potential, BP1) showed high reproducibility, with a median difference between the two group measurements of 6% (range 5-12%).

  15. Gas chromatographic determination of cholesterol from food samples using extraction/saponification method

    International Nuclear Information System (INIS)

    Ali, Z.M.; Soomro, A.S.A.

    2007-01-01

    A simple and fast one-step extraction/saponification with ethanolic NaOH/KOH (sodium hydroxide/potassium hydroxide) was compared and validated for the determination of cholesterol in 10 locally available edible oil and egg samples. The importance of edible oils and eggs in the routine diet is unquestionable, but the presence of cholesterol is considered a risk factor for coronary heart disease and hypertension, and lowering cholesterol intake in order to reduce this risk is widely accepted. The cholesterol in the edible oils and eggs was determined by gas chromatography, with elution from a column (2x3 mm i.d.) packed with 3% OV-101 on Chromosorb G/NAW, 80-100 mesh, at 250-300 deg. C with a programmed heating rate of 3 deg. C/min. The nitrogen gas flow rate was 40 ml/min. The cholesterol samples were run under these conditions after selective extraction in diethyl ether. The calibration was linear within the 50-500 μg/ml concentration range. The amounts of cholesterol detected were 12.92-18.05 mg/g in edible oil and 117.54-143.42 mg/g in egg samples, with RSD of 1.3-2.7%. (author)
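
    The linear calibration quoted above (50-500 μg/ml) is the kind of curve that can be fitted by ordinary least squares and then inverted to convert peak areas into concentrations. A small sketch with invented peak areas, not the paper's data:

```python
import numpy as np

# Hypothetical calibration standards (ug/mL) and their GC peak areas
conc_std = np.array([50, 100, 200, 300, 400, 500], dtype=float)
area_std = np.array([1.10e4, 2.25e4, 4.38e4, 6.62e4, 8.81e4, 1.10e5])

slope, intercept = np.polyfit(conc_std, area_std, 1)   # linear calibration fit

def concentration_from_area(peak_area):
    """Back-calculate concentration (ug/mL) from a measured peak area."""
    return (peak_area - intercept) / slope

print(f"sample concentration ~ {concentration_from_area(5.3e4):.1f} ug/mL")
```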

  16. Comparison of sampling and test methods for determining asphalt content and moisture correction in asphalt concrete mixtures.

    Science.gov (United States)

    1985-03-01

    The purpose of this report is to identify the difference, if any, in AASHTO and OSHD test procedures and results. This report addresses the effect of the size of samples taken in the field and evaluates the methods of determining the moisture content...

  17. Volumetric determination of tumor size abdominal masses. Problems -feasabilities

    International Nuclear Information System (INIS)

    Helmberger, H.; Bautz, W.; Sendler, A.; Fink, U.; Gerhardt, P.

    1995-01-01

    The most important indication for clinically reliable volumetric determination of tumor size in the abdominal region is the monitoring of liver metastases during chemotherapy. Volume determination can be effectively realized using 3D reconstruction; for this, the primary data set must be complete and contiguous, and the mass should be depicted with strong enhancement and free of artifacts. At present, this prerequisite can only be met using thin-slice spiral CT. Phantom studies have shown that a semiautomatic reconstruction algorithm is recommendable. The basic difficulties in volumetric determination of tumor size are the problems of differentiating active malignant mass from changes in the surrounding tissue, as well as the lack of histomorphological correlation. Possible indications for volumetry of gastrointestinal masses in the assessment of neoadjuvant therapeutic concepts are under scientific evaluation. (orig./MG)

  18. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    International Nuclear Information System (INIS)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-01-01

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.

  19. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  20. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that are sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate in any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example fixing the sample size of the control group, leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Determination of particle size and content of metals in the atmosphere of ZMCM (Metropolitan Zone of Mexico City)

    International Nuclear Information System (INIS)

    Aldape U, F.; Flores M, J.; Diaz, R.V.; Garcia G, R.

    1994-01-01

    The presence of metals in suspended particles within the breathable fraction of the atmosphere of Mexico City was determined and quantified. Collection was carried out simultaneously at three places in the city, using collectors of the stacked filter unit (SFU) type, which allow the separation of particles according to their size. The SFU collectors separate two size fractions: 'coarse' mass, from 2.5 to 1.5 μm, and 'fine' mass, for particles smaller than 2.5 μm. The analysis of the samples was performed by means of the PIXE method. Samples were irradiated with a proton beam, and from the X-ray spectra the elements were identified and quantified, which allowed the temporal behavior of the concentrations per element to be established for coarse and fine mass at each of the sampling places. (Author)

  2. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  3. A scenario tree model for the Canadian Notifiable Avian Influenza Surveillance System and its application to estimation of probability of freedom and sample size determination.

    Science.gov (United States)

    Christensen, Jette; Stryhn, Henrik; Vallières, André; El Allaki, Farouk

    2011-05-01

    In 2008, Canada designed and implemented the Canadian Notifiable Avian Influenza Surveillance System (CanNAISS) with six surveillance activities in a phased-in approach. CanNAISS was a surveillance system because it had more than one surveillance activity or component in 2008: passive surveillance; pre-slaughter surveillance; and voluntary enhanced notifiable avian influenza surveillance. Our objectives were to give a short overview of two active surveillance components in CanNAISS; describe the CanNAISS scenario tree model and its application to estimation of probability of populations being free of NAI virus infection and sample size determination. Our data from the pre-slaughter surveillance component included diagnostic test results from 6296 serum samples representing 601 commercial chicken and turkey farms collected from 25 August 2008 to 29 January 2009. In addition, we included data from a sub-population of farms with high biosecurity standards: 36,164 samples from 55 farms sampled repeatedly over the 24 months study period from January 2007 to December 2008. All submissions were negative for Notifiable Avian Influenza (NAI) virus infection. We developed the CanNAISS scenario tree model, so that it will estimate the surveillance component sensitivity and the probability of a population being free of NAI at the 0.01 farm-level and 0.3 within-farm-level prevalences. We propose that a general model, such as the CanNAISS scenario tree model, may have a broader application than more detailed models that require disease specific input parameters, such as relative risk estimates. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
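
    The scenario tree itself is not reproduced here, but the final step of such freedom-from-infection analyses typically combines component sensitivities into a surveillance system sensitivity and then updates the probability of freedom from a prior using Bayes' rule. A generic sketch with placeholder values, assuming the standard formulation PostP(free) = (1 - p) / (1 - p * SSe):

```python
def system_sensitivity(component_sensitivities):
    """Probability that at least one surveillance component detects infection
    at the design prevalences, assuming independent components."""
    miss = 1.0
    for cse in component_sensitivities:
        miss *= (1.0 - cse)
    return 1.0 - miss

def prob_freedom(prior_infection, sse):
    """Posterior probability of freedom after negative surveillance findings."""
    return (1.0 - prior_infection) / (1.0 - prior_infection * sse)

# Placeholder inputs: two components with sensitivities 0.60 and 0.45,
# and a prior probability of infection of 0.05 at the design prevalences
sse = system_sensitivity([0.60, 0.45])
print(round(sse, 3), round(prob_freedom(0.05, sse), 4))
```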

  4. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples

  5. Traceable size determination of PMMA nanoparticles based on Small Angle X-ray Scattering (SAXS)

    Energy Technology Data Exchange (ETDEWEB)

    Gleber, G; Cibik, L; Mueller, P; Krumrey, M [Physikalisch-Technische Bundesanstalt (PTB), Abbestrasse 2-12, 10587 Berlin (Germany); Haas, S; Hoell, A, E-mail: gudrun.gleber@ptb.d [Helmholtz-Zentrum-Berlin fuer Materialien und Energie (HZB), Albert-Einstein-Strasse 15, 12489 Berlin (Germany)

    2010-10-01

    The size and size distribution of PMMA nanoparticles has been investigated with SAXS (small angle X-ray scattering) using monochromatized synchrotron radiation. The uncertainty has contributions from the wavelength or photon energy of the radiation, the scattering angle and the fit procedure for the obtained scattering curves. The wavelength can be traced back to the lattice constant of silicon, and the scattering angle is traceable via geometric measurements of the detector pixel size and the distance between the sample and the detector. SAXS measurements and data evaluations have been performed at different distances and photon energies for two PMMA nanoparticle suspensions with low polydispersity and nominal diameters of 108 nm and 192 nm, respectively, as well as for a mixture of both. The relative variation of the diameters obtained for different experimental conditions was below ± 0.3 %. The determined number-weighted mean diameters of (109.0 ± 0.7) nm and (188.0 ± 1.3) nm, respectively, are close to the nominal values.
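
    As a toy illustration of how a mean particle size can be extracted from the low-q part of a SAXS curve, the sketch below uses a Guinier fit and converts the radius of gyration to a sphere diameter. It ignores polydispersity, smearing and background handling, so it is a conceptual sketch of one fitting step, not the traceable form-factor evaluation described in the paper:

```python
import numpy as np

def guinier_radius(q_nm_inv, intensity):
    """Radius of gyration from a Guinier fit ln I = ln I0 - (Rg*q)**2 / 3,
    valid only for the low-q region (q*Rg below roughly 1.3)."""
    slope, _ = np.polyfit(q_nm_inv ** 2, np.log(intensity), 1)
    return np.sqrt(-3.0 * slope)

# Simulated low-q data for homogeneous spheres of radius 54 nm
radius = 54.0
rg_true = np.sqrt(3.0 / 5.0) * radius                 # Rg of a homogeneous sphere
q = np.linspace(0.003, 1.2 / rg_true, 60)             # keep q*Rg below ~1.2
noise = 1 + 0.01 * np.random.default_rng(0).standard_normal(q.size)
i_obs = np.exp(-(q * rg_true) ** 2 / 3.0) * noise

rg_fit = guinier_radius(q, i_obs)
diameter = 2.0 * np.sqrt(5.0 / 3.0) * rg_fit          # back to sphere diameter
print(f"fitted diameter ~ {diameter:.1f} nm")
```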

  6. Traceable size determination of PMMA nanoparticles based on Small Angle X-ray Scattering (SAXS)

    Science.gov (United States)

    Gleber, G.; Cibik, L.; Haas, S.; Hoell, A.; Müller, P.; Krumrey, M.

    2010-10-01

    The size and size distribution of PMMA nanoparticles has been investigated with SAXS (small angle X-ray scattering) using monochromatized synchrotron radiation. The uncertainty has contributions from the wavelength or photon energy of the radiation, the scattering angle and the fit procedure for the obtained scattering curves. The wavelength can be traced back to the lattice constant of silicon, and the scattering angle is traceable via geometric measurements of the detector pixel size and the distance between the sample and the detector. SAXS measurements and data evaluations have been performed at different distances and photon energies for two PMMA nanoparticle suspensions with low polydispersity and nominal diameters of 108 nm and 192 nm, respectively, as well as for a mixture of both. The relative variation of the diameters obtained for different experimental conditions was below ± 0.3 %. The determined number-weighted mean diameters of (109.0 ± 0.7) nm and (188.0 ± 1.3) nm, respectively, are close to the nominal values.

  7. Traceable size determination of PMMA nanoparticles based on Small Angle X-ray Scattering (SAXS)

    International Nuclear Information System (INIS)

    Gleber, G; Cibik, L; Mueller, P; Krumrey, M; Haas, S; Hoell, A

    2010-01-01

    The size and size distribution of PMMA nanoparticles has been investigated with SAXS (small angle X-ray scattering) using monochromatized synchrotron radiation. The uncertainty has contributions from the wavelength or photon energy of the radiation, the scattering angle and the fit procedure for the obtained scattering curves. The wavelength can be traced back to the lattice constant of silicon, and the scattering angle is traceable via geometric measurements of the detector pixel size and the distance between the sample and the detector. SAXS measurements and data evaluations have been performed at different distances and photon energies for two PMMA nanoparticle suspensions with low polydispersity and nominal diameters of 108 nm and 192 nm, respectively, as well as for a mixture of both. The relative variation of the diameters obtained for different experimental conditions was below ± 0.3 %. The determined number-weighted mean diameters of (109.0 ± 0.7) nm and (188.0 ± 1.3) nm, respectively, are close to the nominal values.

  8. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    Full Text Available The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread over about 1,000,000 ha at different population levels in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, and the system was found to be perfectly adequate.

  9. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactor (Neutron Activation Analysis (NAA) Section), size fractionated particulate matter collection for NAA was started in 2002 at two sites in Japan. The two monitoring sites, 'Tokyo' and 'Sakata', were classified as 'urban' and 'rural'. At each site, two size fractions, namely PM2-10 and PM2 particles (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (sum of PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m3 in Tokyo and 0.022 mg/m3 in Sakata. (author)

  10. Sampling considerations when analyzing micrometric-sized particles in a liquid jet using laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Faye, C.B.; Amodeo, T.; Fréjafon, E. [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France); Delepine-Gilon, N. [Institut des Sciences Analytiques, 5 rue de la Doua, 69100 Villeurbanne (France); Dutouquet, C., E-mail: christophe.dutouquet@ineris.fr [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France)

    2014-01-01

    Pollution of water is a matter of concern all over the earth. Particles are known to play an important role in the transportation of pollutants in this medium. In addition, the emergence of new materials such as NOAA (Nano-Objects, their Aggregates and their Agglomerates) emphasizes the need to develop adapted instruments for their detection. Surveillance of pollutants in particulate form in waste waters in industries involved in nanoparticle manufacturing and processing is a telling example of possible applications of such instrumental development. The LIBS (laser-induced breakdown spectroscopy) technique coupled with the liquid jet as sampling mode for suspensions was deemed as a potential candidate for on-line and real time monitoring. With the final aim in view to obtain the best detection limits, the interaction of nanosecond laser pulses with the liquid jet was examined. The evolution of the volume sampled by laser pulses was estimated as a function of the laser energy applying conditional analysis when analyzing a suspension of micrometric-sized particles of borosilicate glass. An estimation of the sampled depth was made. Along with the estimation of the sampled volume, the evolution of the SNR (signal to noise ratio) as a function of the laser energy was investigated as well. Eventually, the laser energy and the corresponding fluence optimizing both the sampling volume and the SNR were determined. The obtained results highlight intrinsic limitations of the liquid jet sampling mode when using 532 nm nanosecond laser pulses with suspensions. - Highlights: • Micrometric-sized particles in suspensions are analyzed using LIBS and a liquid jet. • The evolution of the sampling volume is estimated as a function of laser energy. • The sampling volume happens to saturate beyond a certain laser fluence. • Its value was found much lower than the beam diameter times the jet thickness. • Particles proved not to be entirely vaporized.

  11. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, as the CI formulas imply, precision continued to improve, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
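    The trade-off described above, between the number of participants and the number of measurements per participant, can be illustrated with a short calculation. The sketch below is not the authors' procedure: the variance components and the measurement frequencies are assumed values chosen only to show how the confidence interval narrows along both dimensions.

```python
# Hedged illustration: CI half-width for mean weekly working hours when both
# sample fluctuation (between-GP) and measurement fluctuation (within-GP)
# contribute. The two standard deviations are assumptions, not study values.
import math

sigma_between = 10.0   # assumed SD of true weekly hours across GPs
sigma_within = 20.0    # assumed SD introduced by sampling a limited number of time slots

def ci_half_width(n_gps, m_measurements, z=1.96):
    var_of_mean = sigma_between**2 / n_gps + sigma_within**2 / (n_gps * m_measurements)
    return z * math.sqrt(var_of_mean)

for n in (10, 50, 100, 300):
    for m in (56, 168):   # e.g. one SMS per 3-h slot vs. one per hour over a week
        print(f"n={n:3d} GPs, m={m:3d} measurements: +/- {ci_half_width(n, m):4.1f} h")
```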

  12. Final report on: Grain size determination in zirconium alloys (IAEA Research Contract No. 6025/Rb.)

    International Nuclear Information System (INIS)

    Martinez M, E.

    1991-12-01

    In spite of the amount of research carried out, the knowledge is still far from complete, and on this basis the International Atomic Energy Agency (IAEA), by means of the Working Group on Water Reactor Fuel Performance and Technology, initiated in 1990 the Coordinated Research Programme named Grain Size Determination In Zirconium Alloys. Several countries were invited to participate and to contribute to the main objective of the programme, which can be stated as: to develop a unified metallographic technique capable of showing the microstructure of zircaloy in a reproducible and uniform manner. To fulfill this objective the following goals were established: A. To measure the grain size and perform a statistical treatment on samples prepared specifically to show different amounts of cold work, recrystallization and grain growth. B. To compare the results obtained by the different laboratories involved in the programme. C. Finally, after the Ugine meeting, also to determine the recrystallization and grain growth kinetics. (Author)

  13. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    Full Text Available We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: 1 prevents animals from settling and clogging with constant bubbling in the sample container; 2 prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump, in order to generate negative pressure, creates a steady flow by drawing air from the receiving conical flask (i.e. vacuum pump, and transfers plankton from the sample container toward the main flowcell of the imaging system and finally into the receiving flask; 3 aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, and since the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e. bootstrapping the sample in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with syringe pump and Field of View (FOV flowcells which can image all particles passing through the flow field; we note that these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis versus conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM image system after ground truthing.

  14. Determination of plutonium in air and smear samples

    International Nuclear Information System (INIS)

    Hinton, E.R. Jr.; Tucker, W.O.

    1981-01-01

    A method has been developed for the determination of plutonium in air samples and smear samples that were collected on filter papers. The sample papers are digested in nitric acid, extracted into 2-thenoyltrifluoroacetone (TTA)-xylene, and evaporated onto stainless steel disks. Alpha spectrometry is employed to determine the activity of each plutonium isotope. Each sample is spiked with plutonium-236. All glassware used in the procedure is disposable. The detection limits are 3 and 5 dpm (disintegrations per minute) for air and smear samples, respectively, with an average recovery of 87%

  15. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
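    For the quantitative-data case, the corresponding calculation can be reproduced with a standard one-way ANOVA power routine; the sketch below is a generic illustration rather than the authors' formulas, and the effect size, alpha and power are assumed values.

```python
# Per-group sample size for a one-factor design with k >= 3 levels,
# using the one-way ANOVA power calculation from statsmodels.
from statsmodels.stats.power import FTestAnovaPower

k = 4             # number of levels of the single factor
effect_f = 0.25   # assumed (Cohen's) effect size f
alpha, power = 0.05, 0.80

n_total = FTestAnovaPower().solve_power(effect_size=effect_f, nobs=None,
                                        alpha=alpha, power=power, k_groups=k)
print(f"total N is about {n_total:.0f}, i.e. roughly {n_total / k:.0f} per group")
```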

  16. Cytoplasmic streaming velocity as a plant size determinant.

    Science.gov (United States)

    Tominaga, Motoki; Kimura, Atsushi; Yokota, Etsuo; Haraguchi, Takeshi; Shimmen, Teruo; Yamamoto, Keiichi; Nakano, Akihiko; Ito, Kohji

    2013-11-11

    Cytoplasmic streaming is active transport widely occurring in plant cells ranging from algae to angiosperms. Although it has been revealed that cytoplasmic streaming is generated by organelle-associated myosin XI moving along actin bundles, the fundamental function in plants remains unclear. We generated high- and low-speed chimeric myosin XI by replacing the motor domains of Arabidopsis thaliana myosin XI-2 with those of Chara corallina myosin XI and Homo sapiens myosin Vb, respectively. Surprisingly, the plant sizes of the transgenic Arabidopsis expressing high- and low-speed chimeric myosin XI-2 were larger and smaller, respectively, than that of the wild-type plant. This size change correlated with acceleration and deceleration, respectively, of cytoplasmic streaming. Our results strongly suggest that cytoplasmic streaming is a key determinant of plant size. Furthermore, because cytoplasmic streaming is a common system for intracellular transport in plants, our system could have applications in artificial size control in plants. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Determination of Slake Durability Index (Sdi) Values on Different Shape of Laminated Marl Samples

    Science.gov (United States)

    Ankara, Hüseyin; Çiçek, Fatma; Talha Deniz, İsmail; Uçak, Emre; Yerel Kandemir, Süheyla

    2016-10-01

    The slake durability index (SDI) test is widely used to determine the disintegration characteristics of weak and clay-bearing rocks in geo-engineering problems. However, because sample pieces of different shapes, in particular irregular pieces, undergo mechanical breakage during the slaking process, the SDI test has some limitations that affect the index values. In addition, the shape and surface roughness of laminated marl samples have a severe influence on the SDI. In this study, a new sample preparation method called the Pasha Method was used to prepare spherical specimens from laminated marl collected from the Seyitomer colliery (SLI). The SDI tests were then performed on specimens of equal size and weight, using three sets of different shapes: spheres, irregular pieces cut parallel to the layers, and irregular pieces cut vertical to the layers. Index values were determined for the three sets subjected to the SDI test for 4 cycles. The index values at the end of the fourth cycle were found to be 98.43, 98.39 and 97.20%, respectively. As seen, the index values of the sphere sample set were higher than those of the irregular sample sets.

  18. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  19. Spectrophotometric Determination of Boron in Environmental Water Samples

    International Nuclear Information System (INIS)

    San San; Khin Win Kyi; Kwaw Naing

    2002-02-01

    The present paper deals with methods for the determination of boron in environmental water samples. The standard methods useful for this determination are discussed thoroughly in this work. Among the standard methods approved by the American Public Health Association, the carmine method was selected for this study. Prior to the determination of boron in the water samples, the precision and accuracy of the methods of choice were examined using standard boron solutions. The determination of boron was carried out on water samples comprising waste water from the Aquaculture Research Centre, University of Yangon; water from the Ayeyarwady River near the Magway Myathalon Pagoda in Magway Division; ground water from Sanchaung Township; and tap water from the Universities' Research Centre, University of Yangon. These water samples were analysed and a statistical treatment of the results was carried out. (author)

  20. Determination of Mercury in Aqueous Samples by Means of Neutron Activation Analysis with an Account of Flux Disturbances

    Energy Technology Data Exchange (ETDEWEB)

    Brune, D; Jirlow, K

    1967-08-15

    The technique of low temperature neutron irradiation combined with isotopic exchange separation technique has been applied in the determination of mercury in aqueous samples. The kinetics of the isotopic exchange reaction has been studied for various sample volumes. The effect of the flux perturbation caused by aqueous samples has been investigated for samples of various size and geometry in a central position in a well moderated heavy water reactor. The effect has been studied both theoretically and experimentally. The 'Thermos' code has been used in the calculations.

  1. Determination of Mercury in Aqueous Samples by Means of Neutron Activation Analysis with an Account of Flux Disturbances

    International Nuclear Information System (INIS)

    Brune, D.; Jirlow, K.

    1967-08-01

    The technique of low temperature neutron irradiation combined with isotopic exchange separation technique has been applied in the determination of mercury in aqueous samples. The kinetics of the isotopic exchange reaction has been studied for various sample volumes. The effect of the flux perturbation caused by aqueous samples has been investigated for samples of various size and geometry in a central position in a well moderated heavy water reactor. The effect has been studied both theoretically and experimentally. The 'Thermos' code has been used in the calculations

  2. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique from an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size comparisons are made between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector sampling method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements. TEM image analysis of the two sampling methods showed good agreement with the SMPS measurements. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism which reduces the coagulation process. However, if the flame temperature is not high enough to sinter the aggregated particles, then coagulation is the dominant particle growth mechanism. Under certain flame conditions a secondary particle formation is observed which results in a bimodal particle size distribution

  3. Determination of tritium in wine yeast samples

    International Nuclear Information System (INIS)

    Cotarlea, Monica-Ionela; Paunescu Niculina; Galeriu, D; Mocanu, N.; Margineanu, R.; Marin, G.

    1998-01-01

    Analytical procedures were developed to determine tritium in wine and wine yeast samples. The content of organic compounds affecting the LSC measurement is reduced by fractional distillation for wine samples and by azeotropic distillation followed by fractional distillation for wine yeast samples. Finally, the water samples were distilled in the usual way with KMnO4. The established procedures were successfully applied to wine and wine yeast samples from the Murfatlar harvests of 1995 and 1996. (authors)

  4. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices using reference samples. The size of the reference sample is commonly defined by approximation with a binomial function, without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. In case of divergence between the estimated and the a priori accuracy, the sampling error will deviate from the expected error. Determining the sample size with a pilot sample (the theoretically correct procedure) is justified when no estimate of accuracy is available for the work area, with reference to the intended use of the remote sensing product.

  5. Dental arch dimensions, form and tooth size ratio among a Saudi sample

    Directory of Open Access Journals (Sweden)

    Haidi Omar

    2018-01-01

    Full Text Available Objectives: To determine the dental arch dimensions and arch forms in a sample of Saudi orthodontic patients, to investigate the prevalence of Bolton anterior and overall tooth size discrepancies, and to compare the effect of gender on the measured parameters. Methods: This study is a biometric analysis of dental casts of 149 young adults recruited from different orthodontic centers in Jeddah, Saudi Arabia. The dental arch dimensions were measured. The measured parameters were arch length, arch width, Bolton's ratio, and arch form. The data were analyzed using IBM SPSS software version 22.0 (IBM Corporation, New York, USA); this cross-sectional study was conducted between April 2015 and May 2016. Results: Dental arch measurements, including inter-canine and inter-molar distance, were found to be significantly greater in males than females (p less than 0.05). The most prevalent dental arch forms were narrow tapered (50.3%) and narrow ovoid (34.2%), respectively. The prevalence of tooth size discrepancy in all cases was 43.6% for the anterior ratio and 24.8% for the overall ratio. The mean Bolton's anterior ratio in all malocclusion classes was 79.81%, whereas the mean Bolton's overall ratio was 92.21%. There was no significant difference between males and females regarding Bolton's ratio. Conclusion: The most prevalent arch form was narrow tapered, followed by narrow ovoid. Males generally had larger dental arch measurements than females, and the prevalence of tooth size discrepancy was greater for Bolton's anterior ratio than for the overall ratio.
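    Bolton's ratios mentioned above are simple sums of mesiodistal tooth widths; a minimal sketch of the computation is shown below with placeholder widths (the commonly cited ideal means are about 77.2% for the anterior ratio and 91.3% for the overall ratio).

```python
# Hedged sketch of the Bolton ratio computation; the widths are placeholders, not study data.
def bolton_ratio(mandibular_widths_mm, maxillary_widths_mm):
    """Summed mandibular widths over summed maxillary widths, in percent."""
    return 100.0 * sum(mandibular_widths_mm) / sum(maxillary_widths_mm)

# Mesiodistal widths (mm), canine to canine (six anterior teeth per arch)
mand_anterior = [5.3, 5.9, 6.9, 6.9, 5.9, 5.3]
max_anterior = [8.6, 6.8, 7.9, 7.9, 6.8, 8.6]
print(f"Bolton anterior ratio: {bolton_ratio(mand_anterior, max_anterior):.2f}%")
```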

  6. Transverse micro-erosion meter measurements; determining minimum sample size

    Science.gov (United States)

    Trenhaile, Alan S.; Lakhan, V. Chris

    2011-11-01

    Two transverse micro-erosion meter (TMEM) stations were installed in each of four rock slabs, a slate/shale, basalt, phyllite/schist, and sandstone. One station was sprayed each day with fresh water and the other with a synthetic sea water solution (salt water). To record changes in surface elevation (usually downwearing but with some swelling), 100 measurements (the pilot survey), the maximum for the TMEM used in this study, were made at each station in February 2010, and then at two-monthly intervals until February 2011. The data were normalized using Box-Cox transformations and analyzed to determine the minimum number of measurements needed to obtain station means that fall within a range of confidence limits of the population means, and the means of the pilot survey. The effect on the confidence limits of reducing an already small number of measurements (say 15 or less) is much greater than that of reducing a much larger number of measurements (say more than 50) by the same amount. There was a tendency for the number of measurements, for the same confidence limits, to increase with the rate of downwearing, although it was also dependent on whether the surface was treated with fresh or salt water. About 10 measurements often provided fairly reasonable estimates of rates of surface change but with fairly high percentage confidence intervals in slowly eroding rocks; however, many more measurements were generally needed to derive means within 10% of the population means. The results were tabulated and graphed to provide an indication of the approximate number of measurements required for given confidence limits, and the confidence limits that might be attained for a given number of measurements.
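    The underlying calculation, how many measurements are needed for the station mean to fall within a chosen percentage of the pilot-survey mean, can be sketched as below. The pilot statistics are illustrative assumptions, not values from the study.

```python
# Number of TMEM measurements needed so that the mean surface change lies
# within +/- (relative_error x mean) at 95% confidence, given pilot statistics.
import math

pilot_mean_mm = 0.150   # assumed mean surface change from a 100-point pilot survey
pilot_sd_mm = 0.060     # assumed standard deviation of the pilot survey
z = 1.96

def n_required(relative_error):
    allowed = relative_error * pilot_mean_mm
    return math.ceil((z * pilot_sd_mm / allowed) ** 2)

for rel in (0.20, 0.10, 0.05):
    print(f"within +/-{rel:.0%} of the mean: about {n_required(rel)} measurements")
```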

  7. The Macdonald and Savage titrimetric procedure scaled down to 4 mg sized plutonium samples. P. 1

    International Nuclear Information System (INIS)

    Kuvik, V.; Lecouteux, C.; Doubek, N.; Ronesch, K.; Jammet, G.; Bagliano, G.; Deron, S.

    1992-01-01

    The original Macdonald and Savage amperometric method scaled down to milligram-sized plutonium samples was further modified. The electro-chemical process of each redox step and the end-point of the final titration were monitored potentiometrically. The method is designed to determine 4 mg of plutonium dissolved in nitric acid solution. It is suitable for the direct determination of plutonium in non-irradiated fuel with a uranium-to-plutonium ratio of up to 30. The precision and accuracy are ca. 0.05-0.1% (relative standard deviation). Although the procedure is very selective, the following species interfere: vanadyl(IV) and vanadate (almost quantitatively), neptunium (one electron exchange per mole), nitrites, fluorosilicates (milligram amounts yield a slight bias) and iodates. (author). 15 refs.; 8 figs.; 7 tabs

  8. 13 CFR 121.1009 - What are the procedures for making the size determination?

    Science.gov (United States)

    2010-01-01

    ... small for purposes of a particular procurement, the concern cannot later become eligible for the.... (b) Basis for determination. The size determination will be based primarily on the information... whose size status is at issue. The determination, however, may also be based on grounds not raised in...

  9. Preparation of gold nanoparticles and determination of their particles size via different methods

    International Nuclear Information System (INIS)

    Iqbal, Muhammad; Usanase, Gisele; Oulmi, Kafia; Aberkane, Fairouz; Bendaikha, Tahar; Fessi, Hatem; Zine, Nadia; Agusti, Géraldine; Errachid, El-Salhi; Elaissari, Abdelhamid

    2016-01-01

    Graphical abstract: Preparation of gold nanoparticles via the NaBH_4 reduction method, and determination of their particle size, size distribution and morphology using different techniques. - Highlights: • Gold nanoparticles were synthesized by the NaBH_4 reduction method. • An excess of reducing agent leads to a tendency to aggregate. • The particle size, size distribution and morphology were investigated. • Particle size was determined both experimentally and theoretically. - Abstract: Gold nanoparticles have been used in various applications covering electronics, biosensors, in vivo biomedical imaging and in vitro biomedical diagnosis. As a general requirement, gold nanoparticles should be easy to prepare on a large scale and easy to functionalize with chemical compounds or with specific ligands or biomolecules. In this study, gold nanoparticles were prepared using different concentrations of reducing agent (NaBH_4) in various formulations, and the effect on the particle size, size distribution and morphology was investigated. Moreover, special attention has been dedicated to the comparison of particle sizes measured by various techniques, such as light scattering, transmission electron microscopy, and UV spectra using a standard curve, with particle sizes calculated using Mie theory and the UV spectrum of the gold nanoparticle dispersion. Particle sizes determined by the various techniques can be correlated for monodispersed particles, and an excess of reducing agent leads to an increase in the particle size.

  10. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    Full Text Available This study discusses strategies for sample preparation to acquire images with sufficient quality for size characterization by scanning electron microscope (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered forms first need to be broken down into nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information and save considerable time and effort in selecting a suitable substrate for particles of different properties to be attracted and kept on the surface without further aggregation. This simple, low-cost methodology can be generally applied to the size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  11. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using a paired t-test, a two-sample t-test, Wald and score tests using generalized estimating equations (GEE), and an F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated analysis approaches through analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rate and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. When selecting the appropriate analysis approach, the study design should be considered.
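    A reduced version of this kind of simulation, for the design in which the two eyes of a subject fall into different comparison groups, is sketched below. The sample size, correlation and number of simulations are arbitrary choices, and only a paired t-test and a naive two-sample t-test on individual eyes are compared.

```python
# Hedged sketch: simulate correlated eyes under the null hypothesis and compare
# the empirical type I error of a paired t-test and a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, rho, n_sim = 20, 0.5, 5000
cov = np.array([[1.0, rho], [rho, 1.0]])

reject_paired = reject_naive = 0
for _ in range(n_sim):
    eyes = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)  # no true effect
    reject_paired += stats.ttest_rel(eyes[:, 0], eyes[:, 1]).pvalue < 0.05
    reject_naive += stats.ttest_ind(eyes[:, 0], eyes[:, 1]).pvalue < 0.05

print(f"type I error, paired t-test:           {reject_paired / n_sim:.3f}")
print(f"type I error, naive two-sample t-test: {reject_naive / n_sim:.3f}")
```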

  12. Determination of Pu in soil samples

    International Nuclear Information System (INIS)

    Torres C, C. O.; Hernandez M, H.; Romero G, E. T.; Vega C, H. R.

    2016-10-01

    The irreversible consequences of accidents occurring at nuclear plants and nuclear fuel reprocessing sites are mainly the distribution of different radionuclides in different matrices, such as soil. The distribution in surface soil is related to the internal and external radiation exposure of the affected population. Internal contamination with radionuclides such as Pu is of great relevance to nuclear forensic science, where it is important to know the chemical and isotopic compositions of nuclear materials. The objective of this work is to optimize the radiochemical separation of plutonium (Pu) from soil samples and to determine its concentration. The soil samples were prepared using microwave-assisted acid digestion; purification of Pu was carried out with AG1X8 resin using ion exchange chromatography. Pu isotopes were measured using ICP-SFMS. In order to reduce the interference due to the presence of 238UH+ in the samples, a solvent removal system (Apex) was used. In addition, the limits of detection and quantification of Pu were determined. It was found that the recovery efficiency of Pu in soil samples ranges from 70 to 93%. (Author)

  13. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
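    A minimal sketch of this bootstrap design is given below; the data matrix is a synthetic stand-in for the 55 × 22 water-quality matrix, so only the procedure, not the numbers, is meaningful.

```python
# Hedged sketch: bootstrap samples of size N drawn from a data matrix, with the
# leading PCA eigenvalues (of the correlation matrix) recorded for each draw.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(55, 22))   # placeholder for the environmental data matrix

def bootstrap_eigenvalues(X, n_samples, n_boot=100):
    eigs = []
    for _ in range(n_boot):
        Xb = X[rng.integers(0, X.shape[0], size=n_samples)]
        corr = np.corrcoef(Xb, rowvar=False)
        eigs.append(np.sort(np.linalg.eigvalsh(corr))[::-1][:10])
    return np.array(eigs)

for n in (20, 30, 40, 50):
    first = bootstrap_eigenvalues(data, n)[:, 0]
    print(f"N={n}: first eigenvalue {first.mean():.2f} (bootstrap SD {first.std():.2f})")
```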

  14. Association Studies and Legume Synteny Reveal Haplotypes Determining Seed Size in Vigna unguiculata.

    Science.gov (United States)

    Lucas, Mitchell R; Huynh, Bao-Lam; da Silva Vinholes, Patricia; Cisse, Ndiaga; Drabo, Issa; Ehlers, Jeffrey D; Roberts, Philip A; Close, Timothy J

    2013-01-01

    Highly specific seed market classes for cowpea and other grain legumes exist because grain is most commonly cooked and consumed whole. Size, shape, color, and texture are critical features of these market classes and breeders target development of cultivars for market acceptance. Resistance to biotic and abiotic stresses that are absent from elite breeding material are often introgressed through crosses to landraces or wild relatives. When crosses are made between parents with different grain quality characteristics, recovery of progeny with acceptable or enhanced grain quality is problematic. Thus genetic markers for grain quality traits can help in pyramiding genes needed for specific market classes. Allelic variation dictating the inheritance of seed size can be tagged and used to assist the selection of large seeded lines. In this work we applied 1,536-plex SNP genotyping and knowledge of legume synteny to characterize regions of the cowpea genome associated with seed size. These marker-trait associations will enable breeders to use marker-based selection approaches to increase the frequency of progeny with large seed. For 804 individuals derived from eight bi-parental populations, QTL analysis was used to identify markers linked to 10 trait determinants. In addition, the population structure of 171 samples from the USDA core collection was identified and incorporated into a genome-wide association study which supported more than half of the trait-associated regions important in the bi-parental populations. Seven of the total 10 QTLs were supported based on synteny to seed size associated regions identified in the related legume soybean. In addition to delivering markers linked to major trait determinants in the context of modern breeding, we provide an analysis of the diversity of the USDA core collection of cowpea to identify genepools, migrants, admixture, and duplicates.

  15. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    Energy Technology Data Exchange (ETDEWEB)

    Plionis, Alexander A [Los Alamos National Laboratory; Peterson, Dominic S [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory; Lamont, Stephen P [Los Alamos National Laboratory

    2009-01-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine if a process particle size distribution is: lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  16. In situ droplet size and speed determination in a fluid-bed granulator.

    Science.gov (United States)

    Ehlers, Henrik; Larjo, Jussi; Antikainen, Osmo; Räikkönen, Heikki; Heinämäki, Jyrki; Yliruusi, Jouko

    2010-05-31

    The droplet size affects the final product in fluid-bed granulation and coating. In the present study, spray characteristics of aqueous granulation liquid (purified water) were determined in situ in a fluid-bed granulator. Droplets were produced by a pneumatic nozzle. Diode laser stroboscopy (DLS) was used for droplet detection and particle tracking velocimetry (PTV) was used for determination of droplet size and speed. Increased atomization pressure decreased the droplet size and the effect was most strongly visible in the 90% size fractile. The droplets seemed to undergo coalescence after which only slight evaporation occurred. Furthermore, the droplets were subjected to strong turbulence at the moment of atomization, after which the turbulence reached a minimum value in the lower half of the chamber. The turbulence increased as speed and droplet size decreased due to the effects of the fluidizing air. The DLS and PTV system used was found to be a useful and rapid tool in determining spray characteristics and in monitoring and predicting nozzle performance. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  17. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  18. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
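    Under simple random sampling, the optimal split of a fixed total sample between the long-list and short-list groups of the IST follows a Neyman-type rule; the sketch below illustrates this under assumed standard deviations and is not the authors' derivation.

```python
# Hedged sketch: allocate n_total between the two IST groups proportionally to
# the (assumed) standard deviations, which minimises the variance of the
# difference-of-means estimator of the sensitive item.
n_total = 1000
s_long = 4.0    # assumed SD of long-list scores (sensitive + innocuous items)
s_short = 2.5   # assumed SD of short-list scores (innocuous items only)

n_long = round(n_total * s_long / (s_long + s_short))
n_short = n_total - n_long
print(f"long-list group: {n_long}, short-list group: {n_short}")
```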

  19. Determination of the refractive index of n+- and p-type porous Si samples

    International Nuclear Information System (INIS)

    Setzu, S.; Romestain, R.; Chamard, V.

    2004-01-01

    Photochemical etching of porous Si layers has been shown to be able to create micrometer or submicrometer-scale lateral gratings that are very promising for photonic applications. However, the reduced size of this lateral periodicity hinders standard measurements of refractive index variations. Therefore accurate characterizations of such gratings are usually difficult. In this paper we address this problem by reproducing on a larger scale (millimeter) the micrometer-scale light-induced refractive index variations associated with the lateral periodicity. Using this procedure we perform standard X-ray and optical reflectivity measurements on our samples. One can then proceed to the determination of light-induced variations of porosity and refractive index. We present results for p-type samples, where the photo-dissolution can only be realized after the formation of the porous layer, as well as for n+-type samples, where light action can only be effective during the formation of the porous layer

  20. Preparation of gold nanoparticles and determination of their particles size via different methods

    Energy Technology Data Exchange (ETDEWEB)

    Iqbal, Muhammad; Usanase, Gisele [University of Lyon, University Lyon-1, CNRS, UMR-5007, LAGEP, F-69622 Villeurbanne (France); Oulmi, Kafia; Aberkane, Fairouz; Bendaikha, Tahar [Laboratory of Chemistry and Environmental Chemistry(LCCE), Faculty of Science, Material Science Department, University of Batna, 05000 (Algeria); Fessi, Hatem [University of Lyon, University Lyon-1, CNRS, UMR-5007, LAGEP, F-69622 Villeurbanne (France); Zine, Nadia [Institut des Sciences Analytiques (ISA), Université Lyon, Université Claude Bernard Lyon-1, UMR-5180, 5 rue de la Doua, F-69100 Villeurbanne (France); Agusti, Géraldine [University of Lyon, University Lyon-1, CNRS, UMR-5007, LAGEP, F-69622 Villeurbanne (France); Errachid, El-Salhi [Institut des Sciences Analytiques (ISA), Université Lyon, Université Claude Bernard Lyon-1, UMR-5180, 5 rue de la Doua, F-69100 Villeurbanne (France); Elaissari, Abdelhamid, E-mail: elaissari@lagep.univ-lyon1.fr [University of Lyon, University Lyon-1, CNRS, UMR-5007, LAGEP, F-69622 Villeurbanne (France)

    2016-07-15

    Graphical abstract: Preparation of gold nanoparticles via the NaBH{sub 4} reduction method, and determination of their particle size, size distribution and morphology using different techniques. - Highlights: • Gold nanoparticles were synthesized by the NaBH{sub 4} reduction method. • An excess of reducing agent leads to a tendency to aggregate. • The particle size, size distribution and morphology were investigated. • Particle size was determined both experimentally and theoretically. - Abstract: Gold nanoparticles have been used in various applications covering electronics, biosensors, in vivo biomedical imaging and in vitro biomedical diagnosis. As a general requirement, gold nanoparticles should be easy to prepare on a large scale and easy to functionalize with chemical compounds or with specific ligands or biomolecules. In this study, gold nanoparticles were prepared using different concentrations of reducing agent (NaBH{sub 4}) in various formulations, and the effect on the particle size, size distribution and morphology was investigated. Moreover, special attention has been dedicated to the comparison of particle sizes measured by various techniques, such as light scattering, transmission electron microscopy, and UV spectra using a standard curve, with particle sizes calculated using Mie theory and the UV spectrum of the gold nanoparticle dispersion. Particle sizes determined by the various techniques can be correlated for monodispersed particles, and an excess of reducing agent leads to an increase in the particle size.

  1. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  2. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    International Nuclear Information System (INIS)

    Reiser, I; Lu, Z

    2014-01-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N=20,40,60,80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
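    The sample-size dependence of a proportion-correct estimate can already be seen from its binomial standard error; the sketch below is a generic illustration with an assumed true PC, not a re-implementation of the observer models above.

```python
# Binomial standard error of a proportion-correct (PC) estimate from N image pairs.
import math

true_pc = 0.85   # assumed true proportion correct
for n_pairs in (20, 40, 60, 80, 100):
    se = math.sqrt(true_pc * (1 - true_pc) / n_pairs)
    print(f"N={n_pairs:3d}: PC = {true_pc:.2f} +/- {se:.3f} (1 SE)")
```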

  3. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
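    The authors' procedure is implemented in the R package ssizeRNA; the Python sketch below only illustrates the general idea of averaging power over the truly differentially expressed genes at a per-test level consistent with the FDR target. The gene counts, effect-size distribution and the plain two-sample t-test approximation are all assumptions, not the voom-based method of the paper.

```python
# Hedged sketch: increase the replicate number until average power across the
# assumed truly DE genes reaches the target, using a per-test alpha implied by
# the anticipated-FDR relationship alpha = FDR/(1-FDR) * m1 * power / m0.
import numpy as np
from scipy import stats

def avg_power(n_per_group, effects, alpha):
    df = 2 * n_per_group - 2
    crit = stats.t.ppf(1 - alpha / 2, df)
    ncp = effects * np.sqrt(n_per_group / 2.0)
    return float(np.mean(1 - stats.nct.cdf(crit, df, ncp) + stats.nct.cdf(-crit, df, ncp)))

m, m1 = 10000, 500                         # total genes / assumed truly DE genes
fdr, target_power = 0.05, 0.80
effects = np.random.default_rng(3).gamma(2.0, 0.6, size=m1)   # assumed standardized effects

for n in range(3, 41):
    power = avg_power(n, effects, alpha=0.001)                  # rough initial power
    alpha_star = fdr / (1 - fdr) * m1 * power / (m - m1)        # FDR-consistent per-test alpha
    power = avg_power(n, effects, alpha=alpha_star)
    if power >= target_power:
        print(f"about {n} replicates per group give average power {power:.2f} at FDR {fdr}")
        break
```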

  4. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  5. Crystallite size variation of TiO_2 samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO_2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of the residence time at a given temperature on the physical properties of TiO_2 powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C with a ramp of up to 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by x-ray diffraction. Analysis of the diffraction patterns showed that, from a residence time of 5 hours onwards, two distinct phases coexisted: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)

  6. What about N? A methodological study of sample-size reporting in focus group studies.

    Science.gov (United States)

    Carlsen, Benedicte; Glenton, Claire

    2011-03-11

    Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these

  7. What about N? A methodological study of sample-size reporting in focus group studies

    Directory of Open Access Journals (Sweden)

    Glenton Claire

    2011-03-01

    Full Text Available Abstract Background Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method

  8. Drop size distribution measured by imaging: determination of the measurement volume by the calibration of the point spread function

    International Nuclear Information System (INIS)

    Fdida, Nicolas; Blaisot, Jean-Bernard

    2010-01-01

    Measurement of drop size distributions in a spray depends on the definition of the control volume for drop counting. For image-based techniques, this implies the definition of a depth-of-field (DOF) criterion. A sizing procedure based on an imaging model and associated with a calibration procedure is presented. Relations between image parameters and object properties are used to provide a measure of the size of the droplets, whatever the distance from the in-focus plane. A DOF criterion independent of the size of the drops and based on the determination of the width of the point spread function (PSF) is proposed. This makes it possible to extend the measurement volume to defocused droplets and, owing to the calibration of the PSF, to clearly define the depth of the measurement volume. Calibrated opaque discs, calibrated pinholes and an optical edge are used for this calibration. A comparison of the technique with a phase Doppler particle analyser and a laser diffraction granulometer is performed on an application to an industrial spray. Good agreement is found between the techniques when particular care is given to the sampling of droplets. The determination of the measurement volume is used to determine the drop concentration in the spray and the maximum drop concentration that imaging can support

  9. Determination of size and shape distributions of metal and ceramic powders

    International Nuclear Information System (INIS)

    Jovanovic, DI.

    1961-01-01

    For testing the size and shape distributions of metal and ceramic uranium oxide powders, the following methods for analysing the grain size of powders were developed and implemented: microscopic analysis and the sedimentation method. A gravimetric absorption device was constructed for determining the specific surfaces of the powders.

  10. Ultra-trace determination of plutonium in marine samples using multi-collector inductively coupled plasma mass spectrometry.

    Science.gov (United States)

    Lindahl, Patric; Keith-Roach, Miranda; Worsfold, Paul; Choi, Min-Seok; Shin, Hyung-Seon; Lee, Sang-Hoon

    2010-06-25

    Sources of plutonium isotopes to the marine environment are well defined, both spatially and temporally, which makes Pu a potential tracer for oceanic processes. This paper presents the selection, optimisation and validation of a sample preparation method for the ultra-trace determination of Pu isotopes (240Pu and 239Pu) in marine samples by multi-collector (MC) ICP-MS. The method was optimised for the removal of the interference from 238U and the chemical recovery of Pu. Comparison of various separation strategies using AG1-X8, TEVA, TRU, and UTEVA resins to determine Pu in marine calcium carbonate samples is reported. A combination of anion-exchange (AG1-X8) and extraction chromatography (UTEVA/TRU) was the most suitable, with a radiochemical Pu yield of 87 ± 5% and a U decontamination factor of 1.2 × 10^4. Validation of the method was accomplished by determining Pu in various IAEA certified marine reference materials. The estimated MC-ICP-MS instrumental limit of detection for 239Pu and 240Pu was 0.02 fg mL^-1, with an absolute limit of quantification of 0.11 fg. The proposed method allows the determination of ultra-trace Pu, at femtogram levels, in small-size marine samples (e.g., 0.6-2.0 g coral or 15-20 L seawater). Finally, the analytical method was applied to determining historical records of the Pu signature in coral samples from the tropical Northwest Pacific and 239+240Pu concentrations and 240Pu/239Pu atom ratios in seawater samples as part of the 2008 GEOTRACES intercalibration exercise. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Optimum strata boundaries and sample sizes in health surveys using auxiliary variables.

    Science.gov (United States)

    Reddy, Karuna Garan; Khan, Mohammad G M; Khan, Sabiha

    2018-01-01

    Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., is not beneficial in order to maximize the precision of the estimates of variables of interest. Thus, one has to look for an efficient stratification design to divide the whole population into homogeneous strata that achieves higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers the readily-available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated into a Mathematical Programming Problem (MPP) that seeks minimization of the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the Haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparisons with other methods available in literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results.
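
    The allocation step embedded in the procedure above is Neyman allocation, which the paper combines with a dynamic-programming search for the stratum boundaries themselves. The sketch below shows only the allocation part for boundaries that are already chosen; the stratum sizes, standard deviations and function names are hypothetical, and the boundary optimisation is not reproduced.

```python
def neyman_allocation(stratum_sizes, stratum_sds, total_sample):
    """Allocate a fixed total sample across strata in proportion to N_h * S_h
    (Neyman allocation), which minimises the variance of the stratified mean.
    Rounding means the returned sizes may not sum exactly to total_sample."""
    weights = [n * s for n, s in zip(stratum_sizes, stratum_sds)]
    total_weight = sum(weights)
    return [max(1, round(total_sample * w / total_weight)) for w in weights]

# Example: three strata defined on an auxiliary variable (hypothetical numbers)
sizes = [5000, 3000, 1000]   # N_h: stratum population sizes
sds = [2.1, 3.4, 7.9]        # S_h: stratum standard deviations of the auxiliary variable
print(neyman_allocation(sizes, sds, total_sample=400))
```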

  14. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of this study was therefore to assess the sample sizes required to reliably estimate the statistical moments of simulated crop yield distributions.

  15. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

  16. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM...... but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical...... opportunities for elucidating the origins and biogeochemical properties of FDOM...

  17. Size determination of an equilibrium enzymic system by radiation inactivation

    International Nuclear Information System (INIS)

    Simon, P.; Swillens, S.; Dumont, J.E.

    1982-01-01

    Radiation inactivation of complex enzymic systems is currently used to determine the enzyme size and the molecular organization of the components in the system. An equilibrium model was simulated describing the regulation of enzyme activity by association of the enzyme with a regulatory unit. It is assumed that, after irradiation, the system equilibrates before the enzyme activity is assayed. The theoretical results show that the target-size analysis of these numerical data leads to a bad estimate of the enzyme size. Moreover, some implicit assumptions such as the transfer of radiation energy between non-covalently bound molecules should be verified before interpretation of target-size analysis. It is demonstrated that the apparent target size depends on the parameters of the system, namely the size and the concentration of the components, the equilibrium constant, the relative activities of free enzyme and enzymic complex, the existence of energy transfer, and the distribution of the components between free and bound forms during the irradiation. (author)

  18. Pore Size Distribution in Chicken Eggs as Determined by Mercury Porosimetry

    Directory of Open Access Journals (Sweden)

    La Scala Jr N

    2000-01-01

    Full Text Available In this study we investigated the application of the mercury porosimetry technique to the determination of porosity features in 28-week-old hen eggshells. Our results have shown that the majority of the pores in the eggshells studied have sizes between 1 and 10 μm. By applying the mercury porosimetry technique we were able to describe the porosity features better, by determining a pore size distribution in the eggshells. Here, we introduce mercury porosimetry as a new routine technique for the study of eggshells.
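
    Mercury porosimetry converts each intrusion pressure into an equivalent pore diameter through the Washburn equation, which is presumably the relation underlying the pore size distribution reported above. The sketch below uses typical literature values for the surface tension and contact angle of mercury (0.485 N/m and 140°), not values taken from the paper.

```python
import math

def washburn_pore_diameter(pressure_pa, surface_tension=0.485, contact_angle_deg=140.0):
    """Pore diameter (m) intruded at a given mercury pressure (Pa), from the Washburn
    equation D = -4*gamma*cos(theta)/P.  Surface tension and contact angle are
    common textbook values for mercury, not parameters reported in the paper."""
    return -4.0 * surface_tension * math.cos(math.radians(contact_angle_deg)) / pressure_pa

# Pressures of roughly 0.1-1.5 MPa correspond to pores in the micrometre range
for p in (1e5, 4e5, 1.5e6):
    print(f"{p:.1e} Pa -> {washburn_pore_diameter(p) * 1e6:.2f} um")
```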

  19. Determining the sample size for co-dominant molecular marker-assisted linkage detection for a monogenic qualitative trait by controlling the type-I and type-II errors in a segregating F2 population.

    Science.gov (United States)

    Hühn, M; Piepho, H P

    2003-03-01

    Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.

  20. Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples

    Directory of Open Access Journals (Sweden)

    Inés Lozano-Ramos

    2015-05-01

    Full Text Available Renal biopsy is the gold-standard procedure to diagnose most of renal pathologies. However, this invasive method is of limited repeatability and often describes an irreversible renal damage. Urine is an easily accessible fluid and urinary extracellular vesicles (EVs may be ideal to describe new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them contain a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed by protein content and flow cytometry to determine the presence of tetraspanin markers (CD63 and CD9. The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM and nanoparticle tracking analysis revealing the presence of EVs.When analysed by sodium dodecyl sulphate–polyacrylamide gel electrophoresis, tetraspanin-peak fractions from urine concentrated samples contained multiple bands but the main urine proteins (such as Tamm–Horsfall protein were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species.To summarize, our results demonstrated that concentrated urine followed by SEC is a suitable option to isolate EVs with low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches.

  1. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σ d′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
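
    In a plain sample-size model the fixed pool of noisy samples is shared equally among the m display items, so each item's sensitivity scales with the square root of its share and Σ d′² stays constant across set size; the attention-weighted variant lets one item capture a larger share. The sketch below illustrates only that bookkeeping with arbitrary numbers; the pool size and capture weight are assumptions, not fitted parameters from the study.

```python
import numpy as np

def predicted_sensitivities(set_size, total_samples=100.0, capture_weight=None):
    """Per-item sensitivity (d') predicted by a sample-size model: d'_i is proportional
    to the square root of the number of samples allocated to item i.  With
    capture_weight set (and set_size >= 2), one item receives that fraction of the
    pool and the rest is shared equally, a crude stand-in for attention capture."""
    if capture_weight is None:
        shares = np.full(set_size, total_samples / set_size)
    else:
        rest = total_samples * (1.0 - capture_weight) / (set_size - 1)
        shares = np.full(set_size, rest)
        shares[0] = total_samples * capture_weight
    return np.sqrt(shares)

for m in (2, 3, 4, 6):
    d = predicted_sensitivities(m)
    print(m, round(float(np.sum(d ** 2)), 3))   # sum of squared d' is constant in the plain model
print(predicted_sensitivities(4, capture_weight=0.4))  # attended item gets a larger d'
```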

  2. Procedures for sampling and sample reduction within quality assurance systems for solid biofuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The objective of this experimental study on sampling was to determine the size and number of samples of biofuels required (taken at two sampling points in each case) and to compare two methods of sampling. The first objective of the sample-reduction exercise was to compare the reliability of various sampling methods, and the second objective was to measure the variations introduced as a result of reducing the sample size to form suitable test portions. The materials studied were sawdust, wood chips, wood pellets and bales of straw, and these were analysed for moisture, ash, particle size and chloride. The sampling procedures are described. The study was conducted in Scandinavia. The results of the study were presented in Leipzig in October 2004. The work was carried out as part of the UK's DTI Technology Programme: New and Renewable Energy.

  3. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. ?? 2005 Elsevier Ltd. All rights reserved.

  4. AFM topographies of densely packed nanoparticles: a quick way to determine the lateral size distribution by autocorrelation function analysis

    International Nuclear Information System (INIS)

    Fekete, L.; Kůsová, K.; Petrák, V.; Kratochvílová, I.

    2012-01-01

    The distribution of sizes is one of the basic characteristics of nanoparticles. Here, we propose a novel way to determine the lateral distribution of sizes from AFM topographies. Our algorithm is based on the autocorrelation function and can be applied both to topographies containing spatially separated or densely packed nanoparticles and to topographies of polycrystalline films. As no manual treatment is required, this algorithm can easily be automated for batch processing. The algorithm works in principle with any kind of spatially mapped information (AFM current maps, optical microscope images, etc.), and as such has no size limitations. However, in the case of AFM topographies, the tip/sample convolution effects will be the factor limiting the smallest size to which the algorithm is applicable. Here, we demonstrate the usefulness of this algorithm on objects with sizes ranging between 20 nm and 1.5 μm.
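
    A common way to implement ACF-based lateral sizing is to compute the autocorrelation of the height map via FFT, radially average it, and read off a characteristic decay length. The sketch below follows that general recipe with a 1/e decay criterion and a synthetic test image; both the criterion and the numbers are assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lateral_size_from_acf(topography, pixel_size_nm):
    """Estimate a characteristic lateral feature size from an AFM height map by
    finding the lag at which the radially averaged autocorrelation falls to 1/e."""
    z = topography - topography.mean()
    power = np.abs(np.fft.fft2(z)) ** 2          # power spectrum
    acf = np.fft.ifft2(power).real               # circular autocorrelation, zero lag at [0, 0]
    acf = np.fft.fftshift(acf) / acf.max()       # normalise, move zero lag to the centre
    cy, cx = np.array(acf.shape) // 2
    y, x = np.indices(acf.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), weights=acf.ravel()) / np.bincount(r.ravel())
    below = np.nonzero(radial < 1.0 / np.e)[0]
    return below[0] * pixel_size_nm if below.size else None

# Synthetic 256x256 height map with ~30 nm correlated features (5 nm pixels), for illustration only
rng = np.random.default_rng(0)
img = gaussian_filter(rng.normal(size=(256, 256)), sigma=3)
print(lateral_size_from_acf(img, pixel_size_nm=5.0))
```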

  5. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in the place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
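
    When only pilot estimates of the category probabilities are available, the power of the Kruskal-Wallis test can also be approximated by straightforward Monte Carlo simulation, which is useful as a cross-check on closed-form formulas such as those proposed here. The sketch below is that generic simulation approach, with hypothetical pilot probabilities; it is not the authors' analytical method.

```python
import numpy as np
from scipy.stats import kruskal

def kw_power_from_pilot(pilot_probs, n_per_group, alpha=0.05, n_sim=2000, seed=1):
    """Monte Carlo power of the Kruskal-Wallis test for ordered categorical data,
    simulating each group from multinomial probabilities estimated in a pilot study."""
    rng = np.random.default_rng(seed)
    categories = np.arange(len(pilot_probs[0]))
    hits = 0
    for _ in range(n_sim):
        groups = [rng.choice(categories, size=n_per_group, p=p) for p in pilot_probs]
        _, pval = kruskal(*groups)
        hits += pval < alpha
    return hits / n_sim

# Hypothetical pilot estimates for three treatment arms over four ordered categories
pilot = [[0.40, 0.30, 0.20, 0.10],
         [0.30, 0.30, 0.25, 0.15],
         [0.15, 0.25, 0.30, 0.30]]
for n in (20, 40, 60):
    print(n, kw_power_from_pilot(pilot, n))
```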

  6. System for determining sizes of biological macromolecules

    International Nuclear Information System (INIS)

    Nelson, R.M.; Danby, P.C.

    1987-01-01

    An electrophoresis system for determining the sizes of radiolabelled biological macromolecules is described. It comprises a cell containing an electrophoresis gel and having at least one lane, a voltage source connected across the gel for effecting the movement of macromolecules in the lane, a detector fixed relative to the moving molecules for generating electrical pulses responsive to signals emitted by the radiolabelled molecules; a pulse processor for counting the pulse rate, and a computational device for comparing the pulse rate to a predetermined value. (author)

  7. Practical limitations of single particle ICP-MS in the determination of nanoparticle size distributions and dissolution: case of rare earth oxides.

    Science.gov (United States)

    Fréchette-Viens, Laurie; Hadioui, Madjid; Wilkinson, Kevin J

    2017-01-15

    The applicability of single particle ICP-MS (SP-ICP-MS) for the analysis of nanoparticle size distributions and the determination of particle numbers was evaluated using the rare earth oxide La2O3 as a model particle. The composition of the storage containers, as well as the ICP-MS sample introduction system, were found to significantly impact SP-ICP-MS analysis. While La2O3 nanoparticles (La2O3 NP) did not appear to interact strongly with sample containers, adsorptive losses of La3+ (over 24 h) were substantial (>72%) for fluorinated ethylene propylene bottles as opposed to polypropylene (size distributions. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  9. On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.

  10. Interpreting meta-analysis according to the adequacy of sample size. An example using isoniazid chemoprophylaxis for tuberculosis in purified protein derivative negative HIV-infected individuals

    Directory of Open Access Journals (Sweden)

    Kristian Thorlund

    2010-04-01

    Full Text Available Kristian Thorlund (1,2), Aranka Anema (3), Edward Mills (4). (1) Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; (2) The Copenhagen Trial Unit, Centre for Clinical Intervention Research, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark; (3) British Columbia Centre for Excellence in HIV/AIDS, University of British Columbia, Vancouver, British Columbia, Canada; (4) Faculty of Health Sciences, University of Ottawa, Ottawa, Ontario, Canada. Objective: To illustrate the utility of statistical monitoring boundaries in meta-analysis and provide a framework in which meta-analysis can be interpreted according to the adequacy of sample size, and to propose a simple method for determining how many patients need to be randomized in a future trial before a meta-analysis can be deemed conclusive. Study design and setting: Prospective meta-analysis of randomized clinical trials (RCTs) that evaluated the effectiveness of isoniazid chemoprophylaxis versus placebo for preventing the incidence of tuberculosis disease among human immunodeficiency virus (HIV)-positive individuals testing purified protein derivative negative. Assessment of meta-analysis precision using trial sequential analysis (TSA) with Lan-DeMets monitoring boundaries, and sample size determination for a future trial to make the meta-analysis conclusive according to the thresholds set by the monitoring boundaries. Results: The meta-analysis included nine trials comprising 2,911 trial participants and yielded a relative risk of 0.74 (95% CI, 0.53–1.04; P = 0.082; I² = 0%). To deem the meta-analysis conclusive according to the thresholds set by the monitoring boundaries, a future RCT would need to randomize 3,800 participants. Conclusion: Statistical monitoring boundaries provide a framework for interpreting meta-analysis according to the adequacy of sample size and project the required sample size for a future RCT to make a meta-analysis conclusive.
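
    The "required information size" behind such monitoring boundaries is usually obtained from a conventional two-group sample-size formula, inflated for heterogeneity. The sketch below shows that textbook-style calculation with hypothetical inputs (control-arm risk, relative risk reduction, I²); it is only an approximation and is not intended to reproduce the exact Lan-DeMets/TSA computation or the 3,800-participant figure reported above.

```python
from scipy.stats import norm

def required_information_size(p_control, rrr, alpha=0.05, power=0.80, i_squared=0.0):
    """Heterogeneity-adjusted required information size (total patients) for a
    binary-outcome meta-analysis: the usual two-group sample-size formula based on
    a pooled event risk, scaled by 1/(1 - I^2)."""
    p_exp = p_control * (1.0 - rrr)              # expected event risk with treatment
    p_bar = (p_control + p_exp) / 2.0
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    n_fixed = 4.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar) / (p_control - p_exp) ** 2
    return n_fixed / (1.0 - i_squared)

# Hypothetical inputs: 10% control-arm TB incidence, 26% relative risk reduction (RR = 0.74)
print(round(required_information_size(p_control=0.10, rrr=0.26)))
```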

  11. Polydisperse-particle-size-distribution function determined from intensity profile of angularly scattered light

    International Nuclear Information System (INIS)

    Alger, T.W.

    1979-01-01

    A new method for determining the particle-size-distribution function of a polydispersion of spherical particles is presented. The inversion technique for the particle-size-distribution function is based upon matching the measured intensity profile of angularly scattered light with a summation of the intensity contributions of a series of appropriately spaced, narrowband, size-distribution functions. A numerical optimization technique is used to determine the strengths of the individual bands that yield the best agreement with the measured scattered-light-intensity profile. Because Mie theory is used, the method is applicable to spherical particles of all sizes. Several numerical examples demonstrate the application of this inversion method
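
    In practice this kind of inversion amounts to solving a linear system in which each column of a kernel matrix is the scattering profile of one narrow size band, with non-negativity keeping the recovered band strengths physical. The sketch below uses non-negative least squares and a toy kernel standing in for the Mie calculation, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def invert_size_distribution(angles_deg, measured_intensity, size_bins_um, kernel):
    """Recover band strengths of a particle-size distribution by least-squares
    matching of a measured angular scattering profile.  `kernel(angle, size)` should
    return the single-particle scattering intensity; a real implementation would use
    Mie theory, whereas the placeholder below only makes the sketch runnable."""
    A = np.array([[kernel(a, d) for d in size_bins_um] for a in angles_deg])
    strengths, residual = nnls(A, measured_intensity)   # non-negativity keeps the result physical
    return strengths, residual

# Placeholder kernel: a smooth, size-dependent forward-scattering lobe (NOT Mie theory)
def toy_kernel(angle_deg, size_um):
    return size_um ** 2 * np.exp(-(np.radians(angle_deg) * size_um) ** 2)

angles = np.linspace(1, 20, 40)
bins = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
true = np.array([0.0, 0.3, 1.0, 0.5, 0.1])
profile = np.array([[toy_kernel(a, d) for d in bins] for a in angles]) @ true
print(invert_size_distribution(angles, profile, bins, toy_kernel)[0])
```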

  12. Determination of samples with TSP size at PLTU Pacitan, Jawa Timur have been done

    International Nuclear Information System (INIS)

    Rusmanto, Tri; Mulyono; Irianto, Bambang

    2013-01-01

    Sampling was done using a High Volume Air Sampler (HVAS) and the analysis was carried out with a gamma spectrometer. Sampling was performed at 3 locations; at each location the sampling lasted 24 hours, and the air samples on the filters were conditioned at room temperature, weighed to a constant weight and counted for 24 hours with the gamma spectrometer. The results of the qualitative and quantitative analysis of the TSP filters were: location I: Ra-226 = 0,000888 Bq/m3, Pb-212 = 0,000356 Bq/m3, Pb-214 = 0,000859 Bq/m3, Bi-214 = 0,000712 Bq/m3, Ac-228 = 0,004447 Bq/m3, K-40 = 0,035454 Bq/m3; location II: Ra-226 = 0,00113 Bq/m3, Pb-212 = 0,00079 Bq/m3, Pb-214 = 0,001351 Bq/m3, Bi-214 = 0,000433 Bq/m3, Ac-228 = 0,007138 Bq/m3, K-40 = 0,018532 Bq/m3; location III: Ra-226 = 0,001424 Bq/m3, Pb-212 = 0,000208 Bq/m3, Pb-214 = 000052 Bq/m3, Bi-214 = 0,001408 Bq/m3, Ac-228 = 0,008362 Bq/m3, K-40 = 0,020536 Bq/m3. The radionuclide activities were all still below the air quality limits permitted by BAPETEN. The ambient air of the PLTU area is therefore still safe enough for the area to be used for settlement. (author)

  13. Rapid determination of actinides in seawater samples

    International Nuclear Information System (INIS)

    Maxwell, S.L.; Culligan, B.K.; Hutchison, J.B.; Utsey, R.C.; McAlister, D.R.

    2014-01-01

    A new rapid method for the determination of actinides in seawater samples has been developed at the Savannah River National Laboratory. The actinides can be measured by alpha spectrometry or inductively-coupled plasma mass spectrometry. The new method employs novel pre-concentration steps to collect the actinide isotopes quickly from 80 L or more of seawater. Actinides are co-precipitated using an iron hydroxide co-precipitation step enhanced with Ti +3 reductant, followed by lanthanum fluoride co-precipitation. Stacked TEVA Resin and TRU Resin cartridges are used to rapidly separate Pu, U, and Np isotopes from seawater samples. TEVA Resin and DGA Resin were used to separate and measure Pu, Am and Cm isotopes in seawater volumes up to 80 L. This robust method is ideal for emergency seawater samples following a radiological incident. It can also be used, however, for the routine analysis of seawater samples for oceanographic studies to enhance efficiency and productivity. In contrast, many current methods to determine actinides in seawater can take 1-2 weeks and provide chemical yields of ∼30-60 %. This new sample preparation method can be performed in 4-8 h with tracer yields of ∼85-95 %. By employing a rapid, robust sample preparation method with high chemical yields, less seawater is needed to achieve lower or comparable detection limits for actinide isotopes with less time and effort. (author)

  14. The relative importance of perceptual and memory sampling processes in determining the time course of absolute identification.

    Science.gov (United States)

    Guest, Duncan; Kent, Christopher; Adelman, James S

    2018-04-01

    In absolute identification, the extended generalized context model (EGCM; Kent & Lamberts, 2005, 2016) proposes that perceptual processing determines systematic response time (RT) variability; all other models of RT emphasize response selection processes. In the EGCM-RT the bow effect in RTs (longer responses for stimuli in the middle of the range) occurs because these middle stimuli are less isolated, and as perceptual information is accumulated, the evidence supporting a correct response grows more slowly than for stimuli at the ends of the range. More perceptual information is therefore accumulated in order to increase certainty in response for middle stimuli, lengthening RT. According to the model reducing perceptual sampling time should reduce the size of the bow effect in RT. We tested this hypothesis in 2 pitch identification experiments. Experiment 1 found no effect of stimulus duration on the size of the RT bow. Experiment 2 used multiple short stimulus durations as well as manipulating set size and stimulus spacing. Contrary to EGCM-RT predictions, the bow effect on RTs was large for even very short durations. A new version of the EGCM-RT could only capture this, alongside the effect of stimulus duration on accuracy, by including both a perceptual and a memory sampling process. A modified version of the selective attention, mapping, and ballistic accumulator model (Brown, Marley, Donkin, & Heathcote, 2008) could also capture the data, by assuming psychophysical noise diminishes with increased exposure duration. This modeling suggests systematic variability in RT in absolute identification is largely determined by memory sampling and response selection processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
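
    The general ABC recipe behind such a method is simple: draw parameters from the prior, simulate summary statistics, and keep the draws whose statistics fall closest to the observed ones. The sketch below is that generic rejection scheme with a toy one-parameter simulator; PopSizeABC's actual simulator, priors and summary statistics (folded SFS and binned LD) are not reproduced here.

```python
import numpy as np

def abc_rejection(observed_stats, simulate, prior_sampler, n_draws=5000, accept_frac=0.01, seed=0):
    """Generic ABC rejection sampling: draw parameters from the prior, simulate summary
    statistics, and keep the draws whose (scaled) statistics are closest to the
    observed ones.  Returns an approximate posterior sample of the parameters."""
    rng = np.random.default_rng(seed)
    params = np.array([prior_sampler(rng) for _ in range(n_draws)])
    stats = np.array([simulate(p, rng) for p in params])
    scale = stats.std(axis=0) + 1e-12
    dist = np.linalg.norm((stats - observed_stats) / scale, axis=1)
    keep = np.argsort(dist)[: max(1, int(accept_frac * n_draws))]
    return params[keep]

# Toy example: infer a single (log10) population size from two noisy summaries
def toy_simulate(log10_n, rng):
    n = 10 ** log10_n
    return np.array([1.0 / n, np.log(n)]) + rng.normal(0, [1e-5, 0.05])

obs = toy_simulate(4.0, np.random.default_rng(42))
posterior = abc_rejection(obs, lambda p, r: toy_simulate(p[0], r), lambda r: [r.uniform(3, 5)])
print(posterior.mean(), posterior.std())
```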

  16. A Heuristic Approach for Determining Lot Sizes and Schedules Using Power-of-Two Policy

    Directory of Open Access Journals (Sweden)

    Esra Ekinci

    2007-01-01

    Full Text Available We consider the problem of determining realistic and easy-to-schedule lot sizes in a multiproduct, multistage manufacturing environment. We concentrate on a specific type of production, namely, flow shop type production. The model developed consists of two parts, lot sizing problem and scheduling problem. In lot sizing problem, we employ binary integer programming and determine reorder intervals for each product using power-of-two policy. In the second part, using the results obtained of the lot sizing problem, we employ mixed integer programming to determine schedules for a multiproduct, multistage case with multiple machines in each stage. Finally, we provide a numerical example and compare the results with similar methods found in practice.
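
    Under a power-of-two policy each product's reorder interval is restricted to a power-of-two multiple of a base period, which keeps schedules nested and easy to coordinate. The sketch below rounds an EOQ-style optimal interval to the nearest power of two in log2 space; the cost parameters are hypothetical and this is a generic illustration, not the paper's binary/mixed integer programming formulation.

```python
import math

def power_of_two_interval(setup_cost, holding_cost, demand_rate, base_period=1.0):
    """Round the cost-minimising reorder interval T* = sqrt(2K/(h*d)) to the nearest
    power-of-two multiple of a base period (rounding in log2 space).  The classic
    result is that this costs at most a few percent more than the unconstrained optimum."""
    t_star = math.sqrt(2.0 * setup_cost / (holding_cost * demand_rate))
    k = round(math.log2(max(t_star, base_period) / base_period))
    return base_period * 2 ** max(k, 0)

# Hypothetical products: (setup cost, holding cost per unit per period, demand per period)
for K, h, d in [(100, 0.5, 40), (400, 0.2, 25), (50, 1.0, 200)]:
    print(K, h, d, power_of_two_interval(K, h, d))
```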

  17. Nuclear Criticality Calculation for Determining the Batch Size in a Pyroprocessing Facility

    International Nuclear Information System (INIS)

    Ko, Won Il; Lee, Ho Hee; Chang, Hong Rae; Song, Dae Yong; Kwon, Eun Ha; Jung, Chang Jun; Yoon, Suk Kyun

    2009-01-01

    The criticality analysis in a pyroprocessing facility is a very important element for the R and D and the facility design, in terms of both the determination of the batch size of the sub-processes and facility safety. In particular, determining the batch size is essential at the beginning stage of the R and D. In this report, the criticality analysis was carried out for sub-processes such as voloxidation, electrolytic reduction, electrorefining and electrowinning in order to estimate the maximum batch size of each process by using the Monte Carlo code MCNP4/C2. On the whole, criticality did not have a large effect on the batch sizes in voloxidation, electrolytic reduction and electrorefining. However, it was found that the permissible amount of nuclear material to prevent a criticality accident in the electrowinning process was about 10 kg HM.

  19. Sample Size Determination for Rasch Model Tests

    Science.gov (United States)

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…

  20. Simple and rapid spectrophotometric determination of trace titanium (IV) enriched by nanometer size zirconium dioxide in natural water

    International Nuclear Information System (INIS)

    Zheng Fengying; Li Shunxing; Lin Luxiu; Cheng Liqing

    2009-01-01

    A novel method for the preconcentration of Ti(IV) with nanometer-size ZrO2 and its determination by spectrophotometry has been developed. Ti(IV) was selectively adsorbed on 300 mg of ZrO2 from 500 mL of solution at pH 6.0, then eluted with 5 mL of 11.3 mol L^-1 HF. Diantipyrylmethane (DAPM, as chromogenic reagent) and ascorbic acid (as masking agent) were added to the eluent, which was used for the analysis of Ti(IV) by measuring the absorbance at 390 nm with spectrophotometry, based on the chromogenic reaction between Ti(IV) and DAPM. This method gave a concentration enhancement of 100 for a 500 mL sample and eliminated the sizable interferences encountered in direct determination by spectrophotometry. A detection limit (3σ, n = 11) of 0.1 μg L^-1 was obtained. The method was applied to determine the concentration of Ti(IV) in river water and seawater, and the analytical recoveries of Ti(IV) added to samples were 97.6-101.3%.

  1. DNA-based hair sampling to identify road crossings and estimate population size of black bears in Great Dismal Swamp National Wildlife Refuge, Virginia

    OpenAIRE

    Wills, Johnny

    2008-01-01

    The planned widening of U.S. Highway 17 along the east boundary of Great Dismal Swamp National Wildlife Refuge (GDSNWR) and a lack of knowledge about the refuge's bear population created the need to identify potential sites for wildlife crossings and estimate the size of the refuge's bear population. I collected black bear hair in order to obtain DNA samples to estimate population size, density, and sex ratio, and determine road crossing locations for black bears (Ursus americanus) in GDSNWR.
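
    Population size from non-invasive DNA hair sampling is often estimated with closed-population mark-recapture estimators; the simplest is Chapman's bias-corrected Lincoln-Petersen estimator, shown below with hypothetical capture counts. The thesis may well have used a more elaborate model, so treat this purely as an illustration of the estimation idea.

```python
def chapman_estimate(n_first, n_second, n_recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size from a
    two-session mark-recapture (here, hair-snare DNA) survey, with Seber's variance.
    Individuals genotyped in session 1 play the role of the 'marked' animals."""
    n_hat = (n_first + 1) * (n_second + 1) / (n_recaptured + 1) - 1
    var = ((n_first + 1) * (n_second + 1) * (n_first - n_recaptured) * (n_second - n_recaptured)
           / ((n_recaptured + 1) ** 2 * (n_recaptured + 2)))
    return n_hat, var ** 0.5

# Hypothetical counts: 25 bears genotyped in session 1, 30 in session 2, 12 detected in both
estimate, se = chapman_estimate(25, 30, 12)
print(f"N ≈ {estimate:.0f} ± {se:.0f}")
```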

  2. Determinants of polyp Size in patients undergoing screening colonoscopy

    Directory of Open Access Journals (Sweden)

    Maisonneuve Patrick

    2011-09-01

    Full Text Available Abstract Background Pre-existing polyps, especially large polyps, are known to be the major source of colorectal cancer, but there is limited available information about factors that are associated with polyp size and polyp growth. We aim to determine factors associated with polyp size in different age groups. Methods Colonoscopy data were prospectively collected from 67 adult gastrointestinal practice sites in the United States between 2002 and 2007 using a computer-generated endoscopic report form. Data were transmitted to and stored in a central data repository, where all asymptomatic white (n = 78352) and black (n = 4289) patients who had a polyp finding on screening colonoscopy were identified. Univariate and multivariate analyses were performed of age, gender, performance site, race, polyp location, number of polyps, and family history as risk factors associated with the size of the largest polyp detected at colonoscopy. Results In both genders, the size of the largest polyp increased progressively with age in all age groups. Conclusions In both genders there is a significant increase in polyp size detected during screening colonoscopy with increasing age. Important additional risk factors associated with increasing polyp size are gender, race, polyp location, and number of polyps, with polyp multiplicity being the strongest risk factor. Previous family history of bowel cancer was not a risk factor.

  3. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.

  4. Uranium determination in soil samples using Eichrom resins

    International Nuclear Information System (INIS)

    Marabini, S.; Serdeiro, Nelidad H.

    2003-01-01

    Traditionally, the radiochemical methods for the determination of uranium activity in soil samples by alpha spectrometry use techniques such as solvent extraction, precipitation and ion exchange in the separation and purification stages. In recent years, new materials have been developed for use in extraction chromatography that are specific to actinide determinations. In the present method the long and tedious stages were eliminated, and reagent consumption and concentration were minimised. This new procedure was applied to soils since soil is one of the most complex matrices. In order to reduce time and chemical reagents, soil samples of up to 0.5 g were leached with nitric, hydrofluoric and perchloric acids in hermetically sealed Teflon vessels at 150 °C for 5 hours. UTEVA Eichrom resin was used for uranium separation and purification. The uranium activity concentration was determined by alpha spectrometry. Several standard samples were analysed and the results are presented. (author)

  5. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Complete text of publication follows. Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August 2003 in Hungary. The sampling was performed at two places simultaneously: in Budapest (urban site) and at K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling, with a duration of 24 hours per sample. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of the elements S, Si, Ca, W, Zn, Pb and Fe were investigated at K-puszta and in Budapest. Average fractions of the elemental concentrations (shown in Table 1) were calculated for each stage (in %) from the obtained distributions. The elements can be divided into two groups on the basis of these data. The majority of the particles containing Fe, Si, Ca (and Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentrations in Budapest than at K-puszta (Fig. 1). The second group consisted of S, Pb and (W). The majority of these elements was found in the 0.25-1 μm size range, with much higher concentrations in Budapest than at K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to either of the above-mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  6. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  7. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    Full Text Available We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.

  8. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01. These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcode of globally distributed species should be increased to 11-15.

  9. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

  10. AAS determination of total mercury content in environmental samples

    International Nuclear Information System (INIS)

    Moskalova, M.; Zemberyova, M.

    1997-01-01

    Two methods for the determination of total mercury content in environmental samples (soils and sediments) were compared: a dissolution procedure for soils, sediments and biological material under elevated pressure, followed by determination of mercury by cold vapour atomic absorption spectrometry using an MHS-1 system, and direct total mercury determination from soil samples without any chemical pretreatment using a Trace Mercury Analyzer TMA-254. The TMA-254 was also applied to the determination of mercury in various further standard reference materials. Good agreement with the certified values of environmental reference materials was obtained. (authors)

  11. Asymptotic size determines species abundance in the marine size spectrum

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Beyer, Jan

    2006-01-01

    The majority of higher organisms in the marine environment display indeterminate growth; that is, they continue to grow throughout their life, limited by an asymptotic size. We derive the abundance of species as a function of their asymptotic size. The derivation is based on size-spectrum theory, where population structure is derived from physiology and simple arguments regarding the predator-prey interaction. Using a hypothesis of constant satiation, which states that the average degree of satiation is independent of the size of an organism, the number of individuals with a given size is found to be proportional to the weight raised to the power -2.05, independent of the predator/prey size ratio. This is the first time the spectrum exponent has been derived solely on the basis of processes at the individual level. The theory furthermore predicts that the parameters in the von Bertalanffy growth function...

  12. Size determinations of colloidal fat emulsions

    DEFF Research Database (Denmark)

    Kuntsche, Judith; Klaus, Katrin; Steiniger, Frank

    2009-01-01

    Size and size distributions of colloidal dispersions are of crucial importance for their performance and safety. In the present study, commercially available fat emulsions (Lipofundin N, Lipofundin MCT and Lipidem) were analyzed by photon correlation spectroscopy, laser diffraction with adequate...... was checked with mixtures of monodisperse polystyrene nanospheres. In addition, the ultrastructure of Lipofundin N and Lipofundin MCT was investigated by cryo-electron microscopy. All different particle sizing methods gave different mean sizes and size distributions but overall, results were in reasonable...... agreement. By all methods, a larger mean droplet size (between 350 and 400 nm) as well as a broader distribution was measured for Lipofundin N compared to Lipofundin MCT and Lipidem (mean droplet size between about 280 and 320 nm). Size distributions of Lipofundin MCT and Lipidem were very similar...

  13. Study of the size effect by accurately determining the crystal parameters

    International Nuclear Information System (INIS)

    Seguin, Remy

    1973-01-01

    The size factor η = (1/a)(da/dC) was measured by comparing the variations in the crystal parameter as a function of the concentration, using samples of Al of various degrees of purity and Al-V and Al-Cu alloys with concentrations of less than 1 000 ppm. The results confirm the experimental results obtained with alloys supersaturated by ultra-rapid quenching but are not consistent with theoretical values, which appear to be too large for the case of transition elements in solution in Al. The parameter was determined from Kossel diagrams obtained using an electron probe microanalyzer. The measurement methods were developed and generalized by plotting curves representing the variation of the parameter as a function of temperature between 20 and 60 deg. C. Values were obtained for the parameter at given temperatures (± 0.1 deg. C) with an accuracy of Δa/a ≅ 8×10⁻⁶. (author) [fr]

  14. Effect of the grain size of the soil on the measured activity and variation in activity in surface and subsurface soil samples

    International Nuclear Information System (INIS)

    Sulaiti, H.A.; Rega, P.H.; Bradley, D.; Dahan, N.A.; Mugren, K.A.; Dosari, M.A.

    2014-01-01

    Correlation between grain size and activity concentrations of soils and concentrations of various radionuclides in surface and subsurface soils has been measured for samples taken in the State of Qatar by gamma-spectroscopy using a high purity germanium detector. From the obtained gamma-ray spectra, the activity concentrations of the 238U (226Ra) and 232Th (228Ac) natural decay series, the long-lived naturally occurring radionuclide 40K and the fission product radionuclide 137Cs have been determined. Gamma dose rate, radium equivalent, radiation hazard index and annual effective dose rates have also been estimated from these data. In order to observe the effect of grain size on the radioactivity of soil, three grain sizes were used, i.e., smaller than 0.5 mm; between 0.5 and 1 mm; and between 1 and 2 mm. The weighted activity concentrations of the 238U series nuclides in the 0.5-2 mm grain sizes of the samples were found to vary from 2.5±0.2 to 28.5±0.5 Bq/kg, whereas the weighted activity concentration of 40K varied from 21±4 to 188±10 Bq/kg. The weighted activity concentrations of the 238U series and 40K have been found to be higher in the finest grain size. However, for the 232Th series, the activity concentrations in the 1-2 mm grain size of one sample were found to be higher than in the 0.5-1 mm grain size. In the study of surface and subsurface soil samples, the activity concentration levels of the 238U series have been found to range from 15.9±0.3 to 24.1±0.9 Bq/kg in the surface soil samples (0-5 cm) and 14.5±0.3 to 23.6±0.5 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of the 232Th series have been found to lie in the range 5.7±0.2 to 13.7±0.5 Bq/kg in the surface soil samples (0-5 cm) and 4.1±0.2 to 15.6±0.3 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of 40K were in the range 150±8 to 290±17 Bq/kg, in the surface

  15. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco, spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were only lifted within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface, on which the dust devils evolved, also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable with respect to their vertical grain size distribution and relative particle load, although both dust devils differed in their dimensions and intensities. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  16. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
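
    The simulation idea in this record is easy to reproduce in outline. The sketch below (my own illustration, not the study's code) estimates how often the Shapiro-Wilk test keeps Gaussian samples and flags lognormal samples at n = 30 and n = 60; the test choice, replicate count and alpha = 0.05 are assumptions.

```python
# Small simulation of normality-test performance at small sample sizes:
# specificity = fraction of Gaussian samples correctly not rejected,
# sensitivity = fraction of lognormal samples correctly rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(sampler, n, reps=2000, alpha=0.05):
    """Fraction of simulated samples for which Shapiro-Wilk rejects normality."""
    return np.mean([stats.shapiro(sampler(n))[1] < alpha for _ in range(reps)])

for n in (30, 60):
    fp = rejection_rate(lambda k: rng.normal(size=k), n)      # Gaussian parent
    tp = rejection_rate(lambda k: rng.lognormal(size=k), n)   # lognormal parent
    print(f"n={n}: specificity={1 - fp:.2f}  sensitivity={tp:.2f}")
```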

  17. Synthesis, optical characterization, and size distribution determination by curve resolution methods of water-soluble CdSe quantum dots

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Calink Indiara do Livramento; Carvalho, Melissa Souza; Raphael, Ellen; Ferrari, Jefferson Luis; Schiavon, Marco Antonio, E-mail: schiavon@ufsj.edu.br [Universidade Federal de Sao Joao del-Rei (UFSJ), MG (Brazil). Grupo de Pesquisa em Quimica de Materiais; Dantas, Clecio [Universidade Estadual do Maranhao (LQCINMETRIA/UEMA), Caxias, MA (Brazil). Lab. de Quimica Computacional Inorganica e Quimiometria

    2016-11-15

    In this work a colloidal approach to synthesize water-soluble CdSe quantum dots (QDs) bearing a surface ligand, such as thioglycolic acid (TGA), 3-mercaptopropionic acid (MPA), glutathione (GSH), or thioglycerol (TGH) was applied. The synthesized material was characterized by X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), UV-visible spectroscopy (UV-Vis), and fluorescence spectroscopy (PL). Additionally, a comparative study of the optical properties of different CdSe QDs was performed, demonstrating how the surface ligand affected crystal growth. The particle sizes were calculated from a polynomial function that correlates the particle size with the maximum fluorescence position. Curve resolution methods (EFA and MCR-ALS) were employed to decompose a series of fluorescence spectra to investigate the CdSe QDs size distribution and determine the number of fractions with different particle sizes. The results for the MPA-capped CdSe sample showed only two main fractions with different particle sizes, with maximum emission at 642 and 686 nm. The diameters calculated from these emission maxima were, respectively, 2.74 and 3.05 nm. (author)

  18. Determination of copper in powdered chocolate samples by slurry-sampling flame atomic-absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Walter N.L. dos; Silva, Erik G.P. da; Fernandes, Marcelo S.; Araujo, Rennan G.O.; Costa, Antônio C.S.; Ferreira, Sergio L.C. [Nucleo de Excelencia em Quimica Analitica da Bahia, Universidade Federal da Bahia, Instituto de Quimica, Salvador, Bahia (Brazil); Vale, M.G.R. [Instituto de Quimica, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul (Brazil)

    2005-06-01

    Chocolate is a complex sample with a high content of organic compounds and its analysis generally involves digestion procedures that might include the risk of losses and/or contamination. The determination of copper in chocolate is important because copper compounds are extensively used as fungicides in the farming of cocoa. In this paper, a slurry-sampling flame atomic-absorption spectrometric method is proposed for determination of copper in powdered chocolate samples. Optimization was carried out using univariate methodology involving the variables nature and concentration of the acid solution for slurry preparation, sonication time, and sample mass. The recommended conditions include a sample mass of 0.2 g, 2.0 mol L⁻¹ hydrochloric acid solution, and a sonication time of 15 min. The calibration curve was prepared using aqueous copper standards in 2.0 mol L⁻¹ hydrochloric acid. This method allowed determination of copper in chocolate with a detection limit of 0.4 μg g⁻¹ and precision, expressed as relative standard deviation (RSD), of 2.5% (n=10) for a copper content of approximately 30 μg g⁻¹, using a chocolate mass of 0.2 g. The accuracy was confirmed by analyzing the certified reference materials NIST SRM 1568a rice flour and NIES CRM 10-b rice flour. The proposed method was used for determination of copper in three powdered chocolate samples, the copper content of which varied between 26.6 and 31.5 μg g⁻¹. The results showed no significant differences with those obtained after complete digestion, using a t-test for comparison. (orig.)

  19. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3.  Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...

  20. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.

  1. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach by specular reflection approximation. Temperature and radius dependence of the transverse magnetoresistance of cylindrical Bismuth nanowires are given. The obtained values are in good agreement with the experimental results reported by Heremans et al. - Highlights: ► In this study, effects of sample size on the galvanomagnetic properties of Bi nanowires were explained by the Parrott theorem, by solving the Boltzmann Transport Equation. ► Transverse magnetoresistance (TMR) ratios have been calculated by specular reflection approximation. ► Temperature and radius dependence of the transverse magnetoresistance of cylindrical Bismuth nanowires are given. ► The obtained values are in good agreement with the experimental results reported by Heremans et al.

  2. An overview of the main foodstuff sample preparation technologies for tetracycline residue determination.

    Science.gov (United States)

    Pérez-Rodríguez, Michael; Pellerano, Roberto Gerardo; Pezza, Leonardo; Pezza, Helena Redigolo

    2018-05-15

    Tetracyclines are widely used for both the treatment and prevention of diseases in animals as well as for the promotion of rapid animal growth and weight gain. This practice may result in trace amounts of these drugs in products of animal origin, such as milk and eggs, posing serious risks to human health. The presence of tetracycline residues in foods can lead to the transmission of antibiotic-resistant pathogenic bacteria through the food chain. In order to ensure food safety and avoid exposure to these substances, national and international regulatory agencies have established tolerance levels for authorized veterinary drugs, including tetracycline antimicrobials. In view of that, numerous sensitive and specific methods have been developed for the quantification of these compounds in different food matrices. One will note, however, that the determination of trace residues in foods such as milk and eggs often requires extensive sample extraction and preparation prior to conducting instrumental analysis. Sample pretreatment is usually the most complicated step in the analytical process and covers both cleaning and pre-concentration. Optimal sample preparation can reduce analysis time and sources of error, enhance sensitivity, apart from enabling unequivocal identification, confirmation and quantification of target analytes. The development and implementation of more environmentally friendly analytical procedures, which involve the use of less hazardous solvents and smaller sample sizes compared to traditional methods, is a rapidly increasing trend in analytical chemistry. This review seeks to provide an updated overview of the main trends in sample preparation for the determination of tetracycline residues in foodstuffs. The applicability of several extraction and clean-up techniques employed in the analysis of foodstuffs, especially milk and egg samples, is also thoroughly discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...

  4. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
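
    A heavily simplified version of the underlying Monte Carlo idea is sketched below: estimate power for a dose-related trend in tumor incidence with a Cochran-Armitage trend test. The competing risks, occult tumors and interval sacrifices handled by the actual tool are ignored, and the dose scores, tumor probabilities and group sizes are illustrative assumptions.

```python
# Monte Carlo power estimation for a dose-related trend in tumor incidence,
# using a hand-coded Cochran-Armitage trend test on binomial tumor counts.
# This is only an illustration of the simulation idea, not the Web tool itself.
import numpy as np

rng = np.random.default_rng(42)

def ca_trend_z(counts, n_per_group, doses):
    """Cochran-Armitage trend test Z statistic for binomial tumor counts."""
    counts = np.asarray(counts, float)
    n = np.asarray(n_per_group, float)
    d = np.asarray(doses, float)
    p_bar = counts.sum() / n.sum()
    t = np.sum(d * (counts - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(n * d**2) - np.sum(n * d)**2 / n.sum())
    if var <= 0:                       # degenerate sample (all 0s or all 1s)
        return 0.0
    return t / np.sqrt(var)

def power(n_per_group, doses, tumor_probs, n_sim=5000, alpha=0.05):
    crit = 1.645                       # one-sided 5% normal quantile
    hits = 0
    for _ in range(n_sim):
        counts = rng.binomial(n_per_group, tumor_probs)
        hits += ca_trend_z(counts, [n_per_group] * len(doses), doses) > crit
    return hits / n_sim

doses = [0, 1, 2, 4]                   # illustrative dose scores
probs = [0.05, 0.10, 0.15, 0.25]       # assumed dose-response in tumor incidence
for n in (30, 50, 70):
    print(f"n per group = {n}: estimated power = {power(n, doses, probs):.2f}")
```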

  5. Simultaneous determination of plutonium and uranium in environmental samples

    International Nuclear Information System (INIS)

    Jiao Shufen

    1993-01-01

    Plutonium and uranium in a plant sample ash was simultaneously determined by using anion exchange resin columns, and concentrated hydrochloric acid and nitric acid. At the final stage of the determination of the nuclides, each of them was electrodeposited together with a little amount of molybdenum carrier onto a stainless steel plate and measured by α-ray spectrometer. The recoveries of uranium and plutonium from the plant samples determined by adding internal standard 236 Pu which was 100% and 63%, respectively

  6. Sr-90 determination in aqueous and soils samples

    International Nuclear Information System (INIS)

    Gonzalez Sintas, Maria F.; Cerchietti, Maria L.; Arguelles, Maria G.

    2009-01-01

    The main objective of this paper is to evaluate the method for Sr-90 determination in aqueous samples and soils. The Area and Personal Dosimetry laboratory (DPA) determines the presence of Sr-90 by Liquid Scintillation Counting (LSC), applying the double-window method and the corresponding adjustments. Calibration is performed with standard solutions of 90Sr/90Y, where the 90Sr and 90Y spectral zones are optimized. The initial treatment of the liquid samples includes concentration by evaporation, while the solid ones are dissolved by microwave and acidic digestion. The separation of the analyte involves a selective chromatographic extraction. An average efficiency for 90Sr of 77 ± 1% was obtained; the factor a/b was 0.85 ± 0.01 and the recovery 82 ± 8%. The resultant MDA was 0.10 Bq/L in aqueous samples and 0.10 Bq/g in solid samples. (author)

  7. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
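
    The "random chance" scenario can be illustrated with a few lines of code. The sketch below is my own reading of the setup, not the author's simulation: sources are drawn at random from a hypothetical population of codes until every code has been observed at least once.

```python
# Simulate the sample size needed to reach theoretical saturation under the
# "random chance" scenario: sample information sources until every code in a
# hypothetical population has been observed at least once.
import numpy as np

rng = np.random.default_rng(7)

def sample_size_to_saturation(code_probs, codes_per_source=5, max_sources=10_000):
    """Number of randomly drawn sources needed to observe every code once."""
    n_codes = len(code_probs)
    seen = np.zeros(n_codes, dtype=bool)
    for n_sources in range(1, max_sources + 1):
        # each source "holds" a few codes, drawn with unequal probabilities
        held = rng.choice(n_codes, size=codes_per_source, replace=False, p=code_probs)
        seen[held] = True
        if seen.all():
            return n_sources
    return max_sources

# hypothetical population: 30 codes, a third of them comparatively rare
probs = np.array([10.0] * 20 + [2.0] * 10)
probs /= probs.sum()
sizes = [sample_size_to_saturation(probs) for _ in range(200)]
print(f"median sample size to saturation: {np.median(sizes):.0f}")
```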

  8. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼ 5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in solution containing carbon substantially enhanced precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report

  9. Mesh size in Lichtenstein repair: a systematic review and meta-analysis to determine the importance of mesh size.

    Science.gov (United States)

    Seker, D; Oztuna, D; Kulacoglu, H; Genc, Y; Akcil, M

    2013-04-01

    Small mesh size has been recognized as one of the factors responsible for recurrence after Lichtenstein hernia repair due to insufficient coverage or mesh shrinkage. The Lichtenstein Hernia Institute recommends a 7 × 15 cm mesh that can be trimmed up to 2 cm from the lateral side. We performed a systematic review to determine surgeons' mesh size preference for the Lichtenstein hernia repair and made a meta-analysis to determine the effect of mesh size, mesh type, and length of follow-up time on recurrence. Two medical databases, PubMed and ISI Web of Science, were systematically searched using the key word "Lichtenstein repair." All full text papers were selected. Publications mentioning mesh size were brought for further analysis. A mesh surface area of 90 cm(2) was accepted as the threshold for defining the mesh as small or large. Also, a subgroup analysis for recurrence pooled proportion according to the mesh size, mesh type, and follow-up period was done. In total, 514 papers were obtained. There were no prospective or retrospective clinical studies comparing mesh size and clinical outcome. A total of 141 papers were duplicated in both databases. As a result, 373 papers were obtained. The full text was available in over 95 % of papers. Only 41 (11.2 %) papers discussed mesh size. In 29 studies, a mesh larger than 90 cm(2) was used. The most frequently preferred commercial mesh size was 7.5 × 15 cm. No papers mentioned the size of the mesh after trimming. There was no information about the relationship between mesh size and patient BMI. The pooled proportion in recurrence for small meshes was 0.0019 (95 % confidence interval: 0.007-0.0036), favoring large meshes to decrease the chance of recurrence. Recurrence becomes more marked when follow-up period is longer than 1 year (p < 0.001). Heavy meshes also decreased recurrence (p = 0.015). This systematic review demonstrates that the size of the mesh used in Lichtenstein hernia repair is rarely

  10. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have a significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...

  11. Flow injection determination of lead and cadmium in hair samples from workers exposed to welding fumes

    International Nuclear Information System (INIS)

    Cespon-Romero, R.M.; Yebra-Biurrun, M.C.

    2007-01-01

    A flow injection procedure involving continuous acid leaching, with determination of lead and cadmium by flame atomic absorption spectrometry, is proposed for hair samples of persons in permanent contact with a polluted workplace environment. Variables such as sonication time, nature and concentration of the acid solution used as leaching solution, leaching temperature, flow-rate of the continuous manifold, leaching solution volume and hair particle size were simultaneously studied by applying a Plackett-Burman design approach. Results showed that nitric acid concentration (leaching solution), leaching temperature and sonication time were statistically significant variables (confidence interval of 95%). These last two variables were finally optimised by using a central composite design. The proposed procedure allowed the determination of cadmium and lead with limits of detection of 0.1 and 1.0 μg g⁻¹, respectively. The accuracy of the developed procedure was evaluated by the analysis of a certified reference material (CRM 397, human hair, from the BCR). The proposed method was applied with satisfactory results to the determination of Cd and Pb in human hair samples of workers exposed to welding fumes

  12. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts

  13. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates, and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
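
    For orientation, a common textbook approximation for the per-group sample size with negative binomial outcomes is sketched below. It is based on the asymptotic variance of the log rate ratio and assumes equal follow-up; it is not the exact lower and upper bounds derived in this paper, and the rates and dispersion used are illustrative.

```python
# Approximate per-group sample size for comparing two event rates under a
# negative binomial model with dispersion k (Var(count) = mu + k * mu**2).
# This is a generic asymptotic approximation, not the paper's bounds.
from math import log
from scipy.stats import norm

def nb_sample_size(rate0, rate1, dispersion, follow_up=1.0,
                   alpha=0.05, power=0.80):
    """n per group from the variance of the estimated log rate ratio."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_log_rr = (1 / (rate0 * follow_up) + dispersion) \
               + (1 / (rate1 * follow_up) + dispersion)
    return z**2 * var_log_rr / log(rate1 / rate0)**2

# illustrative assumptions: control rate 0.8 events/year, a 25% reduction,
# dispersion 0.5, one year of follow-up per subject
print(round(nb_sample_size(0.8, 0.6, 0.5)))
```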

  14. Does mindfulness matter? Everyday mindfulness, mindful eating and self-reported serving size of energy dense foods among a sample of South Australian adults.

    Science.gov (United States)

    Beshara, Monica; Hutchinson, Amanda D; Wilson, Carlene

    2013-08-01

    Serving size is a modifiable determinant of energy consumption, and an important factor to address in the prevention and treatment of obesity. The present study tested an hypothesised negative association between individuals' everyday mindfulness and self-reported serving size of energy dense foods. The mediating role of mindful eating was also explored. A community sample of 171 South Australian adults completed self-report measures of everyday mindfulness and mindful eating. The dependent measure was participants' self-reported average serving size of energy dense foods consumed in the preceding week. Participants who reported higher levels of everyday mindfulness were more mindful eaters (r=0.41, p<0.001). Mindful eating fully mediated the negative association between everyday mindfulness and serving size. The domains of mindful eating most relevant to serving size included emotional and disinhibited eating. Results suggest that mindful eating may have a greater influence on serving size than daily mindfulness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article is about how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
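
    The rule of thumb reads as follows in code: add the expected number of dropouts to the dropout-free sample size, rather than dividing by the completion rate. The numbers below are illustrative.

```python
# Tiny sketch of the dropout adjustment described above, contrasted with the
# conventional 1/(1 - d) inflation; n and d are illustrative assumptions.
def adjust_add_dropouts(n_no_dropout, dropout_rate):
    """Rule from the abstract: add the expected number of dropouts."""
    return round(n_no_dropout * (1 + dropout_rate))

def adjust_inflate(n_no_dropout, dropout_rate):
    """Conventional inflation: divide by the expected completion rate."""
    return round(n_no_dropout / (1 - dropout_rate))

n, d = 60, 0.20
print(adjust_add_dropouts(n, d), adjust_inflate(n, d))   # 72 vs 75
```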

  16. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    Science.gov (United States)

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally-friendly method is proposed for determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. Fractional factorial design has been employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and optimization of measuring conditions. The effects of seven analytical variables including particle size, concentration of glycerol and HNO3 for stabilization and analyte extraction, respectively, the effect of ultrasonic agitation for slurry homogenization, concentration of chemical modifier, pyrolysis and atomization temperature were investigated by a 2^(7-3) replicate (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit of 0.016 mg kg⁻¹ and a characteristic mass of 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 µg Rh and 50 µg citric acid was found satisfactory for the analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time of flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  18. The Sex Determination Gene transformer Regulates Male-Female Differences in Drosophila Body Size.

    Science.gov (United States)

    Rideout, Elizabeth J; Narsaiya, Marcus S; Grewal, Savraj S

    2015-12-01

    Almost all animals show sex differences in body size. For example, in Drosophila, females are larger than males. Although Drosophila is widely used as a model to study growth, the mechanisms underlying this male-female difference in size remain unclear. Here, we describe a novel role for the sex determination gene transformer (tra) in promoting female body growth. Normally, Tra is expressed only in females. We find that loss of Tra in female larvae decreases body size, while ectopic Tra expression in males increases body size. Although we find that Tra exerts autonomous effects on cell size, we also discovered that Tra expression in the fat body augments female body size in a non cell-autonomous manner. These effects of Tra do not require its only known targets doublesex and fruitless. Instead, Tra expression in the female fat body promotes growth by stimulating the secretion of insulin-like peptides from insulin producing cells in the brain. Our data suggest a model of sex-specific growth in which body size is regulated by a previously unrecognized branch of the sex determination pathway, and identify Tra as a novel link between sex and the conserved insulin signaling pathway.

  19. The Sex Determination Gene transformer Regulates Male-Female Differences in Drosophila Body Size.

    Directory of Open Access Journals (Sweden)

    Elizabeth J Rideout

    2015-12-01

    Full Text Available Almost all animals show sex differences in body size. For example, in Drosophila, females are larger than males. Although Drosophila is widely used as a model to study growth, the mechanisms underlying this male-female difference in size remain unclear. Here, we describe a novel role for the sex determination gene transformer (tra) in promoting female body growth. Normally, Tra is expressed only in females. We find that loss of Tra in female larvae decreases body size, while ectopic Tra expression in males increases body size. Although we find that Tra exerts autonomous effects on cell size, we also discovered that Tra expression in the fat body augments female body size in a non cell-autonomous manner. These effects of Tra do not require its only known targets doublesex and fruitless. Instead, Tra expression in the female fat body promotes growth by stimulating the secretion of insulin-like peptides from insulin producing cells in the brain. Our data suggest a model of sex-specific growth in which body size is regulated by a previously unrecognized branch of the sex determination pathway, and identify Tra as a novel link between sex and the conserved insulin signaling pathway.

  20. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. Additional to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A statistical rationale for establishing process quality control limits using fixed sample size, for critical current verification of SSC superconducting wire

    International Nuclear Information System (INIS)

    Pollock, D.A.; Brown, G.; Capone, D.W. II; Christopherson, D.; Seuntjens, J.M.; Woltz, J.

    1992-01-01

    This work has demonstrated the statistical concepts behind the XBAR R method for determining sample limits to verify billet Ic performance and process uniformity. Using a preliminary population estimate for μ and σ from a stable production lot of only 5 billets, we have shown that reasonable sensitivity to systematic process drift and random within-billet variation may be achieved by using per-billet subgroup sizes of moderate proportions. The effects of subgroup size (n) and sampling risk (α and β) on the calculated control limits have been shown to be important factors that need to be carefully considered when selecting the actual number of measurements to be used per billet for each supplier process. Given the present method of testing, in which individual wire samples are ramped to Ic only once, with measurement uncertainty due to repeatability and reproducibility (typically > 1.4%), large subgroups (i.e. > 30 per billet) appear to be unnecessary, except as an inspection tool to confirm wire process history for each spool. The introduction of the XBAR R method or a similar Statistical Quality Control procedure is recommended for use in the superconducting wire production program, particularly when the program transitions from requiring tests for all pieces of wire to sampling each production unit
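
    The XBAR R limits themselves follow the usual Shewhart construction. The sketch below uses textbook control-chart constants (A2, D3, D4) and made-up Ic measurements; it is not the SSC data or the exact procedure of the report.

```python
# X-bar / R control limits from subgrouped measurements, using standard
# Shewhart chart constants for a few subgroup sizes (textbook values).
import numpy as np

SHEWHART = {                      # n: (A2, D3, D4)
    4: (0.729, 0.0, 2.282),
    5: (0.577, 0.0, 2.114),
    6: (0.483, 0.0, 2.004),
}

def xbar_r_limits(subgroups):
    """Control limits from a 2-D array of shape (num_subgroups, n)."""
    x = np.asarray(subgroups, float)
    n = x.shape[1]
    a2, d3, d4 = SHEWHART[n]
    xbar_bar = x.mean(axis=1).mean()                    # grand mean
    r_bar = (x.max(axis=1) - x.min(axis=1)).mean()      # mean subgroup range
    return {"xbar": (xbar_bar - a2 * r_bar, xbar_bar + a2 * r_bar),
            "range": (d3 * r_bar, d4 * r_bar)}

rng = np.random.default_rng(3)
ic = rng.normal(loc=300.0, scale=4.0, size=(20, 5))     # 20 billets, 5 samples each
print(xbar_r_limits(ic))
```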

  2. Determination of the particle size distribution of aerosols by means of a diffusion battery

    International Nuclear Information System (INIS)

    Maigne, J.P.

    1978-09-01

    The different methods allowing to determine the particle size distribution of aerosols by means of diffusion batteries are described. To that purpose, a new method for the processing of experimental data (percentages of particles trapped by the battery vs flow rate) was developed on the basis of calculation principles which are described and assessed. This method was first tested by numerical simulation from a priori particle size distributions and then verified experimentally using a fine uranine aerosol whose particle size distribution as determined by our method was compared with the distribution previously obtained by electron microscopy. The method can be applied to the determination of particle size distribution spectra of fine aerosols produced by 'radiolysis' of atmospheric gaseous impurities. Two other applications concern the detection threshold of the condensation nuclei counter and the 'critical' radii of 'radiolysis' particles [fr

  3. Pore size determination using normalized J-function for different hydraulic flow units

    Directory of Open Access Journals (Sweden)

    Ali Abedini

    2015-06-01

    Full Text Available Pore size determination of hydrocarbon reservoirs is one of the main challenging areas in reservoir studies. Precise estimation of this parameter leads to enhanced reservoir simulation, process evaluation, and further forecasting of reservoir behavior. Hence, it is of great importance to estimate the pore size of reservoir rocks with an appropriate accuracy. In the present study, a modified J-function was developed and applied to determine the pore radius in one of the hydrocarbon reservoir rocks located in the Middle East. The capillary pressure data vs. water saturation (Pc–Sw) as well as routine reservoir core analysis data, including porosity (φ) and permeability (k), were used to develop the J-function. First, the normalized porosity (φz), the rock quality index (RQI), and the flow zone indicator (FZI) concepts were used to categorize all data into discrete hydraulic flow units (HFU) containing unique pore geometry and bedding characteristics. Thereafter, the modified J-function was used to normalize all capillary pressure curves corresponding to each of the predetermined HFU. The results showed that the reservoir rock was classified into five separate rock types with definite HFU and reservoir pore geometry. Eventually, the pore radius for each of these HFUs was determined using a developed equation obtained from the normalized J-function corresponding to each HFU. The proposed equation is a function of reservoir rock characteristics including φz, FZI, lithology index (J*), and pore size distribution index (ɛ). Using this methodology, the reservoir under study was classified into five discrete HFU with unique equations for permeability, normalized J-function and pore size. The proposed technique can be applied to any reservoir to determine the pore size of the reservoir rock, especially one with a high degree of heterogeneity in the reservoir rock properties.
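
    The record's modified J-function is not reproduced here, but the classical Leverett normalization it builds on, together with a Young-Laplace pore-radius estimate, can be sketched as follows; the interfacial tension, contact angle and capillary pressure points are assumed values.

```python
# Classical Leverett J-function normalization and a Young-Laplace pore-radius
# estimate on made-up capillary pressure data.  The paper's modified J-function
# (built from phi_z, FZI, J*, and epsilon) is not reproduced here.
import numpy as np

SIGMA = 72.0e-3      # interfacial tension, N/m (assumed air-brine value)
THETA = 0.0          # contact angle, radians (assumed)

def leverett_j(pc_pa, perm_m2, phi):
    """Dimensionless J(Sw) = Pc * sqrt(k/phi) / (sigma * cos(theta))."""
    return pc_pa * np.sqrt(perm_m2 / phi) / (SIGMA * np.cos(THETA))

def pore_radius(pc_pa):
    """Young-Laplace capillary radius r = 2 * sigma * cos(theta) / Pc."""
    return 2 * SIGMA * np.cos(THETA) / pc_pa

pc = np.array([5e3, 2e4, 1e5])          # Pa, illustrative capillary pressures
k = 100e-15                             # roughly 100 mD expressed in m^2
phi = 0.20
print(leverett_j(pc, k, phi))
print(pore_radius(pc) * 1e6, "micrometres")
```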

  4. Comparison of three analytical methods to measure the size of silver nanoparticles in real environmental water and wastewater samples

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Ying-jie [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Shih, Yang-hsin, E-mail: yhs@ntu.edu.tw [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Su, Chiu-Hun [Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310, Taiwan (China); Ho, Han-Chen [Department of Anatomy, Tzu-Chi University, Hualien 970, Taiwan (China)

    2017-01-15

    Highlights: • Three emerging techniques to detect NPs in the aquatic environment were evaluated. • The pretreatment of centrifugation to decrease the interference was established. • Asymmetric flow field flow fractionation has a low recovery of NPs. • Hydrodynamic chromatography is recommended to be a low-cost screening tool. • Single particle ICPMS is recommended to accurately measure trace NPs in water. - Abstract: Due to the widespread application of engineered nanoparticles, their potential risk to ecosystems and human health is of growing concern. Silver nanoparticles (Ag NPs) are one of the most extensively produced NPs. Thus, this study aims to develop a method to detect Ag NPs in different aquatic systems. In complex media, three emerging techniques are compared, including hydrodynamic chromatography (HDC), asymmetric flow field flow fractionation (AF4) and single particle inductively coupled plasma-mass spectrometry (SP-ICP-MS). The pre-treatment procedure of centrifugation is evaluated. HDC can estimate the Ag NP sizes, which were consistent with the results obtained from DLS. AF4 can also determine the size of Ag NPs but with lower recoveries, which could result from the interactions between Ag NPs and the working membrane. For the SP-ICP-MS, both the particle size and concentrations can be determined with high Ag NP recoveries. The particle size resulting from SP-ICP-MS also corresponded to the transmission electron microscopy observation (p > 0.05). Therefore, HDC and SP-ICP-MS are recommended for environmental analysis of the samples after our established pre-treatment process. The findings of this study propose a preliminary technique to more accurately determine the Ag NPs in aquatic environments and to use this knowledge to evaluate the environmental impact of manufactured NPs.

  5. Comparison of three analytical methods to measure the size of silver nanoparticles in real environmental water and wastewater samples

    International Nuclear Information System (INIS)

    Chang, Ying-jie; Shih, Yang-hsin; Su, Chiu-Hun; Ho, Han-Chen

    2017-01-01

    Highlights: • Three emerging techniques to detect NPs in the aquatic environment were evaluated. • The pretreatment of centrifugation to decrease the interference was established. • Asymmetric flow field flow fractionation has a low recovery of NPs. • Hydrodynamic chromatography is recommended to be a low-cost screening tool. • Single particle ICPMS is recommended to accurately measure trace NPs in water. - Abstract: Due to the widespread application of engineered nanoparticles, their potential risk to ecosystems and human health is of growing concern. Silver nanoparticles (Ag NPs) are one of the most extensively produced NPs. Thus, this study aims to develop a method to detect Ag NPs in different aquatic systems. In complex media, three emerging techniques are compared, including hydrodynamic chromatography (HDC), asymmetric flow field flow fractionation (AF4) and single particle inductively coupled plasma-mass spectrometry (SP-ICP-MS). The pre-treatment procedure of centrifugation is evaluated. HDC can estimate the Ag NP sizes, which were consistent with the results obtained from DLS. AF4 can also determine the size of Ag NPs but with lower recoveries, which could result from the interactions between Ag NPs and the working membrane. For the SP-ICP-MS, both the particle size and concentrations can be determined with high Ag NP recoveries. The particle size resulting from SP-ICP-MS also corresponded to the transmission electron microscopy observation (p > 0.05). Therefore, HDC and SP-ICP-MS are recommended for environmental analysis of the samples after our established pre-treatment process. The findings of this study propose a preliminary technique to more accurately determine the Ag NPs in aquatic environments and to use this knowledge to evaluate the environmental impact of manufactured NPs.

  6. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction makes significant practical sense for stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window, for the next-step-ahead forecasting, rolls on by adding the most recent derived prediction result while deleting the first value of the formerly used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
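
    A minimal reading of the rolling mechanism is sketched below (my own illustration, not the authors' implementation): fit an AR model on the current window, forecast one step ahead, then roll the window by appending the forecast and dropping the oldest value. The AR order and the settlement-like series are assumptions.

```python
# Rolling one-step-ahead AR forecasting: refit on the current window at each
# step, append the prediction, and drop the oldest observation.
import numpy as np

def fit_ar(window, order=2):
    """Least-squares AR(order) coefficients (constant plus lag terms)."""
    y = np.asarray(window, float)
    rows = [np.r_[1.0, y[t - order:t][::-1]] for t in range(order, len(y))]
    coef, *_ = np.linalg.lstsq(np.array(rows), y[order:], rcond=None)
    return coef

def rolling_forecast(series, horizon=3, order=2):
    window = list(series)
    preds = []
    for _ in range(horizon):
        coef = fit_ar(window, order)
        x = np.r_[1.0, np.asarray(window[-order:])[::-1]]
        yhat = float(coef @ x)
        preds.append(yhat)
        window = window[1:] + [yhat]      # roll: drop oldest, append prediction
    return preds

settlement = [2.1, 2.9, 3.6, 4.1, 4.5, 4.8, 5.0, 5.1]   # made-up small sample
print(rolling_forecast(settlement))
```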

  7. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of all possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination (θ) distance between the marker and the trait loci empirically reduced the power for both linkage and exclusion, as a function of approximately (1-2θ)⁴.
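
    A rough consequence of the reported (1-2θ)⁴ attenuation is that the required sample size inflates by roughly 1/(1-2θ)⁴ as the marker moves away from the trait locus. The sketch below illustrates this scaling; the baseline sample size is an assumed number, not one taken from the paper.

```python
# Illustrative scaling only: if n0 sibling pairs suffice at theta = 0, the
# required sample grows roughly like n0 / (1 - 2*theta)**4 with recombination
# distance, following the approximation quoted in the abstract.
def inflated_n(n0, theta):
    return n0 / (1 - 2 * theta) ** 4

for theta in (0.0, 0.05, 0.10, 0.20):
    print(f"theta={theta:.2f}: n ~ {inflated_n(200, theta):.0f}")
```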

  8. 49 CFR 26.65 - What rules govern business size determinations?

    Science.gov (United States)

    2010-10-01

    ... (including its affiliates) must be an existing small business, as defined by Small Business Administration... 49 Transportation 1 2010-10-01 2010-10-01 false What rules govern business size determinations? 26... DISADVANTAGED BUSINESS ENTERPRISES IN DEPARTMENT OF TRANSPORTATION FINANCIAL ASSISTANCE PROGRAMS Certification...

  9. Study of phosphorus determination in biological samples

    International Nuclear Information System (INIS)

    Oliveira, Rosangela Magda de.

    1994-01-01

    In this paper, phosphorus determination by neutron activation analysis in milk and bone samples was studied employing both instrumental and radiochemical separation methods. The analysis with radiochemical separation consisted of the simultaneous irradiation of the samples and standards during 30 minutes, dissolution of the samples, addition of hold-back carrier, precipitation of phosphorus with ammonium phosphomolybdate (A.M.P.) and phosphorus-32 counting using a Geiger-Mueller detector. The instrumental analysis consisted of the simultaneous irradiation of the samples and standards during 30 minutes, transfer of the samples into a counting planchet and measurement of the beta radiation emitted by phosphorus-32, after a suitable decay period. After the phosphorus analysis methods were established they were applied to both commercial milk and animal bone samples, and the data obtained by the instrumental and radiochemical separation methods for each sample were compared. In this work, it became possible to obtain analysis methods for phosphorus that can be applied independently of the sample quantity available, the phosphorus content in the samples or interferences that may be present in them. (author). 51 refs., 7 figs., 4 tabs

  10. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as the identifying trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, which arises from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  11. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  12. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.

  13. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    Science.gov (United States)

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
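
    The following is a minimal simulation in the spirit of the "random chance" scenario described above: sources are sampled at random, each code is observed with a common mean probability, and sampling stops once every code has been seen at least once. The parameter values and function name are hypothetical; the author's simulation design is more elaborate.

```python
import random

def steps_to_saturation(n_codes=30, mean_prob=0.2, max_steps=10_000, seed=1):
    """Simulate how many randomly sampled sources are needed until every code
    has been observed at least once (the 'random chance' scenario)."""
    rng = random.Random(seed)
    observed = set()
    for step in range(1, max_steps + 1):
        # each sampled source independently reveals each code with probability mean_prob
        for code in range(n_codes):
            if rng.random() < mean_prob:
                observed.add(code)
        if len(observed) == n_codes:
            return step
    return None  # saturation not reached within max_steps

sizes = [steps_to_saturation(seed=s) for s in range(100)]
print(sum(sizes) / len(sizes))  # average sample size needed to reach saturation
```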

  14. Determination of the lateral size and thickness of solution-processed graphene flakes

    Science.gov (United States)

    Lin, Li-Shang; Bin-Tay, Wei; Aslam, Zabeada; Westwood, A. V. K.; Brydson, R.

    2017-09-01

    We present a method to determine the lateral size distribution of solution-processed graphene via direct image analysis techniques. Initially, transmission electron microscopy (TEM) and optical microscopy (OM) were correlated and used to provide a reliable benchmark. A rapid, automated OM method was then developed to obtain the distribution from thousands of flakes, avoiding statistical uncertainties and providing high accuracy. Dynamic light scattering (DLS) was further employed to develop an in-situ method to derive the number particle size distribution (PSD) for a dispersion, with a deviation lower than 22% in the sub-micron regime. Methods for determining flake thickness are also discussed.

  15. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made based on use of left-overs of materials, used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in human resources, available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  16. Optimised method for the routine determination of Technetium-99 in environmental samples by liquid scintillation counting

    International Nuclear Information System (INIS)

    Wigley, F.; Warwick, P.E.; Croudace, I.W.; Caborn, J.; Sanchez, A.L.

    1999-01-01

    A method has been developed for the routine determination of 99Tc in a range of environmental matrices using 99mTc (t1/2 = 6.06 h) as an internal yield monitor. Samples are ignited stepwise to 550 °C and the 99Tc is extracted from the ignited residue with 8 M nitric acid. Many contaminants are co-precipitated with Fe(OH)3 and the Tc in the supernatant is pre-concentrated and further purified using anion exchange chromatography. Final separation of Tc from Ru is achieved by extraction of Tc into 5% tri-n-octylamine in xylene from 2 M sulphuric acid. The xylene fraction is mixed directly with a commercial liquid scintillation cocktail. The chemical yield is determined through the measurement of 99mTc by gamma spectrometry, and the 99Tc activity is measured using liquid scintillation counting after a further two weeks to allow decay of the 99mTc activity. Typical recoveries for this method are of the order of 70-95%. The method has a detection limit of 1.7 Bq kg⁻¹ based on a 2 h count time and a 10 g sample size. The chemical separation for 24 samples of sediment or marine biota can be completed by one analyst in a working week. A further week is required to allow the samples to decay before determination. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  17. [Particle size determination by radioisotope x-ray absorptiometry with sedimentation method].

    Science.gov (United States)

    Matsui, Y; Furuta, T; Miyagawa, S

    1976-09-01

    The possibility of using radioisotope X-ray absorptiometry, in conjunction with sedimentation, to determine the particle size of powders was investigated. The experimental accuracy was primarily determined by C₀'w' and the X-ray intensity, where C₀' is the weight concentration of the particles in the suspension and w' = (μ/ρ)_l/(μ/ρ)_s − ρ_l/ρ_s, with ρ the density, μ/ρ the mass absorption coefficient, and the suffixes l and s indicating the dispersion medium and the particles, respectively. The radioisotopes Fe-55, Pu-238 and Cd-109 have high w' values over a wide range of atomic numbers. However, a source of high μ value such as Fe-55 is not suitable, because the optimal X-ray transmission length L_opt is given by the expression μ_l·L_opt ≈ 2/(1 + C₀'w'). Using a Cd-109 AgK X-ray source, the weight size distribution of particles ranging from heavy-element materials such as PbO2 to light-element materials such as Al2O3 or fly ash was determined.

  18. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-07-07

    Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.

  19. Determinants of Profitability of Food Industry in India: A Size-Wise Analysis

    Directory of Open Access Journals (Sweden)

    Ramachandran Azhagaiah

    2012-01-01

    Full Text Available Profitability is the profit earning capacity, which is a crucial factor in contributing to the survival of firms. This paper is a maiden attempt at estimating the impact of size on profitability, considering the 'size' as the control variable. For this purpose, the selected firms are classified into three size categories as 'small,' 'medium,' and 'large' based on the sales turnover. The results show that volatility and growth are the major predictors in determining profitability in the case of small size firms, while growth is important in determining the profitability of medium size firms. Capital intensity has a significant positive coefficient with the profitability of large size firms. The overall result shows that the larger the size of the firm, the more the investment in long-lived assets has helped to increase the profitability of the firm, unlike the trend in the cases of small size and medium size firms.

  20. Optimal sample preparation for nanoparticle metrology (statistical size measurements) using atomic force microscopy

    International Nuclear Information System (INIS)

    Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.

    2010-01-01

    Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.

  1. System to determine present elements in oily samples

    International Nuclear Information System (INIS)

    Mendoza G, Y.

    2004-11-01

    The Chemistry Department of the National Institute of Nuclear Investigations of Mexico analyzes samples of oleaginous and other materials to determine which elements of the periodic table are present, using the neutron activation analysis (NAA) technique. This technique has been developed to determine major elements in any solid, aqueous, industrial or environmental sample; it consists basically of irradiating a sample with neutrons from the TRIGA Mark III reactor and then analyzing the gamma spectra that the sample emits, and finally processing the information. The quantification step of the analysis is carried out manually, which requires a great quantity of calculations. The main objective of this project is the development of software that performs the quantitative NAA analysis for the multielemental determination of samples automatically. To fulfill this objective the project has been divided into four chapters. The first chapter briefly presents the history of radioactivity and basic concepts that allow a better understanding of this work. The second chapter explains the NAA technique used in the sample analysis, describes the process to be carried out, mentions the characteristics of the devices used, and illustrates the process with an example. The third chapter describes the development of the algorithm and the selection of the programming language. The fourth chapter shows the structure of the system, its general mode of operation, the execution of processes and the retrieval of results. Finally, the results produced during the development of the present project are presented. (Author)

  2. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Roč. 49, č. 4 (2015), s. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords : filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.953, year: 2015

  3. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  4. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has a small sample size limitation. We used a pooled method in the nonparametric bootstrap test that may overcome the problem related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except the Cauchy and extreme-variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
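
    A minimal sketch of a pooled-resampling bootstrap test for two independent means is given below: both samples are pooled to impose the null hypothesis, bootstrap samples of the original sizes are drawn from the pool, and a two-sided p-value is taken from the bootstrap distribution of a Welch-type t statistic. This is a generic illustration of the pooled-resampling idea, not the authors' exact algorithm; the data are hypothetical.

```python
import numpy as np

def t_stat(x, y):
    """Welch-type t statistic for two independent samples."""
    nx, ny = len(x), len(y)
    return (np.mean(x) - np.mean(y)) / np.sqrt(np.var(x, ddof=1) / nx + np.var(y, ddof=1) / ny)

def pooled_bootstrap_test(x, y, n_boot=10_000, seed=0):
    """Two-sided bootstrap p-value using pooled resampling under the null hypothesis."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = t_stat(x, y)
    pooled = np.concatenate([x, y])           # pooling both groups enforces the null
    count = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        if abs(t_stat(bx, by)) >= abs(observed):
            count += 1
    return (count + 1) / (n_boot + 1)         # add-one correction for small samples

x = [2.1, 3.5, 2.8, 4.0, 3.3]
y = [1.2, 1.9, 2.4, 1.7, 2.0, 1.5]
print(pooled_bootstrap_test(x, y))
```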

  5. Lipoplex size is a major determinant of in vitro lipofection efficiency.

    Science.gov (United States)

    Ross, P C; Hui, S W

    1999-04-01

    The inhibitory effect of serum on the transfection efficiency of cationic liposome-DNA complexes (lipoplexes) is a major obstacle to the application of this gene delivery vector both in vitro and in vivo. The size of the lipoplexes, as they are presented to targeted cells, is found to be the major determinant of their effectiveness in transfection. The transfection efficiency and the cell association and uptake of lipoplexes with CHO cells were found to increase with increasing lipoplex size. The influence on the transfection efficiency of lipoplexes of their cationic lipid:DNA ratios, types of liposomes, incubation time in polyanion-containing media, and time of serum addition is mediated mainly through size. Lipoplexes at a 2:1 charge ratio grow in size in media containing polyanions. The size growth may be arrested by adding serum to the incubation media. By using large lipoplexes, especially those made from multilamellar vesicles, the serum inhibition effect may be overcome.

  6. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    Science.gov (United States)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (culture media. PM suspension of lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. Ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum as measured during 24 h.

  7. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  8. Testing of Small Graphite Samples for Nuclear Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Julie Chapman

    2010-11-01

    Accurately determining the mechanical properties of small irradiated samples is crucial to predicting the behavior of the overall irradiated graphite components within a Very High Temperature Reactor. The sample size allowed in a material test reactor, however, is limited, and this poses some difficulties with respect to mechanical testing. In the case of graphite with a larger grain size, a small sample may exhibit characteristics not representative of the bulk material, leading to inaccuracies in the data. A study to determine a potential size effect on the tensile strength was pursued under the Next Generation Nuclear Plant program. It focused first on optimizing the tensile testing procedure identified in the American Society for Testing and Materials (ASTM) Standard C 781-08. Once the testing procedure was verified, a size effect was assessed by gradually reducing the diameter of the specimens. By monitoring the material response, a size effect was successfully identified.

  9. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  10. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  11. Determination of uranium in industrial and environmental samples. Vol. 4

    Energy Technology Data Exchange (ETDEWEB)

    El-Sweify, F H; Shehata, M K; Metwally, E M; El-Shazly, E A.A.; El-Naggar, H A [Nuclear Chemistry Department, Hot Laborities Center, Atomic Energy Authority, Cairo (Egypt)

    1996-03-01

    The phosphate ores used at the Abu Zaabal Fertilizer and Chemical Company for the production of some chemicals and fertilizers contain detectable amounts of uranium. In this study, the uranium content of samples of different fertilizer products, gypsum, and phosphate ore was determined using NAA and gamma-ray spectroscopy of the irradiated samples. Another method, based on measuring the natural radioactivity of the 238U series in non-irradiated samples using gamma-ray spectroscopy, was also used to determine the uranium content of the samples. In the NAA method, the uranium content (ppm) of the samples was computed from the photopeak activity of the 106.1, 228.2, and 277.5 keV lines of 239Np induced in the irradiated samples and in the simultaneously irradiated uranium standard. The gamma-ray spectra and the decay curves are given. In the second method, the gamma-ray spectra of the natural radioactivity of the samples and of the uranium standard were measured; the gamma transitions of energies 295.1 and 251.9 keV for 214Pb and 609.3, 768.4, 1120.3, and 1238.1 keV for 214Bi were determined. Uranium traces in drainage water were also determined spectrophotometrically using arsenazo-III after preconcentration of the uranium from the pretreated drainage water on a column packed with Chelex-100 resin. The recovery was found to be 90 ± 5%. 11 figs., 3 tabs.

  12. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention is paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... to determine how many languages from each phylum should be selected, given any required sample size....

  13. Determinants of the Size of Public Expenditure in Nigeria

    Directory of Open Access Journals (Sweden)

    Ezebuilo Romanus Ukwueze

    2015-12-01

    Full Text Available Analysis of public expenditure constitutes a central issue in public sector economics and the public finance literature. Understanding the reasons for government spending growth has been a central concern of public sector economists, since most economies of the world have consistently had increasing government expenditures; Nigeria is not an exception. There is a need to ascertain the determinants of the size of government expenditure in Nigeria. A short-run error correction model and a long-run static equation were used for comparing the influence of the explanatory variables on the size of government spending; the long-run static equation served as a test to compare short-run dynamics with the long-run relationships. The ordinary least squares (OLS) estimation technique was used. The stationarity tests showed that none of the variables was stationary in level form, but only after first differencing. The results of this study show that the size of revenue, the growth rate of national income (output), and private investment significantly influence the size of public expenditure both in the short run and the long run. External and domestic debts significantly influence the size of government expenditure only in the short run. It is recommended that the revenue base should be expanded, that a conducive environment should be created for private investment to thrive, and that debt accumulation should be reduced and used for stabilization only in the short run. The conclusion to draw from this study is that revenue, private investment, and income boost public spending, while public debts might be counterproductive.

  14. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)

  15. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    Reer, B.

    2004-01-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)

  16. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  17. Method for Determination of Neptunium in Large-Sized Urine Samples Using Manganese Dioxide Coprecipitation and 242Pu as Yield Tracer

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2013-01-01

    A novel method for bioassay of large volumes of human urine samples using manganese dioxide coprecipitation for preconcentration was developed for rapid determination of 237Np. 242Pu was utilized as a nonisotopic tracer to monitor the chemical yield of 237Np. A sequential injection extraction chr...... and rapid analysis of neptunium contamination level for emergency preparedness....

  18. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  19. Quality Control Samples for the Radiological Determination of Tritium in Urine Samples

    International Nuclear Information System (INIS)

    Ost'pezuk, P.; Froning, M.; Laumen, S.; Richert, I.; Hill, P.

    2004-01-01

    The radioactive decay product of tritium is a low-energy beta particle that cannot penetrate the outer dead layer of human skin. Therefore, the main hazard associated with tritium is internal exposure. In addition, due to its relatively long half-life and short biological half-life, tritium must be ingested in large amounts to pose a significant health risk. On the other hand, internal exposure should be kept as low as practical. For incorporation monitoring of professional radiation workers, quality control is of utmost importance. In the Research Centre Juelich GmbH (FZJ), a considerable fraction of the monitoring by excretion analysis relates to the isotope tritium. Usually an aliquot of a urine sample is mixed with a liquid scintillator and measured in a liquid scintillation counter. Quality control samples in the form of three kinds of internal reference samples (a blank, reference samples with low activity, and a reference sample with elevated activity) were prepared from mixed, tritium-free urine. One millilitre of these samples was pipetted into a liquid scintillation vial, and to part of these vials known amounts of tritium were added. All these samples were stored at 20 degrees. Based on long-term use of all these reference samples, it was possible to construct appropriate control charts with upper and lower alarm limits. Daily use of these reference samples significantly decreases the risk of false results in original urine samples, with no significant increase in the determination time. (Author) 2 refs

  20. Determination of size-specific exposure settings in dental cone-beam CT

    International Nuclear Information System (INIS)

    Pauwels, Ruben; Jacobs, Reinhilde; Bogaerts, Ria; Bosmans, Hilde; Panmekiate, Soontra

    2017-01-01

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)

  1. Determination of size-specific exposure settings in dental cone-beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Pauwels, Ruben [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand); University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Jacobs, Reinhilde [University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Bogaerts, Ria [University of Leuven, Laboratory of Experimental Radiotherapy, Department of Oncology, Biomedical Sciences Group, Leuven (Belgium); Bosmans, Hilde [University of Leuven, Medical Physics and Quality Assessment, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Panmekiate, Soontra [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand)

    2017-01-15

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)

  2. Determination of radium-226 in environmental samples

    International Nuclear Information System (INIS)

    Powers, R.P.; Turnage, N.E.; Kanipe, L.G.

    1980-01-01

    The analysis of soil and water samples for 226Ra by gamma spectrometry with a Ge(Li) detector was compared with that by radiochemical separation followed by 222Rn de-emanation. Lower limits of detection (LLD) for 226Ra were calculated for the two analytical techniques. The Ge(Li) system was found to have an LLD for soil comparable to that calculated for the de-emanation procedure, but a significantly higher LLD for water samples. Cost analysis indicated that the cost of 226Ra determination with a Ge(Li) system can be less than with the de-emanation procedure if the Ge(Li) system can perform at least one other isotopic analysis per sample.

  3. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points we use capture-mark-recapture methods both to allow us to account for missing individuals, and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years and size-dependent at early life stages and size independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time replicated datasets with small sample sizes and missing individuals without any loss of sample size, and including explanatory covariates.

  4. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
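
    The sketch below mimics the kind of simulation described in this record: a stratified random sample of 4,000 women is drawn repeatedly from three strata (metropolitan, urban, rural) and the sampling distribution of a population-weighted estimate is examined. The stratum sizes and "true" proportions used here are hypothetical placeholders, chosen only so the totals match the stated target population; they are not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical strata: (population size, assumed proportion with dense breasts)
strata = {"metropolitan": (800_000, 0.55), "urban": (400_000, 0.50), "rural": (140_362, 0.45)}
total_n = 4_000

def one_survey():
    """Draw a proportionally allocated stratified sample and return the weighted estimate."""
    pop_total = sum(size for size, _ in strata.values())
    estimate = 0.0
    for size, p in strata.values():
        n_h = round(total_n * size / pop_total)            # proportional allocation
        sample = rng.binomial(1, p, n_h)                    # simulated survey responses
        estimate += (size / pop_total) * sample.mean()      # population-weighted stratum mean
    return estimate

estimates = [one_survey() for _ in range(1000)]
print(np.mean(estimates), np.std(estimates))                # sampling distribution of the estimate
```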

  5. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for the outcome data is assumed and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results The method attained a nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
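
    A sketch of this type of calculation, under stated assumptions (log-normal outcomes, equal group sizes, normal-approximation sample size formula), is shown below: the median and untransformed-scale variance of each group are converted to a log-scale mean and variance, and a standard two-sample formula is applied on the log scale. It illustrates the idea rather than reproducing the paper's exact derivation; scipy is assumed to be available, and the example numbers are hypothetical.

```python
from math import log, sqrt, ceil
from scipy.stats import norm

def sigma2_from_median_var(median, var):
    """Log-scale variance sigma^2 implied by a median and an untransformed-scale variance,
    assuming the outcome is log-normal (median = exp(mu))."""
    k = var / median ** 2
    u = (1.0 + sqrt(1.0 + 4.0 * k)) / 2.0   # u = exp(sigma^2), from var = (u - 1) * median^2 * u
    return log(u)

def n_per_group(median1, median2, var1, var2, alpha=0.05, power=0.9):
    """Approximate per-group sample size for a two-sample t-test on log-transformed data,
    with the effect specified as a difference in medians on the original scale."""
    s2 = 0.5 * (sigma2_from_median_var(median1, var1) + sigma2_from_median_var(median2, var2))
    delta = log(median1) - log(median2)     # difference in log-scale means
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * s2 * z ** 2 / delta ** 2)

# e.g. hypothetical medians of 10 vs 14 units, untransformed-scale variances of 50 and 80
print(n_per_group(10, 14, 50, 80))
```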

  6. A review of sample preparation and its influence on pH determination in concrete samples

    International Nuclear Information System (INIS)

    Manso, S.; Aguado, A.

    2017-01-01

    If we are to monitor the chemical processes in cementitious materials, then pH assays in the pore solutions of cement pastes, mortars, and concretes are of key importance. However, there is no standard method that regulates the sample-preparation method for pH determination. The state-of-the-art of different methods for pH determination in cementitious materials is presented in this paper and the influence of sample preparation in each case. Moreover, an experimental campaign compares three different techniques for pH determination. Its results contribute to establishing a basic criterion to help researchers select the most suitable method, depending on the purpose of the research. A simple tool is described for selecting the easiest and the most economic pH determination method, depending on the objective; especially for researchers and those with limited experience in this field.

  7. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
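
    The snippet below sketches estimators of this kind for two of the scenarios discussed (minimum/median/maximum and quartiles/median), using normal-quantile-based denominators of the form proposed in this line of work. The exact constants should be verified against the published formulas before use in a meta-analysis; scipy is assumed, and the example inputs are hypothetical.

```python
from scipy.stats import norm

def mean_sd_from_min_med_max(a, m, b, n):
    """Approximate sample mean and SD from the minimum, median, maximum and sample size."""
    mean = (a + 2 * m + b) / 4.0
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_quartiles(q1, m, q3, n):
    """Approximate sample mean and SD from the first quartile, median, third quartile
    and sample size."""
    mean = (q1 + m + q3) / 3.0
    sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd

print(mean_sd_from_min_med_max(a=2.0, m=7.5, b=18.0, n=45))
print(mean_sd_from_quartiles(q1=5.0, m=7.5, q3=10.5, n=45))
```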

  8. Determinants of family size in a Gulf Arab state: a comparison between two areas.

    Science.gov (United States)

    Hamadeh, Randah R; Al-Roomi, Khaldoon; Masuadi, Emad

    2008-09-01

    The rapid economic transition in the Gulf Arab countries has resulted in marked changes in fertility and marriage patterns and a decrease in the number of children per family. Yet little is known about the determinants of family size in urban and less urban areas. A cross-sectional study was carried out on 450 Kuwaiti women aged 20-60 years who attended health care centres in Al Asima and Al Jahra governorates. A semi-structured questionnaire was administered through face-to-face interview which included variables on socio-demographic characteristics, family size, actual and ideal spacing, marriage related variables, health conditions and utilization of health services. Both univariate and multivariate analyses were performed to identify the factors that affect family size. The socio-economic indicators were significantly better in Al Asima, the capital, than in Al Jahra, a less urbanized area. On average, family size for the total sample was 5.97 +/- 0.114 with a larger size (6.27 +/- 0.242) in Al Jahra than in Al Asima (5.80 +/- 0.118) but without a significant difference. Al Jahra women reported a larger number of deliveries and past pregnancies but a lower usage of contraceptive measures. The total fertility rate was 3.65 in Al Asima, 3.84 in Al Jahra and 3.71 births per woman in the total population. Family size was inversely related to the educational level of women and their husbands. Currently employed women had a smaller family size (5.22 +/- 0.119) than the unemployed (6.81 +/- 0.187); p Families where the husband was the decision-maker on the number of children had a significantly larger family size (6.91 +/- 0.451) than families where the couple both participated in the decision (5.83 +/- 0.129; p = 0.032). The duration of marriage, ideal number of children, age of women at last delivery, number of rooms and the crowding index had significant positive effects on family size, whereas age at first delivery, duration between two consecutive pregnancies and

  9. In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size

    Directory of Open Access Journals (Sweden)

    Stefano Schiavon

    2010-01-01

    Full Text Available In vitro degradability with the DaisyII (D) equipment is commonly performed with 0.5 g of feed sample in each filter bag. The literature reports that a reduction of the ratio of sample size to bag surface could facilitate the release of soluble or fine particulate matter. A reduction of sample size to 0.25 g could improve the correlation between the measurements provided by D and the conventional batch culture (BC). This hypothesis was screened by analysing the results of 2 trials. In trial 1, 7 feeds were incubated for 48 h with rumen fluid (3 runs x 4 replications) both with D (0.5 g/bag) and BC; the regressions between the mean values provided for the various feeds in each run by the 2 methods, for both NDF (NDFd) and in vitro true DM (IVTDMD) degradability, had R2 of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs x 8 replications) with D (0.25 g/bag) and BC; the corresponding regressions for NDFd and IVTDMD showed R2 of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.

  10. Determination of mercury in biological samples by radiochemical neutron activation analysis

    International Nuclear Information System (INIS)

    Suc, N.V.

    1989-01-01

    Radiochemical neutron activation analysis was applied to determine the mercury content of biological samples. Samples were digested in a mixture of H2SO4 and HNO3 acids. After extraction of mercury with Ni-ditiodietylphosphoric acid in carbon tetrachloride, mercury was back-extracted with 5% KI solution. The mercury content of five fish samples was determined by this method. The accuracy of the method was checked by comparison with NBS standard samples, and the results are in good agreement.

  11. Algorithm of Data Reduce in Determination of Aerosol Particle Size Distribution at Damps/C

    International Nuclear Information System (INIS)

    Muhammad-Priyatna; Otto-Pribadi-Ruslanto

    2001-01-01

    An algorithm for data reduction was developed for the Damps/C (Differential Mobility Particle Sizer with Condensation Particle Counter) system, which determines aerosol particle size distributions in the 0.01 μm to 1 μm diameter range. The Damps/C system consists of hardware and software: the hardware is used to determine the electrical mobilities of the aerosol particles, and the software converts these into a particle size distribution by diameter. Particle mobility and particle diameter are connected through the electric field, and this relation is the basis of the program for data reduction and for the conversion of particle mobility into particle diameter. The analysis gives a transfer function value, Ω, of 0.5. The data reduction program performs the conversion from the mobility basis to the diameter basis with corrections for counting efficiency, the transfer function value, and multiply charged particles. (author)
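
    For a singly charged particle, the mobility-to-diameter conversion that such a data-reduction program performs can be sketched as below, using commonly quoted constants for air at room conditions and the Cunningham slip correction. The multiple-charge correction and the DMA transfer function are omitted, and the constants are assumptions rather than values from this record.

```python
import math

E = 1.602e-19       # elementary charge, C
MU = 1.81e-5        # dynamic viscosity of air, Pa*s (approximate, room conditions)
MFP = 66e-9         # mean free path of air molecules, m (approximate)

def slip_correction(d):
    """Cunningham slip correction factor for a particle of diameter d (m)."""
    kn = 2 * MFP / d
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def mobility(d, n_charges=1):
    """Electrical mobility Z = n e Cc(d) / (3 pi mu d)."""
    return n_charges * E * slip_correction(d) / (3 * math.pi * MU * d)

def mobility_to_diameter(z, n_charges=1, d_lo=1e-9, d_hi=1e-6, iters=100):
    """Invert the mobility relation for diameter by bisection
    (mobility decreases monotonically with diameter)."""
    for _ in range(iters):
        mid = 0.5 * (d_lo + d_hi)
        if mobility(mid, n_charges) > z:
            d_lo = mid          # true diameter is larger than mid
        else:
            d_hi = mid
    return 0.5 * (d_lo + d_hi)

# example: a singly charged particle with Z = 1e-7 m^2 V^-1 s^-1 is a few tens of nm
print(mobility_to_diameter(1e-7) * 1e9, "nm")
```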

  12. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the maximum likelihood estimation (MLE) method. However, MLE has limitations if the binary data contain separation. Separation is the condition in which one or several independent variables exactly split the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in a binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both aims are addressed using simulation under different sample sizes. The results show that the chance of separation occurring with the MLE method is higher than with Firth's approach for small sample sizes; for larger sample sizes, the probability decreases and is relatively similar for the two approaches. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes, but for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
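
    As a minimal illustration of what separation means for a single continuous predictor, the check below tests whether all predictor values in one response category lie strictly beyond those in the other; in that situation the probit MLE slope diverges. This does not implement Firth's penalized likelihood itself, and the data are hypothetical.

```python
import numpy as np

def detect_complete_separation(x, y):
    """Check whether a single continuous predictor completely separates a binary response:
    all x for y=0 lie strictly below (or strictly above) all x for y=1."""
    x, y = np.asarray(x, float), np.asarray(y)
    x0, x1 = x[y == 0], x[y == 1]
    return bool(x0.max() < x1.min() or x1.max() < x0.min())

# separated toy data: the probit (or logit) MLE slope diverges here; a Firth-type penalty does not
x = [0.2, 0.5, 0.9, 1.4, 2.1, 2.7, 3.3, 3.8]
y = [0,   0,   0,   0,   1,   1,   1,   1  ]
print(detect_complete_separation(x, y))   # True
```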

  13. De determination for young samples using the standardised OSL response of coarse-grain quartz

    International Nuclear Information System (INIS)

    Burbidge, C.I.; Duller, G.A.T.; Roberts, H.M.

    2006-01-01

    It has recently been shown that it is possible to construct standardised curves of the sensitivity-corrected growth in optically stimulated luminescence (OSL) with exposure to ionising radiation, and that they may be used in the dating of quartz and polymineral samples. Standardised growth curves are particularly advantageous where measurement time is limited, as once they have been defined, only the natural signal and the response to a subsequent test dose are required in order to determine the equivalent dose of a sub-sample. The present study is concerned with the application of the standardised growth curve approach to OSL dating of Holocene age samples. Systematic changes in the shape of the standardised growth curve of coarse-grain quartz are identified as the size of the test dose is varied, because of non-proportionality between the test dose and the luminescence test response. The effect is characterised by fitting the change in gradient of the standardised growth curve as the test dose is varied. An equation is defined to describe standardised growth as a function of regenerative dose and test dose. Regenerative dose responses of other samples in this study are treated as unknowns and recovered through different growth curves to compare the precision and accuracy of various methods of De determination. The standardised growth curve is found to yield similar precision to conventional fits of single-aliquot regenerative data, but slightly poorer accuracy. The standardised growth curve approach was refined by incorporating the measurement of one regenerative response for each aliquot as well as its natural signal. Measurements of this additional data point for aliquots of 22 samples were used to adjust the standardised growth equation, improving its accuracy. The incorporation of this additional data point also indicated a systematic uncertainty of 2.4% in the estimates of De.
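
    A schematic of how an equivalent dose can be read off a standardised growth curve is sketched below: a saturating-exponential curve is fitted to standardised regenerative responses and the natural standardised signal of an unknown sub-sample is interpolated onto it. The dose points, signals, and curve form are illustrative assumptions, not data or the fitting function from this study; scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def sat_exp(dose, a, b):
    """Saturating-exponential growth: standardised OSL signal as a function of dose."""
    return a * (1 - np.exp(-dose / b))

# hypothetical standardised (Lx/Tx) regenerative responses pooled from several aliquots
doses   = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0])
signals = np.array([0.0, 0.9, 2.0, 3.4, 5.0, 6.2])
params, _ = curve_fit(sat_exp, doses, signals, p0=[7.0, 15.0])

natural = 2.6   # standardised natural signal of an unknown sub-sample (hypothetical)
de = brentq(lambda d: sat_exp(d, *params) - natural, 0.0, 100.0)
print(round(de, 2))   # interpolated equivalent dose, De
```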

  14. Determination of sampling constants in NBS geochemical standard reference materials

    International Nuclear Information System (INIS)

    Filby, R.H.; Bragg, A.E.; Grimm, C.A.

    1986-01-01

    Recently Filby et al. showed that, for several elements, National Bureau of Standards (NBS) Fly Ash standard reference material (SRM) 1633a was a suitable reference material for microanalysis. However, the sampling constant Ks relating the relative subsampling standard deviation, Ss (in %), and the mean sample weight, W̄, through Ks = (Ss%)²·W̄, could not be determined from these data because it was not possible to quantitate other sources of error in the experimental variances. Ks values for certified elements in geochemical SRMs provide important homogeneity information for microanalysis. For mineralogically homogeneous SRMs (i.e., small Ks values for associated elements) such as the proposed clays, it is necessary to determine Ks by analysis of very small sample aliquots to maximize the subsampling variance relative to other sources of error. This source of error and the blank correction for the sample container can be eliminated by determining Ks from radionuclide activities of weighed subsamples of a preirradiated SRM
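
    For orientation, the sketch below computes a sampling constant of the form Ks = (Ss%)²·W̄ from replicate subsample measurements. The numbers are hypothetical, the formula is the conventional Ingamells-type definition implied by the abstract rather than a prescription from the paper, and the analytical-variance correction the paper is concerned with is ignored.

```python
# Hypothetical replicate subsample results (element concentration, ppm) and
# subsample weights (g); Ks = (relative std. dev. in %)**2 * mean subsample weight.
import numpy as np

conc = np.array([41.2, 39.8, 44.1, 37.5, 42.6])          # replicate concentrations (ppm)
weights = np.array([0.010, 0.011, 0.009, 0.010, 0.012])  # subsample weights (g)

rel_sd_pct = 100.0 * conc.std(ddof=1) / conc.mean()  # relative sampling std. dev., %
w_bar = weights.mean()                               # mean subsample weight, g

ks = rel_sd_pct**2 * w_bar
print(f"Ss = {rel_sd_pct:.1f} %, mean weight = {w_bar*1000:.1f} mg, Ks = {ks:.3f} g")
```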

  15. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...

  16. Determination of strontium-90 in soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Chang, C C

    1976-06-01

    The determination of 90Sr in soil by tri-n-butyl phosphate (TBP) extraction often suffers interference from iron, which is always present in soil samples. Based on the method given by the U.S. Environmental Protection Agency, HClO4 is added to remove iron ions while the soil sample is analyzed with TBP. The effect of different concentrations of HClO4 on the extraction yield of iron and the chemical yield of yttrium is investigated. The experimental results show that 2N HClO4 is the optimum concentration: the chemical yield of yttrium reaches about 60 percent, and all iron ions are removed. The method has been successfully applied to soil samples taken from the site of the nuclear power plant in northern Taiwan.

  17. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account meteorological persistence, or temporal behavior, thereby identifying the memory of the analyzed process. This article presents the concept of the equivalent sample size, which helps to identify sub-periods of the data series with a similar structure. We also examine the alternative of adjusting the variance of a series while taking its temporal structure into account, as well as an adjustment to the covariance of two time series. Two examples are presented: the first corresponds to seven simulated series with first-order autoregressive structure, and the second to seven meteorological series of surface air temperature anomalies in two Colombian regions
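
    As a concrete illustration of the equivalent-sample-size idea, the sketch below uses the standard first-order autoregressive adjustment n_eff = n(1 − ρ1)/(1 + ρ1), where ρ1 is the lag-1 autocorrelation. This is the textbook AR(1) result, not necessarily the exact adjustment used in the article, and the simulated series is purely illustrative.

```python
# Effective (equivalent) sample size of an AR(1) series: n_eff = n * (1 - rho1) / (1 + rho1).
import numpy as np

rng = np.random.default_rng(0)

n, phi = 500, 0.6
x = np.zeros(n)
for t in range(1, n):                    # simulate a first-order autoregressive process
    x[t] = phi * x[t - 1] + rng.normal()

rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]  # estimated lag-1 autocorrelation
n_eff = n * (1 - rho1) / (1 + rho1)
print(f"lag-1 autocorrelation = {rho1:.2f}, equivalent sample size ~ {n_eff:.0f} of {n}")
```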

  18. Habitat structure and body size distributions: Cross-ecosystem comparison for taxa with determinate and indeterminate growth

    Science.gov (United States)

    Nash, Kirsty L.; Allen, Craig R.; Barichievy, Chris; Nystrom, Magnus; Sundstrom, Shana M.; Graham, Nicholas A.J.

    2014-01-01

    Habitat structure across multiple spatial and temporal scales has been proposed as a key driver of body size distributions for associated communities. Thus, understanding the relationship between habitat and body size is fundamental to developing predictions regarding the influence of habitat change on animal communities. Much of the work assessing the relationship between habitat structure and body size distributions has focused on terrestrial taxa with determinate growth, and has primarily analysed discontinuities (gaps) in the distribution of species mean sizes (species size relationships or SSRs). The suitability of this approach for taxa with indeterminate growth has yet to be determined. We provide a cross-ecosystem comparison of bird (determinate growth) and fish (indeterminate growth) body mass distributions using four independent data sets. We evaluate three size distribution indices: SSRs, species size–density relationships (SSDRs) and individual size–density relationships (ISDRs), and two types of analysis: looking for either discontinuities or abundance patterns and multi-modality in the distributions. To assess the respective suitability of these three indices and two analytical approaches for understanding habitat–size relationships in different ecosystems, we compare their ability to differentiate bird or fish communities found within contrasting habitat conditions. All three indices of body size distribution are useful for examining the relationship between cross-scale patterns of habitat structure and size for species with determinate growth, such as birds. In contrast, for species with indeterminate growth such as fish, the relationship between habitat structure and body size may be masked when using mean summary metrics, and thus individual-level data (ISDRs) are more useful. Furthermore, ISDRs, which have traditionally been used to study aquatic systems, present a potentially useful common currency for comparing body size distributions

  19. Determining the number of samples required for decisions concerning remedial actions at hazardous waste sites

    International Nuclear Information System (INIS)

    Skiles, J.L.; Redfearn, A.; White, R.K.

    1991-01-01

    The process of collecting, analyzing, and assessing the data needed to make decisions concerning the cleanup of hazardous waste sites is quite complex and often very expensive, owing to the many elements that must be considered during remedial investigations. The decision maker must have sufficient data to determine the potential risks to human health and the environment and to verify compliance with regulatory requirements, given the resources allocated for a site and the time constraints specified for completion of the decision-making process. It is desirable to simplify the remedial investigation procedure as much as possible to conserve both time and resources while simultaneously minimizing the probability of error associated with each decision to be made. With this in mind, it is necessary to have a practical and statistically valid technique for estimating the number of on-site samples required to ''guarantee'' that the correct decisions are made with a specified precision and confidence level. Here, we examine existing methodologies and then develop our own approach for determining a statistically defensible sample size based on specific guidelines that have been established for the risk assessment process
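
    The paper's own decision framework is not reproduced in the abstract. As a generic point of reference, the sketch below computes the classical number of samples needed to estimate a site mean concentration to within a margin E at a given confidence level, n = (z(1−α/2)·σ / E)². The variability estimate and margin used here are hypothetical.

```python
# Classical sample-size formula for estimating a mean to within +/- E
# with confidence 1 - alpha:  n = (z_{1-alpha/2} * sigma / E)**2.
import math
from scipy.stats import norm

sigma = 12.0   # assumed standard deviation of contaminant concentration (mg/kg)
E = 5.0        # acceptable margin of error (mg/kg)
alpha = 0.05   # 95% confidence

z = norm.ppf(1 - alpha / 2)
n = math.ceil((z * sigma / E) ** 2)
print(f"required number of on-site samples: {n}")
```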

  20. A review of sample preparation and its influence on pH determination in concrete samples

    Directory of Open Access Journals (Sweden)

    S. Manso

    2017-01-01

    Full Text Available If we are to monitor the chemical processes in cementitious materials, then pH assays in the pore solutions of cement pastes, mortars, and concretes are of key importance. However, there is no standard method that regulates sample preparation for pH determination. This paper presents the state of the art of different methods for pH determination in cementitious materials and the influence of sample preparation in each case. Moreover, an experimental campaign compares three different techniques for pH determination. Its results contribute to establishing a basic criterion to help researchers select the most suitable method, depending on the purpose of the research. A simple tool is described for selecting the easiest and most economic pH determination method, depending on the objective, especially for researchers and those with limited experience in this field.

  1. Flaw-size measurement in a weld sample by ultrasonic frequency analysis

    International Nuclear Information System (INIS)

    Adler, L.; Cook, K.V.; Whaley, H.L. Jr.; McClung, R.W.

    1975-01-01

    An ultrasonic frequency-analysis technique was developed and applied to characterize flaws in an 8-in. (203-mm) thick heavy-section steel weld specimen. The technique employs a multitransducer system. The spectrum of the received broad-band signal is frequency analyzed at two different receivers for each flaw. From the two spectra, the size and orientation of the flaw are determined using an analytic model proposed earlier. (auth)

  2. A study on aluminum determination in environmental samples by Neutron Activation Analysis

    International Nuclear Information System (INIS)

    Noyori, Amanda

    2017-01-01

    Aluminum determinations are of great interest since this element is toxic to humans and is widely distributed in the environment. Moreover, determination of this element by conventional analytical methods presents difficulties due to sample contamination during the analyses. Neutron activation analysis (NAA) for Al determination offers the advantages of fast analysis and high sensitivity. However, NAA of Al suffers from nuclear reaction interferences from P and Si. Aluminum is determined by measuring 28Al, formed in the reaction 27Al(n, γ)28Al, the same radioisotope formed in the reactions 31P(n, α)28Al and 28Si(n, p)28Al. The purpose of this study was to determine Al in environmental samples by NAA, correcting these interferences using correction factors and determining the P and Si concentrations in the samples. Certified reference materials and biomonitor samples (tree barks and lichen) were analyzed. The experimental procedure consisted of irradiating an aliquot of the sample at the IEA-R1 nuclear research reactor together with an Al standard, followed by gamma-ray spectrometry. Phosphorus was determined by measuring the beta radiation of 32P using a Geiger-Müller counter. Silicon was determined by epithermal neutron activation analysis, measuring the 29Al formed in the reaction 29Si(n, p)29Al. Results obtained in the determination of Al, P and Si in the certified reference materials showed good precision and accuracy, with |Z-score| ≤ 2. Aluminum results in the biomonitor samples varied from 253 to 15783 μg g⁻¹, P concentrations varied from 283 to 1946 μg g⁻¹, and Si determinations varied from 0.11 to 7.8%. The interference contribution rates in the analyses of the biomonitor samples were of the order of 2.0%, and this contribution depends on the ratio between the concentrations of the interfering elements and of Al in the sample. Detection limit values of Al in the biomonitor analyses

  3. Determination of technetium-99 in environmental samples: A review

    International Nuclear Information System (INIS)

    Shi Keliang; Hou Xiaolin; Roos, Per; Wu Wangsuo

    2012-01-01

    Highlights: ► The source term, physicochemical properties, environmental distribution and behaviour of 99Tc are presented. ► Various sample pre-treatment and pre-concentration techniques for technetium are discussed. ► Chemical separation and purification techniques for 99Tc in environmental samples are reviewed. ► Measurement techniques for 99Tc at environmental levels and automated analytical methods are reviewed. ► The reported analytical methods for 99Tc are critically compared to provide overall information. - Abstract: Due to the lack of a stable technetium isotope, its high mobility and its long half-life, 99Tc is considered one of the most important radionuclides in the safety assessment of environmental radioactivity as well as in nuclear waste management. 99Tc is also an important tracer for oceanographic research due to the high solubility of technetium in seawater as TcO4−. A number of analytical methods, using chemical separation combined with radiometric and mass spectrometric measurement techniques, have been developed over the past decades for the determination of 99Tc in different environmental samples. This article summarizes and compares recently reported chemical separation procedures and measurement methods for the determination of 99Tc. Due to the extremely low concentration of 99Tc in environmental samples, the sample preparation, pre-concentration, chemical separation and purification steps needed to remove interferences are the most important issues governing the accurate determination of 99Tc; these aspects are discussed in detail in this article. The different measurement techniques for 99Tc are also compared with respect to their advantages and drawbacks. Novel automated analytical methods for rapid determination of 99Tc, using solid extraction or ion exchange chromatography for separation and employing flow injection or sequential injection approaches, are also discussed.

  4. Functional size of photosynthetic electron transport chain determined by radiation inactivation

    International Nuclear Information System (INIS)

    Pan, R.S.; Chen, L.F.; Wang, M.Y.; Tsal, M.Y.; Pan, R.L.; Hsu, B.D.

    1987-01-01

    The radiation inactivation technique was employed to determine the functional size of the photosynthetic electron transport chain of spinach chloroplasts. The functional size for photosystem I+II (H2O to methylviologen) was 623 +/- 37 kilodaltons; for photosystem II (H2O to dimethylquinone/ferricyanide), 174 +/- 11 kilodaltons; and for photosystem I (reduced diaminodurene to methylviologen), 190 +/- 11 kilodaltons. The difference between 364 +/- 22 kilodaltons (the sum of 174 +/- 11 and 190 +/- 11) and 623 +/- 37 kilodaltons is partially explained by the presence of two molecules of the cytochrome b6/f complex of 280 kilodaltons. The molecular mass for other partial reactions of photosynthetic electron flow, also measured by radiation inactivation, is reported. The molecular mass obtained by this technique is compared with that determined by other conventional biochemical methods. A working hypothesis for the composition, stoichiometry, and organization of the polypeptides of the photosynthetic electron transport chain is proposed

  5. Determination of platinum in biological samples by NAA

    International Nuclear Information System (INIS)

    Okada, Yukiko; Hirai, Shoji; Sakurai, Hiromu; Haraguchi, Hiroki.

    1990-01-01

    Recently, a Pt compound, Cisplatin (cis-dichlorodiamine platinum), has been used therapeutically as an effective drug against malignant cancers. However, since this drug has harmful side effects on the kidney, studies of how to reduce its toxicity without compromising the therapeutic effect are urgently needed. The behavior of Pt in biological organs must be understood in order to elucidate the mechanism of this toxicity reduction. In this study, the analytical conditions for the determination of Pt in biological samples by neutron activation analysis, such as cooling time, counting time and sample weight, are optimized. Freeze-dried samples of the liver, kidney and whole blood of a rat treated with Cisplatin were prepared to evaluate the precision of the analysis and the lower limit of determination. 199Au (t1/2 = 3.15 d), produced from 199Pt (n, γ, β−), was selected as the analytical radionuclide. A concentration of ca. 1 ppm Pt was determinable under the optimal conditions: a cooling time of 5 d and a counting time of 1 h. Pt in the respective organs of the control rat was not detected under the same analytical conditions. The concentrations of Pt in the liver, kidney, spleen, pancreas and lung of a rat treated with both Cisplatin and sodium selenite were higher than those of a rat treated only with Cisplatin. (author)

  6. What Makes Jessica Rabbit Sexy? Contrasting Roles of Waist and Hip Size

    Directory of Open Access Journals (Sweden)

    William D. Lassek

    2016-04-01

    Full Text Available While waist/hip ratio (WHR) and body mass index (BMI) have been the most studied putative determinants of female bodily attractiveness, BMI is not directly observable, and few studies have considered the independent roles of waist and hip size. The range of attractiveness in many studies is also quite limited, with none of the stimuli rated as highly attractive. To explore the relationships of these anthropometric parameters with attractiveness across a much broader spectrum of attractiveness, we employ three quite different samples: a large sample of college women, a larger sample of Playboy Playmates of the Month than has previously been examined, and a large pool of imaginary women (e.g., cartoon, video game, and graphic novel characters) chosen as the “most attractive” by university students. Within-sample and between-sample comparisons agree in indicating that waist size is the key determinant of female bodily attractiveness and accounts for the relationship of both BMI and WHR with attractiveness, with between-sample effect sizes of 2.4–3.2. In contrast, hip size is much more similar across attractiveness groups and is unrelated to attractiveness when BMI or waist size is controlled.

  7. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
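
    The closed-form sample-size formulae themselves are given in the paper rather than the abstract. The small sketch below only illustrates the kind of dispersion back-calculation the abstract mentions, using the standard NB2 parameterisation Var = μ + k·μ²; the summary values are hypothetical.

```python
# Back-calculate the negative binomial (NB2) dispersion parameter k from a published
# mean and variance of event counts, using Var = mu + k * mu**2.
# The summary numbers below are hypothetical.
mean_count = 1.8   # published mean number of events per subject
var_count = 4.1    # published variance of event counts

k = (var_count - mean_count) / mean_count**2
print(f"estimated dispersion parameter k = {k:.2f}")
```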

  8. Carpel size, grain filling, and morphology determine individual grain weight in wheat

    OpenAIRE

    Xie, Quan; Mayes, Sean; Sparkes, Debbie L.

    2015-01-01

    Individual grain weight is a major yield component in wheat. To provide a comprehensive understanding of grain weight determination, the carpel size at anthesis, grain dry matter accumulation, grain water uptake and loss, grain morphological expansion, and final grain weight at different positions within spikelets were investigated in a recombinant inbred line mapping population of bread wheat (Triticum aestivum L.)?spelt (Triticum spelta L.). Carpel size, grain dry matter and water accumulat...

  9. Determination of size distribution function

    International Nuclear Information System (INIS)

    Teshome, A.; Spartakove, A.

    1987-05-01

    The theory of a method is outlined which gives the size distribution function (SDF) of a polydispersed system of non-interacting colloidal and microscopic spherical particles, having sizes in the range 0–10⁻⁵ cm, from a gedanken experimental scheme. It is assumed that the SDF is differentiable, and the result is obtained for rotational frequencies of the order of 10³ s⁻¹. The method may be used independently, but is particularly useful in conjunction with an alternate method described in a preceding paper. (author). 8 refs, 2 figs

  10. The influence of tube voltage and phantom size in computed tomography on the dose-response relationship of dicentrics in human blood samples

    International Nuclear Information System (INIS)

    Jost, G; Pietsch, H; Lengsfeld, P; Voth, M; Schmid, E

    2010-01-01

    The aim of this study was to investigate the dose response relationship of dicentrics in human lymphocytes after CT scans at tube voltages of 80 and 140 kV. Blood samples from a healthy donor placed in tissue equivalent abdomen phantoms of standard, pediatric and adipose sizes were exposed at dose levels up to 0.1 Gy using a 64-slice CT scanner. It was found that both the tube voltage and the phantom size significantly influenced the CT scan-induced linear dose-response relationship of dicentrics in human lymphocytes. Using the same phantom (standard abdomen), 80 kV CT x-rays were biologically more effective than 140 kV CT x-rays. However, it could also be determined that the applied phantom size had much more influence on the biological effectiveness. Obviously, the increasing slopes of the CT scan-induced dose response relationships of dicentrics in human lymphocytes obtained in a pediatric, a standard and an adipose abdomen have been induced by scattering effects of photons, which strongly increase with increasing phantom size.

  11. The influence of tube voltage and phantom size in computed tomography on the dose-response relationship of dicentrics in human blood samples

    Energy Technology Data Exchange (ETDEWEB)

    Jost, G; Pietsch, H [TRG Diagnostic Imaging, Bayer Schering Pharma AG, Berlin (Germany); Lengsfeld, P; Voth, M [Global Medical Affairs Diagnostic Imaging, Bayer Schering Pharma AG, Berlin (Germany); Schmid, E, E-mail: Ernst.Schmid@lrz.uni-muenchen.d [Institute for Cell Biology, Center for Integrated Protein Science, University of Munich (Germany)

    2010-06-07

    The aim of this study was to investigate the dose response relationship of dicentrics in human lymphocytes after CT scans at tube voltages of 80 and 140 kV. Blood samples from a healthy donor placed in tissue equivalent abdomen phantoms of standard, pediatric and adipose sizes were exposed at dose levels up to 0.1 Gy using a 64-slice CT scanner. It was found that both the tube voltage and the phantom size significantly influenced the CT scan-induced linear dose-response relationship of dicentrics in human lymphocytes. Using the same phantom (standard abdomen), 80 kV CT x-rays were biologically more effective than 140 kV CT x-rays. However, it could also be determined that the applied phantom size had much more influence on the biological effectiveness. Obviously, the increasing slopes of the CT scan-induced dose response relationships of dicentrics in human lymphocytes obtained in a pediatric, a standard and an adipose abdomen have been induced by scattering effects of photons, which strongly increase with increasing phantom size.

  12. Automatic particle-size analysis of HTGR recycle fuel

    International Nuclear Information System (INIS)

    Mack, J.E.; Pechin, W.H.

    1977-09-01

    An automatic particle-size analyzer was designed, fabricated, tested, and put into operation measuring and counting HTGR recycle fuel particles. The particle-size analyzer can be used for particles in all stages of fabrication, from the loaded, uncarbonized weak acid resin up to fully-coated Biso or Triso particles. The device handles microspheres in the range of 300 to 1000 μm at rates up to 2000 per minute, measuring the diameter of each particle to determine the size distribution of the sample, and simultaneously determining the total number of particles. 10 figures

  13. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    Science.gov (United States)

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD(sampling) in the same region as the RSD(analysis) for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit

  14. Sample-size determination and adherence in randomised controlled ...

    African Journals Online (AJOL)

    Southern African Journal of Anaesthesia and Analgesia is co-published by NISC ... Care and Pain Management, University of KwaZulu-Natal, Pietermaritzburg, South Africa ..... sciatic nerve block on unplanned postoperative visits and readmissions .... brachial plexus block: single versus triple injection technique for upper.

  15. Using the modified sample entropy to detect determinism

    Energy Technology Data Exchange (ETDEWEB)

    Xie Hongbo, E-mail: xiehb@sjtu.or [Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Department of Biomedical Engineering, Jiangsu University, Zhenjiang (China); Guo Jingyi [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Zheng Yongping, E-mail: ypzheng@ieee.or [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Reseach Institute of Innovative Products and Technologies, Hong Kong Polytechnic University (Hong Kong)

    2010-08-23

    A modified sample entropy (mSampEn), based on the nonlinear continuous and convex function, has been proposed and proven to be superior to the standard sample entropy (SampEn) in several aspects. In this Letter, we empirically investigate the ability of the mSampEn statistic combined with surrogate data method to detect determinism. The effects of the datasets length and noise on the proposed method to differentiate between deterministic and stochastic dynamics are tested on several benchmark time series. The noise performance of the mSampEn statistic is also compared with the singular value decomposition (SVD) and symplectic geometry spectrum (SGS) based methods. The results indicate that the mSampEn statistic is a robust index for detecting determinism in short and noisy time series.

  16. Preconcentration NAA for simultaneous multielemental determination in water sample

    International Nuclear Information System (INIS)

    Chatt, A.

    1999-01-01

    Full text: The environment comprises water, air and land and their interrelationships with human beings, fauna and flora. One of the important environmental compartments is water. Elements present in water may be subject to a wide range of physico-chemical conditions, which poses challenges for measuring their total concentrations as well as their different species. Preconcentration of the elements present in water samples is a necessary requisite in water analysis. For multielement concentration measurements, neutron activation analysis (NAA) is one of the preferred analytical techniques due to its sensitivity and selectivity. In this talk, preconcentration NAA for multielemental determination in water samples will be discussed

  17. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
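
    For readers unfamiliar with the incidence-based estimators compared here, the sketch below implements two of them (Chao2 and the first-order jackknife) from a plot-by-species incidence matrix. The data are hypothetical, and the small-sample correction terms used by dedicated software are omitted.

```python
# Incidence-based richness estimators from a plots x species presence/absence matrix:
#   Chao2 = S_obs + Q1^2 / (2 * Q2)        (classic form; assumes Q2 > 0)
#   Jack1 = S_obs + Q1 * (m - 1) / m
# where Q1 = species found in exactly one plot, Q2 = species found in exactly two plots.
import numpy as np

rng = np.random.default_rng(3)
m, s = 50, 120                                                    # hypothetical plots and species pool
incidence = rng.random((m, s)) < rng.uniform(0.01, 0.3, size=s)   # uneven detectabilities

plots_per_species = incidence.sum(axis=0)
s_obs = int((plots_per_species > 0).sum())
q1 = int((plots_per_species == 1).sum())
q2 = int((plots_per_species == 2).sum())

chao2 = s_obs + q1**2 / (2 * q2) if q2 > 0 else float("nan")
jack1 = s_obs + q1 * (m - 1) / m
print(f"S_obs = {s_obs}, Chao2 = {chao2:.1f}, Jack1 = {jack1:.1f}")
```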

  18. Rapid and Automated Determination of Plutonium and Neptunium in Environmental Samples

    DEFF Research Database (Denmark)

    Qiao, Jixin

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development...... and optimization for rapid determination of plutonium in environmental samples using SI-extraction chromatography prior to inductively coupled plasma mass spectrometry (Paper III); (3) Development of an SI-chromatographic method for simultaneous determination of plutonium and neptunium in environmental samples...... for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the developed methods in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations including...

  19. External Determinants of the Development of Small and Medium-Sized Enterprises – Empirical Analysis

    Directory of Open Access Journals (Sweden)

    Renata Lisowska

    2015-01-01

    Full Text Available The paper aims to identify external determinants of the development of small and medium-sized enterprises and assess their impact on the functioning of these entities in Poland. Meeting this objective required: identifying determinants of the development of SMEs, determining the current development situation of the surveyed enterprises and examining the impact of external determinants on the development of SMEs. The implementation of the above-presented goals was based on the following assumptions: (i) the current situation of the surveyed enterprises is determined with the use of quantitative indicators (turnover volume, number of employees, market share, profit levels); (ii) the analysis of external determinants encompasses three components of the environment: the macro-environment, the meso-environment and the micro-environment; (iii) in each analysed area there are separate analyses conducted for micro, small and medium-sized enterprises, enabling greater precision in the identification of external determinants of development for each category of businesses.

  20. On-capillary sample cleanup method for the electrophoretic determination of carbohydrates in juice samples.

    Science.gov (United States)

    Morales-Cid, Gabriel; Simonet, Bartolomé M; Cárdenas, Soledad; Valcárcel, Miguel

    2007-05-01

    On many occasions, sample treatment is a critical step in electrophoretic analysis. As an alternative to batch procedures, in this work, a new strategy is presented with a view to develop an on-capillary sample cleanup method. This strategy is based on the partial filling of the capillary with carboxylated single-walled carbon nanotube (c-SWNT). The nanoparticles retain interferences from the matrix allowing the determination and quantification of carbohydrates (viz glucose, maltose and fructose). The precision of the method for the analysis of real samples ranged from 5.3 to 6.4%. The proposed method was compared with a method based on a batch filtration of the juice sample through diatomaceous earth and further electrophoretic determination. This method was also validated in this work. The RSD for this other method ranged from 5.1 to 6%. The results obtained by both methods were statistically comparable demonstrating the accuracy of the proposed methods and their effectiveness. Electrophoretic separation of carbohydrates was achieved using 200 mM borate solution as a buffer at pH 9.5 and applying 15 kV. During separation, the capillary temperature was kept constant at 40 degrees C. For the on-capillary cleanup method, a solution containing 50 mg/L of c-SWNTs prepared in 300 mM borate solution at pH 9.5 was introduced for 60 s into the capillary just before sample introduction. For the electrophoretic analysis of samples cleaned in batch with diatomaceous earth, it is also recommended to introduce into the capillary, just before the sample, a 300 mM borate solution as it enhances the sensitivity and electrophoretic resolution.

  1. Sample processing method for the determination of perchlorate in milk

    International Nuclear Information System (INIS)

    Dyke, Jason V.; Kirk, Andrea B.; Kalyani Martinelango, P.; Dasgupta, Purnendu K.

    2006-01-01

    In recent years, many different water sources and foods have been reported to contain perchlorate. Studies indicate that significant levels of perchlorate are present in both human and dairy milk. The determination of perchlorate in milk is particularly important due to its potential health impact on infants and children. As for many other biological samples, sample preparation is more time consuming than the analysis itself. The concurrent presence of large amounts of fats, proteins, carbohydrates, etc., demands some initial cleanup; otherwise the separation column lifetime and the limit of detection are both greatly compromised. Reported milk processing methods require the addition of chemicals such as ethanol, acetic acid or acetonitrile. Reagent addition is undesirable in trace analysis. We report here an essentially reagent-free sample preparation method for the determination of perchlorate in milk. Milk samples are spiked with isotopically labeled perchlorate and centrifuged to remove lipids. The resulting liquid is placed in a disposable centrifugal ultrafilter device with a molecular weight cutoff of 10 kDa, and centrifuged. Approximately 5-10 ml of clear liquid, ready for analysis, is obtained from a 20 ml milk sample. Both bovine and human milk samples have been successfully processed and analyzed by ion chromatography-mass spectrometry (IC-MS). Standard addition experiments show good recoveries. The repeatability of the analytical result for the same sample in multiple sample cleanup runs ranged from 3 to 6% R.S.D. This processing technique has also been successfully applied for the determination of iodide and thiocyanate in milk

  2. Sensor-triggered sampling to determine instantaneous airborne vapor exposure concentrations.

    Science.gov (United States)

    Smith, Philip A; Simmons, Michael K; Toone, Phillip

    2018-06-01

    It is difficult to measure transient airborne exposure peaks by means of integrated sampling for organic chemical vapors, even with very short-duration sampling. Selection of an appropriate time to measure an exposure peak through integrated sampling is problematic, and short-duration time-weighted average (TWA) values obtained with integrated sampling are not likely to accurately determine actual peak concentrations attained when concentrations fluctuate rapidly. Laboratory analysis for integrated exposure samples is preferred from a certainty standpoint over results derived in the field from a sensor, as a sensor user typically must overcome specificity issues and a number of potential interfering factors to obtain similarly reliable data. However, sensors are currently needed to measure intra-exposure period concentration variations (i.e., exposure peaks). In this article, the digitized signal from a photoionization detector (PID) sensor triggered collection of whole-air samples when toluene or trichloroethylene vapors attained pre-determined levels in a laboratory atmosphere generation system. Analysis by gas chromatography-mass spectrometry of whole-air samples (with both 37 and 80% relative humidity) collected using the triggering mechanism with rapidly increasing vapor concentrations showed good agreement with the triggering set point values. Whole-air samples (80% relative humidity) in canisters demonstrated acceptable 17-day storage recoveries, and acceptable precision and bias were obtained. The ability to determine exceedance of a ceiling or peak exposure standard by laboratory analysis of an instantaneously collected sample, and to simultaneously provide a calibration point to verify the correct operation of a sensor was demonstrated. This latter detail may increase the confidence in reliability of sensor data obtained across an entire exposure period.
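
    The instrument control used in the study is not described in code in the abstract. The sketch below is only a schematic of the triggering logic, with a simulated_ppm() generator standing in for the PID readout and a print statement standing in for actuation of the whole-air sampler; both are hypothetical stand-ins.

```python
# Schematic of sensor-triggered whole-air sampling: poll a PID-type reading and
# collect an instantaneous sample once the reading crosses a preset trigger level.
import itertools
import time

TRIGGER_PPM = 50.0  # pre-determined trigger set point (hypothetical)

def simulated_ppm():
    """Stand-in for the PID sensor readout: a concentration ramping up over time."""
    for t in itertools.count():
        yield 5.0 + 2.5 * t

def run_trigger_loop(readings, poll_interval_s=0.0):
    """Trigger the (hypothetical) canister valve when a reading crosses the set point."""
    for level in readings:
        if level >= TRIGGER_PPM:
            print(f"trigger: collect whole-air sample at {level:.1f} ppm")
            return level
        time.sleep(poll_interval_s)

run_trigger_loop(simulated_ppm())
```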

  3. Visual accumulation tube for size analysis of sands

    Science.gov (United States)

    Colby, B.C.; Christensen, R.P.

    1956-01-01

    The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity. the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.

  4. Nanoparticle Analysis by Online Comprehensive Two-Dimensional Liquid Chromatography combining Hydrodynamic Chromatography and Size-Exclusion Chromatography with Intermediate Sample Transformation

    Science.gov (United States)

    2017-01-01

    Polymeric nanoparticles have become indispensable in modern society with a wide array of applications ranging from waterborne coatings to drug-carrier-delivery systems. While a large range of techniques exist to determine a multitude of properties of these particles, relating physicochemical properties of the particle to the chemical structure of the intrinsic polymers is still challenging. A novel, highly orthogonal separation system based on comprehensive two-dimensional liquid chromatography (LC × LC) has been developed. The system combines hydrodynamic chromatography (HDC) in the first-dimension to separate the particles based on their size, with ultrahigh-performance size-exclusion chromatography (SEC) in the second dimension to separate the constituting polymer molecules according to their hydrodynamic radius for each of 80 to 100 separated fractions. A chip-based mixer is incorporated to transform the sample by dissolving the separated nanoparticles from the first-dimension online in tetrahydrofuran. The polymer bands are then focused using stationary-phase-assisted modulation to enhance sensitivity, and the water from the first-dimension eluent is largely eliminated to allow interaction-free SEC. Using the developed system, the combined two-dimensional distribution of the particle-size and the molecular-size of a mixture of various polystyrene (PS) and polyacrylate (PACR) nanoparticles has been obtained within 60 min. PMID:28745485

  5. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Modern technologies such as the Internet, corporate intranets, data warehouses, ERP systems, satellites, digital sensors, embedded systems and mobile networks generate such massive amounts of data that it is becoming very difficult to analyze and understand them, even using data mining tools. Huge datasets pose a difficult challenge for classification algorithms: with increasing amounts of data, data mining algorithms become slower and analysis becomes less interactive. Sampling can be a solution, since it can often provide the same level of accuracy using only a fraction of the computing resources. The sampling process requires care, however, because many factors are involved in determining the correct sample size. The approach proposed in this paper addresses this problem: based on a statistical formula and a few user-set parameters, it returns a sample size, called the sufficient sample size, which is then drawn through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
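
    The paper's exact formula is not given in the abstract. As a generic illustration of formula-based "sufficient" sample sizes, the sketch below uses Cochran's sample-size formula for a proportion with a finite-population correction, followed by simple random sampling of that many record indices. The dataset size and parameters are hypothetical, not taken from the paper.

```python
# Cochran's sample-size formula for estimating a proportion, with finite-population
# correction, then simple random sampling of that many records from the dataset.
import math
import numpy as np
from scipy.stats import norm

N = 1_000_000        # hypothetical number of records in the dataset
p = 0.5              # worst-case class proportion
e = 0.01             # tolerated margin of error
z = norm.ppf(0.975)  # 95% confidence

n0 = z**2 * p * (1 - p) / e**2          # infinite-population sample size
n = math.ceil(n0 / (1 + (n0 - 1) / N))  # finite-population correction
print(f"sufficient sample size: {n} of {N} records")

rng = np.random.default_rng(42)
sample_idx = rng.choice(N, size=n, replace=False)  # probability (simple random) sample
```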

  6. Determinants of Urban Poverty: The Case of Medium Sized City in Pakistan

    OpenAIRE

    Masood Sarwar Awan; Nasir Iqbal

    2010-01-01

    Urban poverty, which is distinct from rural poverty in its demographic, economic and political aspects, has hitherto remained unexplored at the city level in Pakistan. We examine the determinants of urban poverty in Sargodha, a medium-sized city of Pakistan. The analysis is based on a survey of 330 households. Results suggest that employment in the public sector, investment in human capital and access to public amenities reduce poverty, while employment in the informal sector, greater household size ...

  7. Determination of technetium-99 in environmental samples: A review

    DEFF Research Database (Denmark)

    Shi, Keliang; Hou, Xiaolin; Roos, Per

    2012-01-01

    Due to the lack of a stable technetium isotope, and the high mobility and long half-life, 99Tc is considered to be one of the most important radionuclides in safety assessment of environmental radioactivity as well as nuclear waste management. 99Tc is also an important tracer for oceanographic...... research due to the high technetium solubility in seawater as TcO4−. A number of analytical methods, using chemical separation combined with radiometric and mass spectrometric measurement techniques, have been developed over the past decades for determination of 99Tc in different environmental samples....... This article summarizes and compares recently reported chemical separation procedures and measurement methods for determination of 99Tc. Due to the extremely low concentration of 99Tc in environmental samples, the sample preparation, pre-concentration, chemical separation and purification for removal...

  8. Sonographic determination of normal spleen size in an adult African population

    Energy Technology Data Exchange (ETDEWEB)

    Mustapha, Zainab; Tahir, Abdulrahman [Department of Radiology, University of Maiduguri Teaching Hospital, Maiduguri, Borno State (Nigeria); Tukur, Maisaratu [Department of Human Physiology, University of Maiduguri, Maiduguri, Borno State (Nigeria); Bukar, Mohammed [Department of Obstetrics and Gynaecology, University of Maiduguri Teaching Hospital, Maiduguri, Borno State (Nigeria); Lee, Wai-Kit, E-mail: leewk33@hotmail.co [Department of Medical Imaging, St. Vincent' s Hospital, University of Melbourne, 41 Victoria Parade, Fitzroy, Victoria 3065 (Australia)

    2010-07-15

    Objective: The purpose of this study was to determine the normal range of spleen size in an adult African population, and compare the findings to published data to determine any correlation with ethnicity. Materials and methods: Three hundred and seventy-four African adults without conditions that can affect the spleen or splenic abnormalities were evaluated with ultrasonography. Spleen length, width and thickness were measured and spleen volume calculated. Spleen size was correlated with age, gender, height, weight, and body mass index. Results: The mean spleen volume was 120 cm³. Spleen volume correlated with spleen width (r = 0.85), thickness (r = 0.83) and length (r = 0.80). Men had a larger mean spleen volume than women. No correlation was found between spleen volume and age, weight, height, or body mass index. Conclusion: Mean spleen volume in African adults is smaller than that reported in Western sources, and cannot be explained by differences in body habitus.
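
    The abstract does not state the volume formula used. A common sonographic convention is the prolate-ellipsoid approximation, volume ≈ length × width × thickness × 0.523; the sketch below applies that approximation to hypothetical measurements and is not necessarily the formula used in this particular study.

```python
# Prolate-ellipsoid approximation for spleen volume from three sonographic diameters.
# The 0.523 factor (~ pi/6) is the common convention, not necessarily this study's formula.
def spleen_volume_cm3(length_cm: float, width_cm: float, thickness_cm: float) -> float:
    return 0.523 * length_cm * width_cm * thickness_cm

# Hypothetical measurements (cm)
print(f"estimated spleen volume: {spleen_volume_cm3(10.5, 5.0, 4.4):.0f} cm^3")
```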

  9. Comparison of different methods for determining the size of a focal spot of microfocus X-ray tubes

    International Nuclear Information System (INIS)

    Salamon, M.; Hanke, R.; Krueger, P.; Sukowski, F.; Uhlmann, N.; Voland, V.

    2008-01-01

    The EN 12543-5 describes a method for determining the focal spot size of microfocus X-ray tubes up to a minimum spot size of 5 μm. The wide application of X-ray tubes with even smaller focal spot sizes in computed tomography and radioscopy applications requires the evaluation of existing methods for focal spot sizes below 5 μm. In addition, new methods and conditions for determining submicron focal spot sizes have to be developed. For the evaluation and extension of the present methods to smaller focal spot sizes, different procedures in comparison with the existing EN 12543-5 were analyzed and applied, and the results are presented

  10. Chefs' opinions of restaurant portion sizes.

    Science.gov (United States)

    Condrasky, Marge; Ledikwe, Jenny H; Flood, Julie E; Rolls, Barbara J

    2007-08-01

    The objectives were to determine who establishes restaurant portion sizes and factors that influence these decisions, and to examine chefs' opinions regarding portion size, nutrition information, and weight management. A survey was distributed to chefs to obtain information about who is responsible for determining restaurant portion sizes, factors influencing restaurant portion sizes, what food portion sizes are being served in restaurants, and chefs' opinions regarding nutrition information, health, and body weight. The final sample consisted of 300 chefs attending various culinary meetings. Executive chefs were identified as being primarily responsible for establishing portion sizes served in restaurants. Factors reported to have a strong influence on restaurant portion sizes included presentation of foods, food cost, and customer expectations. While 76% of chefs thought that they served "regular" portions, the actual portions of steak and pasta they reported serving were 2 to 4 times larger than serving sizes recommended by the U.S government. Chefs indicated that they believe that the amount of food served influences how much patrons consume and that large portions are a problem for weight control, but their opinions were mixed regarding whether it is the customer's responsibility to eat an appropriate amount when served a large portion of food. Portion size is a key determinant of energy intake, and the results from this study suggest that cultural norms and economic value strongly influence the determination of restaurant portion sizes. Strategies are needed to encourage chefs to provide and promote portions that are appropriate for customers' energy requirements.

  11. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (dr*; Hogarty & Kromrey, 2001), scaled robust d (dr; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (rpb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and a nonparametric estimator for CL (Aw; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that Aw and dr were generally robust to these violations, and Aw slightly outperformed dr. Implications for the use of Aw and dr in real-world research are discussed.
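
    As a quick reference for two of the measures discussed, the sketch below computes Cohen's d (pooled-SD version) and a nonparametric common-language-type effect size (the probability-of-superiority statistic, with ties counted as one half). This is a generic implementation, not the authors' simulation code, and Aw's specific estimation scheme is not reproduced here.

```python
# Cohen's d (pooled SD) and a nonparametric probability-of-superiority estimate
# for two independent samples; ties count as 1/2 in the A statistic.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def prob_superiority(x, y):
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (len(x) * len(y))

rng = np.random.default_rng(7)
g1 = rng.normal(0.5, 1.0, size=40)   # hypothetical treatment group
g2 = rng.normal(0.0, 1.5, size=60)   # hypothetical control group with unequal variance
print(f"d = {cohens_d(g1, g2):.2f}, A = {prob_superiority(g1, g2):.2f}")
```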

  12. 13 CFR 121.303 - What size procedures are used by SBA before it makes a formal size determination?

    Science.gov (United States)

    2010-01-01

    ... source. (b) A small business investment company, a development company, a surety bond company, or a... area in which the headquarters of the applicant is located, regardless of the location of the parent company or affiliates. For disaster loan assistance, the request for a size determination must be made to...

  13. Phylogeny determines flower size-dependent sex allocation at flowering in a hermaphroditic family.

    Science.gov (United States)

    Teixido, A L; Guzmán, B; Staggemeier, V G; Valladares, F

    2017-11-01

    In animal-pollinated hermaphroditic plants, optimal floral allocation determines relative investment into sexes, which is ultimately dependent on flower size. Larger flowers disproportionally increase maleness whereas smaller and less rewarding flowers favour female function. Although floral traits are considered strongly conserved, phylogenetic relationships in the interspecific patterns of resource allocation to floral sex remain overlooked. We investigated these patterns in Cistaceae, a hermaphroditic family. We reconstructed phylogenetic relationships among Cistaceae species and quantified phylogenetic signal for flower size, dry mass and nutrient allocation to floral structures in 23 Mediterranean species using Blomberg's K-statistic. Lastly, phylogenetically-controlled correlational and regression analyses were applied to examine flower size-based allometry in resource allocation to floral structures. Sepals received the highest dry mass allocation, followed by petals, whereas sexual structures increased nutrient allocation. Flower size and resource allocation to floral structures, except for carpels, showed a strong phylogenetic signal. Larger-flowered species allometrically allocated more resources to maleness, by increasing allocation to corollas and stamens. Our results suggest a major role of phylogeny in determining interspecific changes in flower size and subsequent floral sex allocation. This implies that flower size balances the male-female function over the evolutionary history of Cistaceae. While allometric resource investment in maleness is inherited across species diversification, allocation to the female function seems a labile trait that varies among closely related species that have diversified into different ecological niches. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.

  14. Descriptions of sampling practices within five approaches to qualitative research in education and the health sciences

    OpenAIRE

    Guetterman, Timothy C.

    2015-01-01

    Although recommendations exist for determining qualitative sample sizes, the literature appears to contain few instances of research on the topic. Practical guidance is needed for determining sample sizes to conduct rigorous qualitative research, to develop proposals, and to budget resources. The purpose of this article is to describe qualitative sample size and sampling practices within published studies in education and the health sciences by research design: case study, ethnography, ground...

  15. Trace determination of uranium in fertilizer samples by total ...

    Indian Academy of Sciences (India)

    the fertilizers is important because it can be used as fuel in nuclear reactors and also because of en- vironmental concerns. ... The amounts of uranium in four fertilizer samples of Hungarian origin were determined by ... TXRF determination of uranium from phosphate fertilizers of Hungarian origin and the preliminary results ...

  16. Determination of hydraulic conductivity from grain-size distribution for different depositional environments

    KAUST Repository

    Rosas, Jorge

    2013-06-06

    Over 400 unlithified sediment samples were collected from four different depositional environments in global locations and the grain-size distribution, porosity, and hydraulic conductivity were measured using standard methods. The measured hydraulic conductivity values were then compared to values calculated using 20 different empirical equations (e.g., Hazen, Carman-Kozeny) commonly used to estimate hydraulic conductivity from grain-size distribution. It was found that most of the hydraulic conductivity values estimated from the empirical equations correlated very poorly to the measured hydraulic conductivity values with errors ranging to over 500%. To improve the empirical estimation methodology, the samples were grouped by depositional environment and subdivided into subgroups based on lithology and mud percentage. The empirical methods were then analyzed to assess which methods best estimated the measured values. Modifications of the empirical equations, including changes to special coefficients and addition of offsets, were made to produce modified equations that considerably improve the hydraulic conductivity estimates from grain size data for beach, dune, offshore marine, and river sediments. Estimated hydraulic conductivity errors were reduced to 6 to 7.1m/day for the beach subgroups, 3.4 to 7.1m/day for dune subgroups, and 2.2 to 11m/day for offshore sediments subgroups. Improvements were made for river environments, but still produced high errors between 13 and 23m/day. © 2013, National Ground Water Association.

  17. Determination of hydraulic conductivity from grain-size distribution for different depositional environments

    KAUST Repository

    Rosas, Jorge; Lopez Valencia, Oliver Miguel; Missimer, Thomas M.; Coulibaly, Kapo M.; Dehwah, Abdullah; Sesler, Kathryn; Rodriguez, Luis R. Lujan; Mantilla, David

    2013-01-01

    Over 400 unlithified sediment samples were collected from four different depositional environments in global locations and the grain-size distribution, porosity, and hydraulic conductivity were measured using standard methods. The measured hydraulic conductivity values were then compared to values calculated using 20 different empirical equations (e.g., Hazen, Carman-Kozeny) commonly used to estimate hydraulic conductivity from grain-size distribution. It was found that most of the hydraulic conductivity values estimated from the empirical equations correlated very poorly to the measured hydraulic conductivity values with errors ranging to over 500%. To improve the empirical estimation methodology, the samples were grouped by depositional environment and subdivided into subgroups based on lithology and mud percentage. The empirical methods were then analyzed to assess which methods best estimated the measured values. Modifications of the empirical equations, including changes to special coefficients and addition of offsets, were made to produce modified equations that considerably improve the hydraulic conductivity estimates from grain size data for beach, dune, offshore marine, and river sediments. Estimated hydraulic conductivity errors were reduced to 6 to 7.1 m/day for the beach subgroups, 3.4 to 7.1 m/day for dune subgroups, and 2.2 to 11 m/day for offshore sediments subgroups. Improvements were made for river environments, but still produced high errors between 13 and 23 m/day. © 2013, National Ground Water Association.

  18. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
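
    A minimal sketch of the idea of a three-decision sequential probability ratio test on transmission counts is given below; the null and alternative transmission probabilities, the error rates and the example counts are hypothetical and do not reproduce the authors' exact formulation.

        # Minimal sketch of a sequential probability ratio test (SPRT) that classifies a
        # SNP as "associated", "not associated", or "keep sampling"; all parameters are
        # hypothetical illustrative choices.
        import math

        def sprt_decision(n_transmitted, n_total, p0=0.5, p1=0.6, alpha=0.05, beta=0.05):
            """Binomial SPRT on transmission counts (p0: null, p1: alternative)."""
            # log-likelihood ratio of the observed counts under H1 versus H0
            llr = (n_transmitted * math.log(p1 / p0)
                   + (n_total - n_transmitted) * math.log((1 - p1) / (1 - p0)))
            upper = math.log((1 - beta) / alpha)   # accept H1 above this bound
            lower = math.log(beta / (1 - alpha))   # accept H0 below this bound
            if llr >= upper:
                return "associated"
            if llr <= lower:
                return "not associated"
            return "keep sampling"

        if __name__ == "__main__":
            # e.g. 70 transmissions of the candidate allele out of 110 informative meioses
            print(sprt_decision(70, 110))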

  19. An Analytic Solution to the Computation of Power and Sample Size for Genetic Association Studies under a Pleiotropic Mode of Inheritance.

    Science.gov (United States)

    Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A

    2016-01-01

    Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
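
    The following sketch illustrates, in generic form, how a minimum sample size can be obtained for a 1-df chi-square statistic such as the linear trend test: the smallest N whose noncentrality parameter reaches the target power. The per-subject noncentrality value is a placeholder, not one of the paper's pleiotropic-model quantities.

        # Generic minimum-sample-size calculation for a 1-df chi-square test (e.g. a
        # linear trend test): find the smallest N whose noncentrality reaches the
        # target power. The per-subject noncentrality value is a hypothetical input.
        from scipy.stats import chi2, ncx2

        def mssn_1df(ncp_per_subject, alpha=0.05, power=0.80, n_max=100000):
            crit = chi2.ppf(1 - alpha, df=1)           # critical value under H0
            for n in range(2, n_max):
                achieved = 1 - ncx2.cdf(crit, df=1, nc=n * ncp_per_subject)
                if achieved >= power:
                    return n
            raise ValueError("target power not reached within n_max")

        if __name__ == "__main__":
            print(mssn_1df(ncp_per_subject=0.01))      # roughly 785 subjects for 80% power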

  20. Fundamental study on laser manipulation of contamination particles with determining shape, size and species

    International Nuclear Information System (INIS)

    Shimizu, Isao; Fujii, Taketsugu

    1995-01-01

    There is a need to remove or collect radioisotope contamination particles non-invasively, sorted by species, shape and size. The shape and size of a particle can be determined from the shape and distribution of its diffraction pattern in a parallel laser beam; its species can be discriminated by laser-induced resonance fluorescence or by laser Raman scattering; and a particle suspended in air or falling in a vacuum can be levitated against gravity and trapped by the radiation and trapping forces of a focused laser beam. For non-invasive manipulation of contamination particles, the experiments combined the laser manipulation technique, an image processing technique using a Multiplexed Matched Spatial Filter, and identification by laser Raman scattering or resonance fluorescence. The shape, size and species of particles trapped in the focal plane of a focused Ar laser beam can be determined simultaneously and instantaneously from the shape and intensity distributions of their diffraction patterns under irradiation by a parallel coherent He-Ne laser beam, together with the resonance fluorescence excited by a wavelength-tunable YAG laser beam. A new technique is thus proposed to manipulate non-invasively, by laser beam, contamination particles whose shape, size and species have been determined, in the atmosphere or in a vacuum. (author)

  1. An improved assay for the determination of Huntington's disease allele size

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, C.; Klinger, K.; Miller, G. [Integrated Genetics, Framingham, MA (United States)]

    1994-09-01

    The hallmark of Huntington's disease (HD) is the expansion of a polymorphic (CAG)n repeat. Several methods have been published describing PCR amplification of this region. Most of these assays require a complex PCR reaction mixture to amplify this GC-rich region. A consistent problem with trinucleotide repeat PCR amplification is the presence of a number of "stutter bands" which may be caused by primer or amplicon slippage during amplification or insufficient polymerase processivity. Most assays for HD arbitrarily select a particular band for diagnostic purposes. Without a clear choice for band selection such an arbitrary selection may result in inconsistent intra- or inter-laboratory findings. We present an improved protocol for the amplification of the HD trinucleotide repeat region. This method simplifies the PCR reaction buffer and results in a set of easily identifiable bands from which to determine allele size. HD alleles were identified by selecting bands of clearly greater signal intensity. Stutter banding was much reduced thus permitting easy identification of the most relevant PCR product. A second set of primers internal to the CCG polymorphism was used in selected samples to confirm allele size. The mechanism of action of N,N,N-trimethylglycine in the PCR reaction is not clear. It may be possible that the minimal isostabilizing effect of N,N,N-trimethylglycine at 2.5 M is significant enough to affect primer specificity. The use of N,N,N-trimethylglycine in the PCR reaction facilitated identification of HD alleles and may be appropriate for use in other assays of this type.

  2. Neighborhood size and local geographic variation of health and social determinants

    Directory of Open Access Journals (Sweden)

    Emch Michael

    2005-06-01

    Full Text Available Abstract Background Spatial filtering using a geographic information system (GIS) is often used to smooth health and ecological data. Smoothing disease data can help us understand local (neighborhood) geographic variation and ecological risk of diseases. Analyses that use small neighborhood sizes yield individualistic patterns and large sizes reveal the global structure of data where local variation is obscured. Therefore, choosing an optimal neighborhood size is important for understanding ecological associations with diseases. This paper uses Hartley's test of homogeneity of variance (Fmax) as a methodological solution for selecting optimal neighborhood sizes. The data from a study area in Vietnam are used to test the suitability of this method. Results The Hartley's Fmax test was applied to spatial variables for two enteric diseases and two socioeconomic determinants. Various neighbourhood sizes were tested by using a two-step process to implement the Fmax test. First the variance of each neighborhood was compared to the highest neighborhood variance (upper, Fmax1) and then they were compared with the lowest neighborhood variance (lower, Fmax2). A significant value of Fmax1 indicates that the neighborhood does not reveal the global structure of data, and in contrast, a significant value in Fmax2 implies that the neighborhood data are not individualistic. The neighborhoods that are between the lower and the upper limits are the optimal neighbourhood sizes. Conclusion The results of tests provide different neighbourhood sizes for different variables suggesting that optimal neighbourhood size is data dependent. In ecology, it is well known that observation scales may influence ecological inference. Therefore, selecting optimal neighborhood size is essential for understanding disease ecologies. The optimal neighbourhood selection method that is tested in this paper can be useful in health and ecological studies.
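
    A minimal sketch of the two-step variance comparison described above is given below; it only computes the Fmax1 and Fmax2 ratios for each candidate neighborhood, and significance would still have to be judged against Hartley's tabulated Fmax critical values. The data are synthetic.

        # Compute the two Fmax ratios described above for each candidate neighborhood
        # size; critical values for judging significance would come from Hartley's
        # published Fmax tables. The data here are synthetic.
        import numpy as np

        def fmax_ratios(neigh_values):
            """neigh_values: dict mapping neighborhood size -> smoothed values (1-D array).
            Returns size -> (Fmax1, Fmax2): its variance compared with the largest and
            the smallest neighborhood variance, respectively."""
            variances = {k: np.var(v, ddof=1) for k, v in neigh_values.items()}
            v_max, v_min = max(variances.values()), min(variances.values())
            return {k: (v_max / v, v / v_min) for k, v in variances.items()}

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            data = {size: rng.normal(scale=1.0 / size, size=50) for size in (1, 3, 5, 7, 9)}
            for size, (f1, f2) in fmax_ratios(data).items():
                print(f"neighborhood {size}: Fmax1 = {f1:.2f}, Fmax2 = {f2:.2f}")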

  3. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (RUMM) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  4. Determination of rhenium in geologic samples of sandstone-type uranium deposit

    International Nuclear Information System (INIS)

    Li Yanan; Wang Xiuqin; Yin Jinshuang

    1997-01-01

    A thiourea colorimetric method suitable for the determination of samples with rhenium contents higher than 5 μg/g is described. The method has several advantages: stable analytical results, simple and inexpensive reagents, and a wide range of analysable samples. Catalytic colorimetry is also applied to determine trace rhenium, meeting the demands of comprehensive appraisal in prospecting and exploration as well as recovery and utilization of rhenium. The method can also be applied to the analysis of rhenium in other samples.

  5. Is patient size important in dose determination and optimization in cardiology?

    International Nuclear Information System (INIS)

    Reay, J; Chapple, C L; Kotre, C J

    2003-01-01

    Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricle and/or coronary angiography, single vessel stent insertion and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization.
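
    One commonly used way of removing the patient-size effect from dose survey data is to assume that ln(dose) varies roughly linearly with patient weight and to normalise every record to a reference weight; the sketch below illustrates this with synthetic data and placeholder values (75 kg reference), and is not necessarily the exact method used in the study.

        # Illustrative size correction for dose survey data: fit ln(dose) = a + b*weight
        # and scale every measured dose to a reference-size patient. The slope and the
        # 75 kg reference are placeholders, not values from the study.
        import numpy as np

        def fit_size_coefficient(weights_kg, doses):
            """Least-squares slope b in ln(dose) = a + b * weight."""
            b, a = np.polyfit(weights_kg, np.log(doses), 1)
            return b

        def normalise_dose(dose, weight_kg, b, ref_kg=75.0):
            """Scale a measured dose to the dose expected for a reference-size patient."""
            return dose * np.exp(-b * (weight_kg - ref_kg))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            w = rng.uniform(50, 110, 200)
            dap = np.exp(3.0 + 0.02 * w + rng.normal(0, 0.4, 200))   # synthetic survey data
            b = fit_size_coefficient(w, dap)
            print("fitted size coefficient:", round(b, 3))
            print("normalised first record:", round(normalise_dose(dap[0], w[0], b), 1))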

  6. Determination of tritium in wine and wine yeast samples

    International Nuclear Information System (INIS)

    Cotarlea, Monica-Ionela; Paunescu, Niculina; Galeriu, D.; Mocanu, N.; Margineanu, R.; Marin, G.

    1997-01-01

    A sensitive method for evaluating the tritium content in wine and wine yeast was applied to estimate the tritium impact on the environment in the area surrounding the Cernavoda nuclear power plant, where vineyards are part of a representative agricultural ecosystem. Analytical procedures were developed to determine HTO in wine and wine yeast samples. The content of organic compounds affecting the LSC measurement is reduced by fractional distillation for wine samples and by azeotropic distillation followed by fractional distillation for wine yeast samples. Finally, the water samples obtained after fractional distillation were normally distilled with KMnO4. The established procedures were successfully applied to wine and wine yeast samples from the Murfatlar harvests of 1995 and 1996. (authors)

  7. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    Science.gov (United States)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative
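
    The practical effect of such an inverse sampling can be illustrated as follows: sampling plane hypotheses uniformly in inverse depth packs them densely near the camera and sparsely far away, in contrast to equidistant (linear) sampling. This sketch only shows the resulting plane spacing; the paper's cross-ratio construction, which derives the plane locations in image space, is not reproduced here.

        # Minimal sketch contrasting linear depth sampling with inverse (disparity-
        # linear) sampling of plane hypotheses: close to the camera the inverse
        # sampling packs planes tightly, far away it spreads them out.
        import numpy as np

        def linear_depth_planes(z_near, z_far, n):
            return np.linspace(z_near, z_far, n)

        def inverse_depth_planes(z_near, z_far, n):
            return 1.0 / np.linspace(1.0 / z_near, 1.0 / z_far, n)

        if __name__ == "__main__":
            lin = linear_depth_planes(2.0, 100.0, 8)
            inv = inverse_depth_planes(2.0, 100.0, 8)
            print("linear :", np.round(lin, 2))
            print("inverse:", np.round(inv, 2))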

  8. DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION

    Directory of Open Access Journals (Sweden)

    B. Ruf

    2017-08-01

    Full Text Available With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that

  9. Determining the size of nanoparticles in the example of magnetic iron oxide core-shell systems

    Science.gov (United States)

    Jarzębski, Maciej; Kościński, Mikołaj; Białopiotrowicz, Tomasz

    2017-08-01

    The size of nanoparticles is one of the most important factors for their possible applications. Various techniques for nanoparticle size characterization are available. In this paper, selected techniques are considered based on prepared core-shell magnetite nanoparticles. Magnetite is one of the most investigated and developed magnetic materials. It shows interesting magnetic properties which can be exploited for biomedical applications such as drug delivery and hyperthermia, and also as a contrast agent. To reduce the toxic effects of Fe3O4, the magnetic core was covered with dextran and gelatin. Moreover, the shell was doped with a fluorescent dye for confocal microscopy investigation. The main investigation focused on methods for particle size determination of modified magnetite nanoparticles prepared with different techniques. The size distributions were obtained by nanoparticle tracking analysis, dynamic light scattering and transmission electron microscopy. Furthermore, fluorescence correlation spectroscopy (FCS) and confocal microscopy were used to compare the results of particle size determination for the core-shell systems.

  10. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...
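
    The two quantities in question are conventionally estimated from an XRD pattern with the Segal height method (CrI) and the Scherrer equation (L200); the sketch below uses these textbook formulas with hypothetical diffractogram readings, which may differ from the exact procedure of the study.

        # Conventional XRD estimators for cellulose: Segal height method for the
        # crystallinity index and the Scherrer equation for crystallite size; the
        # peak intensities and FWHM used below are hypothetical readings.
        import math

        def segal_cri(i_200, i_am):
            """Segal crystallinity index [%] from the 200 peak and amorphous minimum."""
            return (i_200 - i_am) / i_200 * 100.0

        def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15418, K=0.9):
            """Crystallite size [nm] from peak broadening (FWHM in degrees 2-theta)."""
            beta = math.radians(fwhm_deg)                 # FWHM in radians
            theta = math.radians(two_theta_deg / 2.0)
            return K * wavelength_nm / (beta * math.cos(theta))

        if __name__ == "__main__":
            print("CrI  [%] :", round(segal_cri(i_200=2800.0, i_am=900.0), 1))
            print("L200 [nm]:", round(scherrer_size(fwhm_deg=2.4, two_theta_deg=22.7), 2))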

  11. Rapid and automated determination of plutonium and neptunium in environmental samples

    International Nuclear Information System (INIS)

    Qiao, J.

    2011-03-01

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development in this work consists of 5 subjects stated as follows: 1) Development and optimization of an SI-anion exchange chromatographic method for rapid determination of plutonium in environmental samples in combination of inductively coupled plasma mass spectrometry detection (Paper II); (2) Methodology development and optimization for rapid determination of plutonium in environmental samples using SI-extraction chromatography prior to inductively coupled plasma mass spectrometry (Paper III); (3) Development of an SI-chromatographic method for simultaneous determination of plutonium and neptunium in environmental samples (Paper IV); (4) Investigation of the suitability and applicability of 242 Pu as a tracer for rapid neptunium determination using anion exchange chromatography in an SI-network coupled with inductively coupled plasma mass spectrometry (Paper V); (5) Exploration of macro-porous anion exchange chromatography for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the developed methods in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations including environmental risk monitoring and assessment, emergency preparedness and surveillance of contaminated areas. (Author)

  12. Rapid and automated determination of plutonium and neptunium in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, J.

    2011-03-15

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development in this work consists of 5 subjects stated as follows: 1) Development and optimization of an SI-anion exchange chromatographic method for rapid determination of plutonium in environmental samples in combination of inductively coupled plasma mass spectrometry detection (Paper II); (2) Methodology development and optimization for rapid determination of plutonium in environmental samples using SI-extraction chromatography prior to inductively coupled plasma mass spectrometry (Paper III); (3) Development of an SI-chromatographic method for simultaneous determination of plutonium and neptunium in environmental samples (Paper IV); (4) Investigation of the suitability and applicability of 242Pu as a tracer for rapid neptunium determination using anion exchange chromatography in an SI-network coupled with inductively coupled plasma mass spectrometry (Paper V); (5) Exploration of macro-porous anion exchange chromatography for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the developed methods in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations including environmental risk monitoring and assessment, emergency preparedness and surveillance of contaminated areas. (Author)

  13. Determination of radioactivity in meat samples

    International Nuclear Information System (INIS)

    Malik, G.M.; Atta, M.A.; Shafiq, M.; Zafar, M.S.

    1993-01-01

    The presence of radionuclides in edibles can have harmful effects on the human body. It is therefore essential to monitor radioactivity in foodstuffs, especially in items available near nuclear installations. The radioactivity in meat samples obtained from the surroundings of the PINSTECH (Pakistan Institute of Nuclear Science and Technology) Complex has been determined using a high-resolution Ge(Li) gamma-ray spectrometer and a low-level beta counting system. The results show that the measured values of the radioactivity are below the maximum permissible levels. (author)

  14. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
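
    A minimal sketch of the parametric alternative described above, fitting a two-parameter GPD (lower bound fixed at zero, Hosking's parameterisation) to rainfall amounts by sample L-moments and evaluating a high quantile, is given below; the zero lower bound and the synthetic data are assumptions made for illustration.

        # Fit a generalized Pareto distribution to positive rainfall amounts with sample
        # L-moments and evaluate a high quantile. Two-parameter GPD with the lower bound
        # fixed at zero, in Hosking's parameterisation (shape kappa, scale alpha).
        import numpy as np

        def sample_l_moments(x):
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            b0 = x.mean()
            b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
            return b0, 2.0 * b1 - b0                      # l1, l2

        def fit_gpd_lmom(x):
            l1, l2 = sample_l_moments(x)
            kappa = l1 / l2 - 2.0
            alpha = (1.0 + kappa) * l1
            return kappa, alpha

        def gpd_quantile(p, kappa, alpha):
            """Quantile x(p) = alpha/kappa * (1 - (1 - p)**kappa), kappa != 0."""
            return alpha / kappa * (1.0 - (1.0 - p) ** kappa)

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            rain = rng.gamma(shape=0.8, scale=4.0, size=60)   # synthetic hourly rainfall [mm]
            kappa, alpha = fit_gpd_lmom(rain)
            print("99.9% quantile [mm]:", round(gpd_quantile(0.999, kappa, alpha), 1))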

  15. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  16. Determination of nitrite, nitrate and total nitrogen in vegetable samples

    Directory of Open Access Journals (Sweden)

    Manas Kanti Deb

    2007-04-01

    Full Text Available The yellow diazonium cation formed by reaction of nitrite with 6-amino-1-naphthol-3-sulphonic acid is coupled with β-naphthol in a strongly alkaline medium to yield a pink-coloured azo dye. The azo dye shows an absorption maximum at 510 nm with a molar absorptivity of 2.5 × 10⁴ M⁻¹ cm⁻¹. The dye product obeys Beer's law (correlation coefficient = 0.997) in terms of nitrite concentration up to 2.7 μg NO₂ mL⁻¹. The colour reaction has been applied successfully to the determination of nitrite, nitrate and total nitrogen in vegetable samples. Unreduced samples give a direct measure of nitrite, whilst reduction of samples on a copperized-cadmium column gives the total nitrogen content, and their difference gives the nitrate content of the samples. A variety of vegetables has been tested for N-content (NO2-/NO3-/total N), with % RSD ranging between 1.5 and 2.5% for nitrite determination. The effects of foreign ions on the determination of nitrite, nitrate and total nitrogen have been studied. Statistical comparison of the results with those of a reported method shows good agreement and indicates no significant difference in precision.
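
    As a worked example of the calibration above, an absorbance reading can be converted to a nitrite concentration with the Beer-Lambert law, assuming a 1 cm path length (an assumption, since the cell length is not stated).

        # Worked Beer-Lambert conversion from absorbance at 510 nm to nitrite
        # concentration, using the molar absorptivity quoted above and an assumed
        # 1 cm cuvette path length.
        EPSILON = 2.5e4        # molar absorptivity [L mol^-1 cm^-1]
        MW_NO2 = 46.005        # molar mass of nitrite [g/mol]

        def nitrite_ug_per_ml(absorbance, path_cm=1.0):
            molar = absorbance / (EPSILON * path_cm)       # mol/L
            return molar * MW_NO2 * 1e6 / 1000.0           # g/L -> ug/mL

        if __name__ == "__main__":
            # an absorbance of 0.45 corresponds to about 0.83 ug nitrite per mL,
            # well inside the stated linear range (up to 2.7 ug/mL)
            print(round(nitrite_ug_per_ml(0.45), 3), "ug nitrite per mL")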

  17. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  18. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    International Nuclear Information System (INIS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-01-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers

  19. Determination of Phenols in Water Samples using a Supported ...

    African Journals Online (AJOL)

    The sample preparation method was tested for the determination of phenols in river water samples and landfill leachate. Concentrations of phenols in river water were found to be in the range 4.2 μg L–1 for 2-chlorophenol to 50 μg L–1 for 4-chlorophenol. In landfill leachate, 4-chlorophenol was detected at a concentration ...

  20. Determination of the stoichiometric ratio in uranium dioxide samples

    International Nuclear Information System (INIS)

    Moura, Sergio Carvalho

    1999-01-01

    The O/U stoichiometric ratio in uranium dioxide is an important parameter for qualifying nuclear fuels. Excess oxygen in the crystallographic structure can change the physico-chemical properties of this compound, such as its thermal conductivity and fuel plasticity, affecting the efficiency of the material when it is used as nuclear fuel in the reactor core. The purpose of this work is to evaluate methods for determining the O/U ratio in uranium oxide samples from two different production processes, using gravimetric, voltammetric and X-ray diffraction techniques, and, after this evaluation, to define a reliable methodology for characterizing the behavior of uranium oxide. The methodology consisted of two steps: the use of gravimetric and voltammetric methods to determine the ratio in uranium dioxide samples, and the use of the X-ray diffraction technique to determine the lattice parameters using patterns, with application of the Rietveld method in the refinement of the structural data. The experimental work showed that X-ray diffraction analysis performs better and detects the presence of more phases than the gravimetric and voltammetric techniques, which are not sensitive enough for this detection. (author)

  1. Evaluation of 1H NMR relaxometry for the assessment of pore size distribution in soil samples

    NARCIS (Netherlands)

    Jaeger, F.; Bowe, S.; As, van H.; Schaumann, G.E.

    2009-01-01

    1H NMR relaxometry is used in earth science as a non-destructive and time-saving method to determine pore size distributions (PSD) in porous media with pore sizes ranging from nm to mm. This is a broader range than generally reported for results from X-ray computed tomography (X-ray CT) scanning,

  2. Determination of phosphorus in small amounts of protein samples by ICP-MS.

    Science.gov (United States)

    Becker, J Sabine; Boulyga, Sergei F; Pickhardt, Carola; Becker, J; Buddrus, Stefan; Przybylski, Michael

    2003-02-01

    Inductively coupled plasma mass spectrometry (ICP-MS) is used for phosphorus determination in protein samples. A small amount of solid protein sample (down to 1 μg) or digest (1-10 μL) protein solution was denatured in nitric acid and hydrogen peroxide by closed-microvessel microwave digestion. Phosphorus determination was performed with an optimized analytical method using a double-focusing sector field inductively coupled plasma mass spectrometer (ICP-SFMS) and quadrupole-based ICP-MS (ICP-QMS). For quality control of phosphorus determination a certified reference material (CRM), single cell proteins (BCR 273) with a high phosphorus content of 26.8 ± 0.4 mg g-1, was analyzed. For studies on phosphorus determination in proteins while reducing the sample amount as low as possible the homogeneity of CRM BCR 273 was investigated. Relative standard deviation and measurement accuracy in ICP-QMS was within 2%, 3.5%, 11% and 12% when using CRM BCR 273 sample weights of 40 mg, 5 mg, 1 mg and 0.3 mg, respectively. The lowest possible sample weight for an accurate phosphorus analysis in protein samples by ICP-MS is discussed. The analytical method developed was applied for the analysis of homogeneous protein samples in very low amounts [1-100 μg of solid protein sample, e.g. beta-casein, or down to 1 μL of protein or digest in solution (e.g., tau protein)]. A further reduction of the diluted protein solution volume was achieved by the application of flow injection in ICP-SFMS, which is discussed with reference to real protein digests after protein separation using 2D gel electrophoresis. The detection limits for phosphorus in biological samples were determined by ICP-SFMS down to the ng g-1 level. The present work discusses the figure of merit for the determination of phosphorus in a small amount of protein sample with ICP-SFMS in comparison to ICP-QMS.

  3. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
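
    To illustrate the planning logic, the sketch below searches for the smallest sample size whose reliability confidence interval is narrower than a desired width; as a stand-in for the composite-reliability interval used in the paper, it uses the classical Feldt F-based interval for coefficient alpha, and the assumed population reliability, item count and target width are placeholders.

        # Sample-size planning for a narrow reliability confidence interval, using the
        # Feldt F-based interval for coefficient alpha as an illustrative stand-in for
        # the composite-reliability interval discussed above.
        from scipy.stats import f

        def feldt_ci(alpha_hat, n, k, conf=0.95):
            """Feldt confidence interval for coefficient alpha (n subjects, k items)."""
            g = 1.0 - conf
            dfn, dfd = n - 1, (n - 1) * (k - 1)
            lower = 1.0 - (1.0 - alpha_hat) * f.ppf(1.0 - g / 2.0, dfn, dfd)
            upper = 1.0 - (1.0 - alpha_hat) * f.ppf(g / 2.0, dfn, dfd)
            return lower, upper

        def n_for_width(alpha_pop, k, width, conf=0.95, n_max=10000):
            """Smallest N whose interval width (at the assumed population alpha) <= width."""
            for n in range(10, n_max):
                lo, hi = feldt_ci(alpha_pop, n, k, conf)
                if hi - lo <= width:
                    return n
            raise ValueError("desired width not reached within n_max")

        if __name__ == "__main__":
            # e.g. an 8-item scale, assumed population reliability 0.85, target width 0.10
            print(n_for_width(alpha_pop=0.85, k=8, width=0.10))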

  4. Preconcentration and determination of heavy metals in water, sediment and biological samples

    Directory of Open Access Journals (Sweden)

    Shirkhanloo Hamid

    2011-01-01

    Full Text Available In this study, a simple, sensitive and accurate column preconcentration method was developed for the determination of Cd, Cu and Pb ions in river water, urine and sediment samples by flame atomic absorption spectrometry. The procedure is based on the retention of the analytes on a mixed cellulose ester membrane (MCEM) column from buffered sample solutions and then their elution from the column with nitric acid. Several parameters, such as pH of the sample solution, volume of the sample and eluent and flow rates of the sample were evaluated. The effects of diverse ions on the preconcentration were also investigated. The recoveries were >95 %. The developed method was applied to the determination of trace metal ions in river water, urine and sediment samples, with satisfactory results. The 3σ detection limits for Cu, Pb and Cd were found to be 2, 3 and 0.2 μg dm−3, respectively. The presented procedure was successfully applied for determination of the copper, lead and cadmium contents in real samples, i.e., river water and biological samples.

  5. Quantitative method of determining beryllium or a compound thereof in a sample

    Science.gov (United States)

    McCleskey, T. Mark; Ehler, Deborah S.; John, Kevin D.; Burrell, Anthony K.; Collis, Gavin E.; Minogue, Edel M.; Warner, Benjamin P.

    2010-08-24

    A method of determining beryllium or a compound thereof in a sample includes providing a sample suspected of comprising beryllium or a compound thereof, extracting beryllium or a compound thereof from the sample by dissolving it in a solution, adding a fluorescent indicator to the solution to thereby bind any beryllium or a compound thereof to the fluorescent indicator, and determining the presence or amount of any beryllium or a compound thereof in the sample by measuring fluorescence.

  6. Determination of uranium in liquid samples

    International Nuclear Information System (INIS)

    Macefat, Martina R.; Grahek, Zeljko; Ivsic, Astrid G.

    2008-01-01

    Full text: Uranium is a naturally occurring radionuclide and the first member of the natural radioactive decay chains, which makes its determination in natural materials interesting from the geochemical and radioecological points of view. It can be quantitatively determined as an element and/or through its radioisotopes by different spectrometric methods (ICP-MS, spectrophotometry, alpha spectrometry). It is necessary to develop inexpensive, rapid and sensitive methods for routine analysis. Therefore, this paper describes the development of a new method for the isolation of uranium from liquid samples and its subsequent determination by spectrophotometry and ICP-MS. Uranium can be isolated from drinking water and seawater using extraction chromatography or mixed-solvent ion exchange. It is strongly bound on the TRU extraction chromatographic resin from nitric acid (chemical recovery is 100%) and can be separated from other interfering elements, while separation from thorium, which is also strongly bound on this resin, is possible with hydrochloric acid. It is also possible to separate uranium from thorium on the anion exchanger Amberlite CG-400 (NO3- form), because uranium is much more weakly bound on this exchanger from alcoholic solutions of nitric acid. After the separation, uranium is determined by ICP-MS and by a spectrophotometric method with arsenazo III (λmax = 652 nm). The developed method enables selection of the optimal mode of isolation for the given purpose. (author)

  7. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  8. Size exclusion chromatography with online ICP-MS enables molecular weight fractionation of dissolved phosphorus species in water samples.

    Science.gov (United States)

    Venkatesan, Arjun K; Gan, Wenhui; Ashani, Harsh; Herckes, Pierre; Westerhoff, Paul

    2018-04-15

    Phosphorus (P) is an important and often limiting element in terrestrial and aquatic ecosystems. A lack of understanding of its distribution and structures in the environment limits the design of effective P mitigation and recovery approaches. Here we developed a robust method employing size exclusion chromatography (SEC) coupled to an ICP-MS to determine the molecular weight (MW) distribution of P in environmental samples. The most abundant fraction of P varied widely in different environmental samples: (i) orthophosphate was the dominant fraction (93-100%) in one lake, two aerosols and DOC isolate samples, (ii) species of 400-600 Da range were abundant (74-100%) in two surface waters, and (iii) species of 150-350 Da range were abundant in wastewater effluents. SEC-DOC of the aqueous samples using a similar SEC column showed overlapping peaks for the 400-600 Da species in two surface waters, and for >20 kDa species in the effluents, suggesting that these fractions are likely associated with organic matter. The MW resolution and performance of SEC-ICP-MS agreed well with the time-integrated results obtained using the conventional ultrafiltration method. Results show that SEC in combination with ICP-MS and DOC has the potential to be a powerful and easy-to-use method in identifying unknown fractions of P in the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Determination of thiobencarb in water samples by gas ...

    African Journals Online (AJOL)

    Homogeneous liquid-liquid microextraction via flotation assistance (HLLME-FA) coupled with gas chromatography-flame ionization detection (GC-FID) was applied for the extraction and determination of thiobencarb in water samples. In this study, a special extraction cell was designed to facilitate collection of the ...

  10. Application of immunoaffinity columns for preparation of different food item samples in mycotoxin determination

    Directory of Open Access Journals (Sweden)

    Ćurčić Marijana

    2016-01-01

    Full Text Available In analytical methods used for mycotoxin monitoring, special attention is paid to sample preparation. The objective of this study was therefore to test the efficiency of immunoaffinity columns (IAC), which are based on solid phase extraction principles, used for sample preparation in determining aflatoxins and ochratoxins. Aflatoxin and ochratoxin concentrations were determined in a total of 56 samples of food items: wheat, corn, rice, barley and other grains (19 samples), flour and flour products from grain and additives for the bakery industry (7 samples), fruits and vegetables (3 samples), hazelnut, walnut, almond and coconut flour (4 samples), roasted cocoa beans, peanuts, tea and coffee (16 samples), spices (4 samples) and meat and meat products (4 samples). The results indicate the advantage of using IAC for sample preparation, based on enhanced specificity due to binding of the extracted molecules to the incorporated specific antibodies and rinsing away the remaining molecules in the sample which could interfere with further analysis. An additional advantage is the use of small amounts of organic solvents and consequently decreased exposure of the staff who carry out mycotoxin determination. Of special interest is the increase in method sensitivity, since the limit of quantification of the aflatoxin and ochratoxin determination method is lower than the maximum allowed concentration of these toxins prescribed by the national rulebook.

  11. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
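
    The dependence of PPV on sample size works through statistical power; the following worked expression shows how PPV follows from power, the alpha level and the prior probability that a tested effect is real. The prior used here is hypothetical and not taken from the study.

        # Positive predictive value of a significant result as a function of power,
        # the alpha level and the prior probability that the tested effect is real.
        # The prior is a hypothetical illustrative value.
        def ppv(power, alpha=0.05, prior=0.2):
            return (power * prior) / (power * prior + alpha * (1.0 - prior))

        if __name__ == "__main__":
            for power in (0.02, 0.30, 0.80):
                print(f"power={power:.2f}  PPV={ppv(power):.2f}")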

  12. Determination of uranium and its isotopic ratios in environmental samples

    International Nuclear Information System (INIS)

    Flues Szeles, M.S.M.

    1990-01-01

    A method for the determination of uranium and its isotopic ratios (²³⁵U/²³⁸U and ²³⁴U/²³⁸U) is established in the present work. The method can be applied in environmental monitoring programs of uranium enrichment facilities. The proposed method is based on the alpha spectrometry technique, which is applied after purification of the sample using an ion exchange resin. The total yield achieved was (91 ± 5)%, with a precision of 5%, an accuracy of 8% and a lower limit of detection of 7.9 × 10⁻⁴ Bq. The determination of uranium in samples containing a high concentration of iron, an interfering element present in environmental samples, particularly in soil and sediment, was also studied. The results obtained using artificial samples containing iron and uranium in the ratio 1000:1 were considered satisfactory. (author)

  13. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Contents: Exposure to Sampling (Concepts of Population, Sample, and Sampling); Initial Ramifications (Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem); More Intricacies (Unequal Probability Sampling Strategies; PPS Sampling); Exploring Improved Ways (Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling); Modeling (Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...)

  14. Choice of Sample Split in Out-of-Sample Forecast Evaluation

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Timmermann, Allan

    Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values ... while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined.

  15. Fast and effective determination of strontium-90 in high volumes water samples

    International Nuclear Information System (INIS)

    Basarabova, B.; Dulanska, S.

    2014-01-01

    A simple and fast method was developed for the determination of 90Sr in high-volume water samples from the vicinity of nuclear power facilities. Samples were taken from the environment near the nuclear power plants at Jaslovske Bohunice and Mochovce in Slovakia. Solid phase extraction with the commercial sorbent AnaLig(R) Sr-01 from IBC Advanced Technologies, Inc. was used for the determination of 90Sr. The determination was performed from a dilute HNO3 solution (1.5-2 M) and was also tested in a basic medium with NaOH. 90Sr was eluted with EDTA at a pH in the range 8-9. To achieve fast determination, automation was applied, which brings a significant reduction of the separation time. Concentration of the water samples by evaporation was not necessary; separation was performed immediately after filtration of the analyzed samples. The aim of this study was the development of a less expensive, time-saving and energy-saving method for the determination of 90Sr in comparison with conventional methods. The separation time for fast flow of a 10 dm3 water sample was 3.5 hours (flow rate approximately 3.2 dm3 per hour). The radiochemical strontium yield was traced using the radionuclide 85Sr. Samples were measured with an HPGe (high-purity germanium) detector at the energy Eγ = 514 keV. Using AnaLig(R) Sr-01, yields in the range 72-96% were achieved. Separation based on solid phase extraction using AnaLig(R) Sr-01 combined with automation offers a new, fast and effective method for the determination of 90Sr in a water matrix. After ingrowth of yttrium, samples were measured with a Packard TriCarb 2900 TR liquid scintillation spectrometer with QuantaSmart software. (authors)

  16. Two media method for linear attenuation coefficient determination of irregular soil samples

    International Nuclear Information System (INIS)

    Vici, Carlos Henrique Georges

    2004-01-01

    In several nuclear applications, such as soil physics and geology, knowledge of the gamma-ray linear attenuation coefficient of irregular samples is necessary. This work presents the validation of a methodology for the determination of the linear attenuation coefficient (μ) of irregularly shaped samples in such a way that it is not necessary to know the thickness of the sample. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, north of Parana, were studied. The two-media method was employed for the determination of μ: it consists of measuring the attenuation of a gamma-ray beam by the sample sequentially immersed in two different media with known and appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXcom code, and the measured density of the sample. This software uses the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two-media method against the simple gamma-ray transmission method, regular pumice stone samples were used. With these results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two-media method is a good tool for the determination of the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
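
    The idea of the two-media measurement can be sketched by solving the two transmission equations for the unknown sample thickness and for μ; the sketch below assumes a fixed total beam path L through the container, which is a simplification of the actual experimental geometry, and the intensities and media coefficients are hypothetical.

        # Sketch of the two-media idea: the sample (unknown thickness x) is measured
        # immersed in two media of known attenuation coefficients mu1 and mu2, and the
        # two transmission equations are solved for x and for mu_sample. A fixed total
        # beam path L through the container is assumed (a simplification).
        import math

        def two_media_mu(I0, I1, I2, mu1, mu2, L_cm):
            """I1, I2: intensities transmitted with the sample in medium 1 and medium 2."""
            medium_path = math.log(I2 / I1) / (mu1 - mu2)   # L - x, path through the medium
            x = L_cm - medium_path                          # sample thickness
            mu_sample = (math.log(I0 / I1) - mu1 * medium_path) / x
            return mu_sample, x

        if __name__ == "__main__":
            # hypothetical numbers: medium 1 mu = 0.086 1/cm, medium 2 mu = 0.100 1/cm,
            # container path 10 cm; intensities chosen for a 5 cm sample with mu = 0.15 1/cm
            mu_s, x = two_media_mu(I0=10000.0, I1=3073.0, I2=2865.0,
                                   mu1=0.086, mu2=0.100, L_cm=10.0)
            print(f"mu_sample = {mu_s:.3f} 1/cm, thickness = {x:.2f} cm")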

  17. Optical methods for microstructure determination of doped samples

    Science.gov (United States)

    Ciosek, Jerzy F.

    2008-12-01

    Optical methods for determining the refractive index profile of layered materials are commonly used together with spectroscopic ellipsometry or transmittance/reflectance spectrometry. Measurements of spectral reflection and transmission usually permit the characterization of optical materials and the determination of their refractive index. However, it is also possible to characterize samples with dopants, impurities and defects using optical methods. The microstructures of a hydrogenated crystalline Si wafer and of a layer of SiO2-ZrO2 composition are investigated. The first sample is a Si(001):H Czochralski-grown single-crystalline wafer with a 50 nm thick surface SiO2 layer. Hydrogen dose implantation continues to be an important issue in microelectronic device and sensor fabrication. Hydrogen-implanted silicon (Si:H) has become a topic of remarkable interest, mostly because of the potential of implantation-induced platelets and micro-cavities for the creation of gettering-active areas and for Si layer splitting. Oxygen precipitation and atmospheric impurities are analysed. The second sample is a layer of SiO2 and ZrO2 co-evaporated using two electron-beam guns simultaneously in a reactive evaporation method. The composition and structure were investigated by X-ray photoelectron spectroscopy (XPS) and spectroscopic ellipsometry. The non-uniformity and composition of the layer are analysed using an average density method.
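
    As a rough illustration of how a reflectance spectrum constrains film thickness and refractive index for a structure such as a thin SiO2 layer on Si, the sketch below evaluates the standard normal-incidence thin-film (Airy) interference formula for a single layer on a substrate. This is not the author's procedure; dispersion is ignored and the indices and thickness are illustrative assumptions.

    ```python
    import numpy as np

    def film_reflectance(wavelength_nm, n0=1.0, n1=1.46, n2=3.9, d_nm=50.0):
        """Normal-incidence reflectance of a single film (n1, d) on a substrate (n2)."""
        r01 = (n0 - n1) / (n0 + n1)                        # ambient/film Fresnel coefficient
        r12 = (n1 - n2) / (n1 + n2)                        # film/substrate Fresnel coefficient
        beta = 2.0 * np.pi * n1 * d_nm / wavelength_nm     # one-pass phase thickness
        r = (r01 + r12 * np.exp(-2j * beta)) / (1.0 + r01 * r12 * np.exp(-2j * beta))
        return np.abs(r) ** 2

    wavelengths = np.linspace(300.0, 800.0, 6)             # a few wavelengths in nm
    print([round(R, 3) for R in film_reflectance(wavelengths)])
    ```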

  18. Determination of the growth restriction factor and grain size for aluminum alloys by a quasi-binary equivalent method

    International Nuclear Information System (INIS)

    Mitrašinović, A.M.; Robles Hernández, F.C.

    2012-01-01

    Highlights: ► A new method to determine the growth restricting factor (Q) is proposed. ► The proposed method is highly accurate (R2 = 0.99) and simple. ► A major novelty of this method is the determination of Q for non-dilute samples. ► The method proposed herein is based on quasi-binary phase diagrams and composition. ► This method can be easily implemented industrially or as a research tool. - Abstract: In the present research paper, a new methodology is suggested to determine the growth restricting factor (Q) and grain size (GS) of various Al alloys. The method combines a thermodynamic component, based on the liquidus behaviour of each alloying element, with the well-known growth restricting models for multi-component alloys. This approach can be used to determine Q and/or GS from the chemical composition and the slope of the liquidus temperature of any Al alloy solidified under close-to-equilibrium conditions. The method can be further modified to assess the effect of cooling rate or thermomechanical processing on the growth restricting factor and grain size. A highly accurate (R2 = 0.99) and validated model is proposed for Al–Si alloys, but it can be adapted to any other Al–X alloying system. The present method can be used for alloys with relatively high solute content and, because it uses the thermodynamics of the liquidus, it accounts for the poisoning effects of single and multi-component alloying elements.
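
    For context, the conventional dilute-limit definition that the paper generalizes is Q = Σ_i m_i·c0_i·(k_i − 1), with m_i the liquidus slope, c0_i the nominal solute content and k_i the partition coefficient of element i. The sketch below evaluates that textbook expression, not the paper's quasi-binary equivalent method; the (m, k) values are typical literature figures for solutes in aluminium and are stated only as assumptions.

    ```python
    # element: (liquidus slope m [K/wt.%], partition coefficient k) - assumed literature values
    SOLUTE_DATA = {
        "Si": (-6.6, 0.11),
        "Ti": (33.3, 9.0),
        "Mg": (-6.2, 0.51),
        "Cu": (-3.4, 0.17),
    }

    def growth_restricting_factor(composition_wt_pct):
        """Dilute-limit Q for an Al alloy given a {element: wt.%} composition."""
        q = 0.0
        for element, c0 in composition_wt_pct.items():
            m, k = SOLUTE_DATA[element]
            q += m * c0 * (k - 1.0)
        return q

    # Example: a hypothetical Al-7Si-0.1Ti composition.
    print(f"Q = {growth_restricting_factor({'Si': 7.0, 'Ti': 0.1}):.1f} K")
    ```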

  19. Determination of gamma emitting radionuclides in environmental air and precipitation samples with a Ge(Li) detector

    International Nuclear Information System (INIS)

    Hoetzl, H.; Rosner, G.; Winkler, R.; Sansoni, B.

    1977-01-01

    The concentrations of the radionuclides 7Be, 54Mn, 95Zr, 95Nb, 103Ru, 106Ru, 125Sb, 137Cs, 140Ba/140La, 141Ce and 144Ce in ground-level air, and of 7Be, 95Zr, 137Cs and 144Ce in precipitation, have been determined since 1970 and 1971, respectively, at Neuherberg, 10 km north of Munich, by gamma spectrometry using a 60 cm3 Ge(Li) detector. Dust samples were collected twice a month, 1 m above the ground, from about 40,000 m3 of air on 46 cm x 28 cm microsorbane filters and pressed into small cylinders of 35 cm3. The sensitivity of the procedure is of the order of 1 fCi/m3 for air samples and 10 pCi/m2 per month for precipitation samples at a counting time of 1500 min. (author)
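
    The record does not give the evaluation formula. A minimal sketch of the generic gamma-spectrometry relation used to turn a net photopeak count into an activity concentration (not the authors' exact calibration) is a = N_net / (ε · Iγ · t_live · V), with ε the full-energy peak efficiency, Iγ the gamma emission probability, t_live the live counting time and V the sampled air volume. The counts and efficiency below are hypothetical.

    ```python
    def activity_concentration(net_counts, efficiency, emission_prob,
                               live_time_s, air_volume_m3):
        """Activity concentration in Bq/m3 from a net photopeak count."""
        activity_bq = net_counts / (efficiency * emission_prob * live_time_s)
        return activity_bq / air_volume_m3

    # Example: 1500 min counting time, ~40,000 m3 of air, hypothetical 661.7 keV peak data.
    a = activity_concentration(net_counts=900, efficiency=0.02, emission_prob=0.851,
                               live_time_s=1500 * 60, air_volume_m3=40_000)
    print(f"{a:.2e} Bq/m3  (~{a / 0.037:.2e} pCi/m3)")  # 1 pCi = 0.037 Bq
    ```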

  20. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    Science.gov (United States)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of different sizes was investigated. Square samples with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples were analyzed in relation to their sizes and the impact energy. It is shown that the moisture saturation and the moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, taking into account the impact energy and the sample size.
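
    For reference, the baseline one-dimensional Fickian sorption solution for a plate of thickness h, which the authors' model extends with impact energy and size effects, is M(t)/M_inf = 1 − (8/π²) Σ_{n odd} (1/n²) exp(−n²π²Dt/h²). The sketch below evaluates this standard series; the diffusivity, thickness and equilibrium uptake are assumed values, not data from the paper.

    ```python
    import numpy as np

    def fickian_uptake(t_s, D_mm2_per_s, h_mm, m_inf, n_terms=50):
        """Moisture gain M(t) for a plate of thickness h_mm, classic Fickian series solution."""
        s = np.zeros_like(np.asarray(t_s, dtype=float))
        for n in range(1, 2 * n_terms, 2):                    # odd terms of the series
            s += np.exp(-((n * np.pi / h_mm) ** 2) * D_mm2_per_s * t_s) / n ** 2
        return m_inf * (1.0 - 8.0 / np.pi ** 2 * s)

    # Hypothetical values: 2 mm thick laminate, D = 5e-7 mm^2/s, equilibrium gain 1.2 wt.%
    days = np.array([1, 10, 30, 90]) * 86400.0
    print(fickian_uptake(days, D_mm2_per_s=5e-7, h_mm=2.0, m_inf=1.2))
    ```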